DIG AI: Uncensored Darknet AI Assistant at the Service of Criminals and Terrorists
Cyber Threat Intelligence
Introduction
Resecurity has identified the emergence of uncensored darknet AI assistants, enabling threat actors to leverage advanced data processing capabilities for malicious purposes. One of these - DIG AI - was identified on September 29 of this year and has already gained popularity among cybercriminal and organized crime circles. During Q4 2025, our HUNTER team observed a notable increase in malicious actors utilizing DIG AI, accelerating during the winter holidays, when illegal activity worldwide reached a new record. With major events scheduled for 2026, including the Winter Olympics in Milan and the FIFA World Cup, criminal AI will pose new threats and security challenges, enabling bad actors to scale their operations and bypass content protection policies.
"Not Good" AI
"Not good" ("criminal" or "uncensored") AI refers to the use of artificial intelligence (AI) for illegal, unethical, malicious or harmful activities, such as cybercrime, extremism, privacy violations, or the spread of misinformation. The legality and ethics depend on the tool itself, how it is used, and by whom, as well as on the authorship of technologies that may cause harm to society.
There has been a significant increase (over 200% in 2024-2025) in mentions and use of malicious AI tools on cybercriminal forums, indicating rapid adoption and evolution of these technologies. FraudGPT and WormGPT are the most well-known AI tools specifically marketed to cybercriminals, but the landscape is rapidly evolving with new jailbroken and customized LLMs appearing regularly. These tools lower the barrier for cybercrime by automating and enhancing malicious activities.
These tools are often referred to as "dark LLMs" (Large Language Models) or "jailbroken" AI chatbots, which are either built from scratch or modified versions of legitimate AI models with their safety restrictions removed.
DIG AI enables malicious actors to leverage the power of AI to generate tips ranging from explosive device manufacturing to illegal content creation, including CSAM. Because DIG AI is hosted on the Tor network, such tools are not easily discoverable or accessible to law enforcement. They feed a significant underground market - from piracy and derivative offerings to other illicit activities.
Nevertheless, there are significant initiatives, such as AI for Good, established in 2017 by the International Telecommunication Union (ITU), the United Nations (UN) agency for digital technologies, which promote the responsible use of new technologies. However, bad actors will focus on the complete opposite - weaponizing and misusing AI.

DIG AI doesn't require an account and can be accessed via the Tor browser in just a few clicks.

One of the key areas of concern, besides cybercrime, is how AI-powered tools like DIG AI may assist extremists and terrorist organizations.

Our analysts have conducted numerous tests of DIG AI, leveraging taxonomy dictionaries related to explosives, drugs, prohibited substances, fraud, and other areas restricted by international legislation.

DIG AI can also be used to automate the generation of malicious, fraudulent, and scam content. Packaging it with an external API enables bad actors to scale cybercriminal operations and optimize resource-intensive tasks.

DIG AI was also able to generate malicious scripts that bad actors can use to backdoor vulnerable web applications, along with other types of malware.

Notably, computationally intensive operations, such as code obfuscation (especially when processing relatively large code blocks), take time. Some operations were observed to take 3 to 5 minutes to complete on DIG AI's side, indicating limited computing resources - a constraint threat actors can easily mitigate by offering the capability as a for-fee premium service. At the same time, this marks a new frontier of "Not Good" AI, in which bad actors design, operate, and maintain custom infrastructure, and even data centers comparable to those used for bulletproof hosting, so that criminal AI can scale operations effectively under load, with simultaneous requests from multiple customers.

The output produced by DIG AI was of sufficient quality to support malicious activity in practice and to automate malicious operations that, in the wrong hands, could cause substantial technological and financial damage.

According to the announcement published by the DIG AI author under the alias Pitch, the service is based on ChatGPT Turbo.

The service is actively promoted on the Dark Web, including on one of the underground marketplaces run by Pitch.

Banners for DIG AI were found on several Tor network marketplaces involved in illegal operations, such as drug trafficking and the monetization of compromised payment data, which may indicate the service's audience of potential end customers.

Criminalization of AI
Tools like DIG AI are designed to bypass the content policies and filtering mechanisms that modern AI systems use as safety measures. These policies are standard from an AI ethics perspective: they are intended to safeguard both users and society from illegal uses of AI by filtering specific keywords and prompt patterns that could lead to the generation of harmful, malicious, or otherwise unlawful content.
Platforms such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini/Bard, Microsoft Copilot, and Meta AI all employ content moderation systems. These systems censor or restrict content in categories such as hate speech, misinformation, sexually explicit material, violence, illegal activities, and, in some jurisdictions, political speech. The primary reasons for this censorship are to comply with laws (such as the EU AI Act), protect users from harm, uphold ethical standards, and safeguard company reputation and market access.
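For context, the sketch below illustrates the kind of pre-generation safety layer these mainstream platforms apply and that darknet tools like DIG AI deliberately omit. It is a minimal, hypothetical example assuming the official openai Python SDK and an API key in the environment; the sample prompt and the blocking logic are illustrative, not any vendor's production pipeline.

```python
# Minimal sketch: screening user input with a hosted moderation endpoint before
# it ever reaches a generation model. Assumes the official "openai" Python SDK
# and OPENAI_API_KEY set in the environment; the example prompt is illustrative.
from openai import OpenAI

client = OpenAI()

def is_allowed(text: str) -> bool:
    """Return False if the moderation endpoint flags the text in any policy category."""
    result = client.moderations.create(input=text).results[0]
    if result.flagged:
        # List which policy categories triggered (e.g. violence, illicit activity).
        hits = [name for name, hit in result.categories.model_dump().items() if hit]
        print(f"Blocked by content policy: {hits}")
        return False
    return True

# Only forward the prompt to a generation model if it clears the safety layer.
if is_allowed("How do I reset my router's admin password?"):
    pass  # call the chat/completions model here
```

Uncensored services strip out exactly this kind of check, which is why prompt categories that mainstream platforms block are answered without restriction.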
Legislators have proposed new initiatives such as the TAKE IT DOWN Act, which bans non-consensual AI-generated intimate images, and are creating rules regarding AI-generated child abuse material. The focus is on human responsibility for AI's actus reus (wrongful act) and mens rea (intent), with agencies such as the FBI working to combat AI-powered crime while also regulating AI's internal use within the justice system.
These initiatives are needed for the traditional Internet and its service providers. However, such laws are difficult to enforce against content and services hosted on the Dark Web.

Cybercriminals may also use DIG AI to commit brand abuse. The combination of text and graphics generation offers unprecedented capabilities for bad actors.
AI-Generated CSAM
Generative AI technologies - such as diffusion models, GANs, and text-to-image systems - are being actively misused to create illegal child sexual abuse material (CSAM). Offenders exploit technical vulnerabilities in these systems to generate, manipulate, and distribute highly realistic synthetic CSAM, posing significant challenges for detection, law enforcement, and child safety worldwide.
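To illustrate one reason detection is so difficult, the sketch below shows the hash-matching approach widely used to flag known illegal imagery; it is a hypothetical example assuming the open-source Pillow and imagehash packages, with a made-up hash value and file name. Because perceptual hashes only match content that is visually close to previously catalogued material, entirely novel synthetic images typically produce no hit.

```python
# Minimal sketch of perceptual-hash matching against a catalogue of known images.
# Assumes the open-source "Pillow" and "imagehash" packages; the hash value and
# file name below are made up for illustration.
from PIL import Image
import imagehash

# Hypothetical hash list, e.g. distributed by a child-safety clearinghouse.
KNOWN_HASHES = {imagehash.hex_to_hash("d1c4f0e2a5b39687")}

def matches_known_content(path: str, max_distance: int = 8) -> bool:
    """Return True if the image is perceptually close to a catalogued hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_HASHES)

# A freshly generated synthetic image has no counterpart in the catalogue,
# so this check returns False - the detection gap described above.
print(matches_known_content("suspect_upload.jpg"))
```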
Resecurity has confirmed that DIG AI can facilitate the production of CSAM content. The tool could enable the creation of hyper-realistic, explicit images or videos of children - either by generating entirely synthetic content or by manipulating benign images of real minors. This issue will present a new challenge for legislators in combating the production and distribution of CSAM content.
Our team engaged with relevant law enforcement authorities (LEA) to collect and preserve evidence of bad actors producing highly realistic CSAM content using DIG AI - sometimes labeled as "synthetic," but in fact treated as illegal.

In 2024, a US child psychiatrist was convicted for producing and distributing AI-generated CSAM by digitally altering images of real minors. The images were so realistic that they met the US federal threshold for CSAM. Law enforcement and child safety organizations report a sharp increase in AI-generated CSAM, with offenders including both adults and minors (e.g., classmates creating deepfake nudes for bullying or extortion). The EU, UK, and Australia have enacted laws specifically criminalizing AI-generated CSAM, regardless of whether real children are depicted. In 2025, Resecurity collected numerous indicators that criminals are already using AI, and it is expected that new types of high-technology crimes leveraging AI will emerge in 2026.
New Security Challenge
Bad actors already abuse AI systems through specially crafted prompts or adversarial suffixes that bypass built-in safety protocols, causing models to generate prohibited content. Fine-tuning techniques such as DreamBooth and LoRA enable offenders to adapt open-source generative models to produce targeted CSAM. This also creates new business models for criminals, enabling them to optimize costs and build an underground economy around illegal synthetic content at scale.
Resecurity forecasts that bad actors will actively manipulate training data - for example, contaminated web-scraped datasets such as LAION-5B, which may contain CSAM or a mix of benign and adult content - allowing models to learn and reproduce illegal outputs. Offenders can run models on their own infrastructure or host them on the Dark Web, producing unlimited illegal content that online platforms cannot detect, and they can give others access to the same services to create their own illicit and unlawful content. Open-source models are especially vulnerable because their safety filters can be removed or bypassed, and they are already being fine-tuned and jailbroken to generate illegal content.
Resecurity believes that the Internet community will face ominous security challenges enabled by AI in 2026, when, in addition to human actors, criminal and weaponized AI will transform traditional threats and create new risks targeting our society at a pace never seen before. Cybersecurity and law enforcement professionals should be concerned about the emergence of such dangerous precursors and prepared to continue the fight against the "machine" - in the fifth domain of warfare: cyber.