The Digester

AI moves into everyday criminal workflows on underground forums

Feb 24th 2026

A study of underground forum conversations from January 1 to July 31, 2025 finds criminals experimenting with mainstream and bespoke AI to draft phishing lures, automate calls, and market fraud tools, while debating the risks and operational limits of these models.

  • The study reviewed 163 threads from 21 forums with 2,264 messages by 1,661 contributors between January 1 and July 31, 2025.
  • Activity clustered on well-known platforms such as XSS, BreachForums, Dread, and Exploit.in.
  • Four themes dominated: repurposing mainstream AI, marketing criminal AI products, adapting models for specific operations, and debating operational risk.
  • Mainstream chatbots were widely referenced: ChatGPT appeared in 52.5 percent of threads that mentioned legal AI products, with DeepSeek, Claude, and Grok also common.
  • Criminal-branded tools were advertised and reviewed, with WormGPT, FraudGPT, and DarkGPT leading in mentions; many were wrappers reselling access to mainstream models.
  • Practical use cases centered on phishing, social engineering, spam variation, and call-center automation, while malware development remained more technically demanding.
  • Users discussed jailbreaking, hosting local models, and the risks of account abuse and logging; researchers recommend monitoring sales listings and fraud signals to detect AI adoption at scale.