When artificial intelligence (AI) company Anthropic released its latest
Threat Intelligence Report, it carried a chilling message: cybercriminals are no longer just asking AI tools for help—they are now using them as full-fledged partners in crime.
The report, which looked at the misuse of Anthropic's own Claude models, highlights cases such as North Korean operatives securing fake jobs in Fortune 500 companies, hackers running large-scale extortion schemes with AI and even the sale of AI-generated ransomware kits to other criminals.
What makes this alarming is the speed of change. Crimes that once required years of expertise and training can now be carried out by people with little background or technical knowledge, thanks to AI. In short, artificial intelligence has lowered the entry barrier for advanced cybercrime, making the threat more widespread and dangerous.
New Era of AI-enabled Cybercrime
Claude is a family of large language models (LLMs) developed by Anthropic, a public benefit company focused on building advanced AI responsibly for the long-term good of society.
Anthropic’s investigation shows that AI is no longer just a passive tool used by criminals to write malicious code or draft phishing emails. It has moved into a new phase, where models can carry out multi-step tasks, adjust to security defences in real time and even automate complex decisions.
The report highlights three key examples that reveal how deeply cybercriminals have begun to embed AI into their operations.
1. AI-driven Extortion Operations
One of the most alarming cases uncovered by Anthropic involved a cybercriminal who used Claude Code to run a large-scale data theft and extortion scheme. At least 17 organisations were targeted, including healthcare providers, emergency services, government offices and even religious institutions.
According to Anthropic, the attacker set up a CLAUDE.md file to steer the model towards their preferred methods. While this file acted as a reference, Claude Code still made both tactical and strategic decisions on its own, such as how to break into networks, which data to steal and how to draft threatening messages.
Unlike a traditional ransomware attack, which locks files until a ransom is paid, this scheme involved stealing sensitive data and threatening to publish it unless victims handed over as much as US$500,000.
AI played a role at every stage of the crime: scanning systems, stealing login credentials, infiltrating networks, selecting valuable files and even writing customised ransom notes designed to create maximum psychological pressure.
In short, Claude acted like a criminal lieutenant, handling both the technical work and the strategy behind the extortion.
2. North Korea’s AI-powered Employment Fraud
Another case revealed how North Korean IT workers used Claude to fraudulently secure remote jobs at major US tech companies. With AI support, they created convincing professional profiles, passed technical interviews and even completed coding tasks once hired.
“Most concerning is the actors’ apparent dependency on AI — they appear unable to perform basic technical tasks or professional communication without AI assistance, using this capability to infiltrate high-paying engineering roles that are intended to fund North Korea’s weapons programmes,” the report notes.
This type of scam is not new, but in the past it demanded years of training and strong English-language skills. Now, with AI, even operatives with limited expertise can clear these barriers. The salaries they earn are channelled back to the regime, directly undermining international sanctions.
For companies, the risk is serious. AI makes insider threats far harder to spot, because fraudulent employees can look credible not only on paper but also in their day-to-day work.
3. Ransomware-as-a-Service (RaaS): No Coding Required!
One of the clearest examples of how AI is lowering the barrier to cybercrime involves a case where a criminal used Claude to build several ransomware variants. These came with features such as encryption, anti-recovery tools and stealth techniques to avoid detection.
The attacker then sold these ransomware kits online for between US$400 and US$1,200 each, effectively turning advanced malware into an off-the-shelf product. In reality, the person had little technical skill—without AI, he would not have been able to write even basic code.
“Most concerning is the actor’s apparent dependency on AI — he appears unable to implement complex technical components or troubleshoot issues without AI assistance, yet was selling capable malware,” Anthropic warns.
This marks a turning point: AI-generated malware marketplaces are no longer theory, but an active reality.
Why This Matters
Taken together, Anthropic’s findings reveal a worrying trend that is not limited to Claude but applies to other advanced AI models as well:
• AI has been weaponised. These models are no longer just tools; they are active participants in attacks.
• The entry barrier has collapsed. Crimes that once took years of expertise can now be attempted by anyone with a laptop and an internet connection.
• AI is present at every stage. From picking targets to running scams to moving stolen money, AI can be built into every step of the criminal process.
This raises a serious question: how can governments, businesses, and individuals defend themselves when AI-powered attackers can grow and adapt faster than the systems built to stop them?
Anthropic’s Response
Anthropic says it is working to stay ahead of misuse by banning abusive accounts, strengthening its detection systems, and sharing intelligence with law enforcement agencies. The company maintains that it is committed to preventing abuse, but admits that criminals are constantly finding new ways to bypass safeguards.
This highlights the central challenge of AI safety: security features can reduce risk, but they cannot remove it entirely. As long as powerful AI models exist, some will inevitably be misused.
What You Can Do
Anthropic’s report may seem focused on hackers, big companies, and foreign governments, but the risks ultimately affect everyday internet users. AI-powered scams are becoming more precise, more convincing, and harder to spot.
Here are some practical steps you can take to protect yourself:
1. Strengthen your digital hygiene
• Use strong, unique passwords for every account. A trusted password manager can help; a minimal sketch of how secure password generation works follows this list.
• Turn on multi-factor authentication (MFA) wherever possible. Even if a password is stolen, MFA adds a second layer of protection.
• Keep your software and devices updated. Many attacks exploit old, unpatched systems.
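As an illustration only, here is a minimal Python sketch that generates strong, random passwords using the standard library's secrets module. A trusted password manager does the same job and, crucially, also stores the results safely; the account names below are hypothetical.

```python
import secrets
import string

# Character pool: letters, digits and common punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    """Return a random password drawn from a cryptographically secure source."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

if __name__ == "__main__":
    # Generate a distinct password for each account you hold (hypothetical names).
    for account in ("email", "banking", "social-media"):
        print(f"{account}: {generate_password()}")
```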
2. Be wary of ‘too good to be true’ offers
• Scams involving fake remote jobs show how convincing false identities can look.
• Job seekers should be careful of offers that skip formal hiring steps or promise unusually high pay.
• Employers should verify identities thoroughly during recruitment.
3. Treat unexpected messages with caution
• Phishing emails, suspicious SMS links, WhatsApp forwards, and Telegram messages may now be written by AI, making them look far more polished. Before clicking, check where a link actually points; a short sketch follows.
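As a quick illustration (the link below is a made-up, hypothetical example), this Python sketch extracts the host a link really points to, which is where lookalike domains give themselves away.

```python
from urllib.parse import urlparse

def real_hostname(link: str) -> str:
    """Return the host a link actually points to, ignoring its display text."""
    return urlparse(link).hostname or ""

# Hypothetical lookalike: the real destination is attacker.net, not a bank.
suspicious = "https://secure-login.example-bank.com.attacker.net/verify"
print(real_hostname(suspicious))  # secure-login.example-bank.com.attacker.net
```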
4. Watch out for red flags such as:
• Urgent requests for money or personal details.
• Attachments or links from unknown senders.
• Inconsistencies in tone, timing, or formatting.
5. Back up your data
• Regular backups are the best defence against ransomware.
• Store copies of important files offline or in secure cloud storage.
• Test your backups to make sure they work when needed; a simple integrity check is sketched below.
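A minimal sketch of such a test, assuming a simple mirror-style backup whose folder layout matches the source (the paths at the bottom are hypothetical and should point at your own folders):

```python
import hashlib
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Hash a file in chunks so large files do not exhaust memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_backup(source_dir: str, backup_dir: str) -> list[str]:
    """List relative paths of files that are missing from, or differ in, the backup."""
    source = Path(source_dir).expanduser()
    backup = Path(backup_dir).expanduser()
    problems = []
    for file in source.rglob("*"):
        if not file.is_file():
            continue
        copy = backup / file.relative_to(source)
        if not copy.is_file() or sha256_of(file) != sha256_of(copy):
            problems.append(str(file.relative_to(source)))
    return problems

if __name__ == "__main__":
    # Hypothetical paths: point these at your own folder and backup location.
    for item in verify_backup("~/Documents", "/mnt/backup/Documents"):
        print("check:", item)
```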
6. Monitor your finances closely
• AI-driven fraud often involves stolen credit cards or misuse of identity.
• Check your bank statements regularly.
• Make sure you get SMS or email alerts for all transactions linked to your accounts.
7. Demand accountability from platforms
• Push AI developers, telecom providers, and financial institutions to strengthen fraud prevention.
• The same technology that enables scams should also be used to stop them—through fraud detection, stronger verification, and transparency in AI use.
8. Stay informed
• Remember, cybercrime is an ever-evolving field, where tactics change quickly.
• Follow updates from trusted sources such as CERT-In (India), the FBI (US) or recognised cybersecurity organisations so you are prepared for new and evolving threats.
The rise of AI-powered cybercrime highlights a stark truth: the same tools built to boost productivity, progress and security can also be turned into weapons. Criminals no longer need to be expert coders; they only need to know the right prompts to give an AI system.
For governments, this raises urgent policy questions. How should AI misuse be tracked? Should access to powerful models require stricter identity checks? And can international cooperation keep up with the speed of AI-driven fraud?
For companies, the message is equally clear: security teams must prepare for attacks that scale like software. Traditional defences will not be enough if criminals can adapt and evolve in real time with AI at their side.
For ordinary people like you and me, the threat is personal. The line between big corporate hacks and everyday fraud is blurring fast. The same AI that can design ransomware for sale can also write the perfect WhatsApp scam targeting a family member.
Anthropic’s report is more than a technical study—it is a warning about the next phase of cybercrime. From AI-driven extortion and fake job scams to 'do-it-yourself' (DIY) ransomware kits, artificial intelligence is now part of the criminal ecosystem. And as Anthropic notes, these risks extend across all advanced AI models, not just Claude.
The fight against cybercrime is entering a new era—one where defences must be as fast and intelligent as the attacks themselves.
For individuals, the best protection lies in vigilance, scepticism and good digital habits. For society, the challenge is to ensure that safeguards, regulations and public awareness keep pace with AI’s rapid growth.
The 'corridor of prosperity' promised by AI could just as easily become a highway for cybercrime—unless we act decisively now.
Stay Alert, Stay Safe!