Friday, Aug 29, 2025
Mugglehead Investment Magazine
Alternative investment news based in Vancouver, B.C.
Hackers weaponize Anthropic's AI model 'Claude' for cybercrime spree
Image credit: Cybertex Institute of Technology

AI and Autonomy


At least 17 organizations were targeted by the malicious attack

Renowned American AI company Anthropic says its technology has been weaponized in an unprecedented cyberattack.

In a report released on Aug. 28, the company revealed that its Claude large language model was hijacked to target at least 17 organizations and steal their sensitive information.

These entities included healthcare operators, government institutions, religious organizations and emergency services. Hackers took aim at financial data and assets, personal records, and government credentials.

The unparalleled incident represents a major escalation in the weaponization of AI tools for pernicious purposes. It is the first time a prominent LLM has been used to carry out an attack of this scale.

“Rather than encrypt the stolen information with traditional ransomware, the actor threatened to expose the data publicly in order to attempt to extort victims into paying ransoms that sometimes exceeded US$500,000,” Anthropic specified.

Claude also helped the hackers build custom tunnelling tools and disguise malicious files as legitimate Microsoft software.

AI has lowered the barriers to sophisticated cybercrime, the company highlighted in its Threat Intelligence Report. Malicious hackers with minimal technical skills can now use AI to carry out complex schemes that previously would have required years of training.

“This probably explains why I’ve been seeing a lot of ‘checking for malicious code’ checks while developing in Claude Code lately,” commented Michael J. Silva, a New York cybersecurity expert. “Someone hacked the entire cybersecurity operation using AI!”

In response, Anthropic has implemented an array of new safeguards to try to prevent similar incidents. The AI developer notified authorities and worked with cybersecurity firms to learn from the occurrence.

Nonetheless, the report doesn't confirm the full extent of the impact on victims. It remains unclear who suffered permanent data or financial losses from the "GTG-2002" operation.

Read more: Did Anthropic just accidentally stumble on artificial general intelligence?

North Korean agents also took advantage of Claude

The artificial intelligence system was also used by agents of Kim Jong Un's regime to perpetrate employment fraud, the report explained.

“Our investigation revealed that North Korean operatives have been systematically leveraging Claude to secure and maintain fraudulent remote employment positions at technology companies,” Anthropic revealed in the Threat Intelligence write up.

They harnessed Claude to create fake employment profiles and apply for remote positions at Fortune 500 technology companies. Alarmingly, it enabled them to progress through technical interviews and deliver satisfactory work to their employers.

“What we discovered was not merely another iteration of known IT worker schemes, but a transformation enabled by artificial intelligence that removes traditional operational constraints,” Claude’s creator added.

As North Korean workers are cut off from the outside world by sanctions and strict state controls, the LLM enabled them to overcome obstacles to securing work abroad.

“Their new employer is then in breach of international sanctions by unwittingly paying a North Korean,” said BBC podcaster Geoff White.

North Korea is notorious for its hacking and crypto theft endeavours. A group of cybercriminals from the country stole over US$1.4 billion worth of cryptocurrency earlier this year in a major heist.

They tend to operate under the direction of the regime and its Reconnaissance General Bureau military intelligence agency. Key North Korean hacking organizations include the Lazarus Group, Kimsuky and Andariel.

Former OpenAI researchers founded Anthropic in 2021. Its flagship model, Claude, competes with the likes of ChatGPT, Grok and Gemini. The company has a valuation exceeding US$61 billion.

Read more: Anthropic and Silicon Valley venture capital firm join forces on AI startup fund

Follow Rowan Dunne on X

Follow Rowan Dunne on LinkedIn

rowan@mugglehead.com

