OpenAI has shut down several China-based ChatGPT accounts after uncovering evidence of “authoritarian abuses” involving its technology.
The company said on Wednesday that the banned users appeared to be linked to Chinese government entities and had violated policies related to national security.
OpenAI’s latest 37-page threat report revealed that some users instructed ChatGPT to help design large-scale systems for monitoring social media activity. While the company noted that these efforts appeared to be driven by individuals rather than institutions, the disclosure highlights the potential for AI to be misused for political or surveillance purposes.
The report shows that even widely available AI tools can be repurposed to track, analyze, and potentially influence online activity, raising questions about privacy, ethics, and regulation. OpenAI’s findings offer one of the clearest public glimpses into the ways AI could support monitoring at scale.
Additionally, OpenAI found that a cluster of Chinese-language accounts had used ChatGPT to support cyber operations targeting Taiwan’s semiconductor industry, American universities, and political groups critical of the Chinese Communist Party. In several cases, users employed the chatbot to craft professional-sounding phishing emails in English designed to breach IT networks.
ChatGPT is not officially available in China because of the country’s strict internet controls, known as the Great Firewall. However, users can still access Chinese-language versions through virtual private networks, or VPNs. OpenAI said its recent disruptions highlighted how AI tools can still reach restricted regions despite government censorship.
Furthermore, the company reported that Russian- and Korean-speaking users had conducted similar cyber operations. These incidents showed no direct connection to government entities, although some of the actors may have ties to state-backed criminal groups.
Artificial intelligence poses serious risks
OpenAI said it has disrupted more than 40 malicious networks since it began publishing public threat reports in February 2024. The company added that it found no evidence of its newest AI models providing attackers with novel offensive capabilities.
The report underscores growing global concern over the misuse of artificial intelligence in cyber warfare and surveillance. OpenAI said it remains committed to monitoring its technology and collaborating with partners to prevent hostile or unethical applications of its models.
Artificial intelligence in the wrong hands poses serious risks to security, democracy, and individual freedom. Experts warn that malicious actors can exploit AI for surveillance, disinformation, and cyberwarfare, often at a scale far beyond human capability.
Authoritarian governments can use AI to monitor citizens, predict dissent, and suppress opposition. These tools, powered by facial recognition and data analytics, could erode privacy and undermine free expression.
The Carnegie Endowment for International Peace notes that AI also enables mass disinformation campaigns. Automated bots and generative models can produce persuasive, human-like content that floods social media and distorts public opinion. This technology has the potential to influence elections and deepen political polarization.
In cybersecurity, OpenAI's report states that AI has lowered the barriers to phishing, social engineering, and malware creation. Attackers can generate realistic phishing emails or automate hacking operations, making cybercrime more efficient and harder to detect.
Finally, the European Parliament has warned that misuse of AI could worsen inequality and bias, intensifying surveillance of marginalized communities. Experts at OpenAI have urged stronger global safeguards, transparency, and human-in-the-loop requirements to reduce the risk of AI abuse and ensure responsible deployment.
joseph@mugglehead.com
