OpenAI has signed an agreement with the U.S. Department of War after rival Anthropic walked away from similar talks over safety concerns, triggering a wave of backlash from ChatGPT users who say the company has crossed an ethical line.
Anthropic had been negotiating to provide its Claude AI models to the U.S. military. However, the company refused to accept terms that might permit mass surveillance or fully autonomous weapons. It wanted explicit safeguards written into the contract. The Department of War declined to adopt those restrictions.
Consequently, the talks collapsed.
Meanwhile, OpenAI moved forward on Monday and announced its own agreement with the military. The company said it will supply its AI models for use on classified systems. Additionally, OpenAI said the contract includes firm guardrails around surveillance and weapons.
OpenAI stated that the contract prohibits mass domestic surveillance. It also bars the use of its models to directly control autonomous weapons. Furthermore, the company said it will not allow high-stakes automated social control systems.
Executives argued that they embedded those limits in both legal language and technical controls. They said the agreement relies on cloud-based deployment and internal oversight. In addition, OpenAI said cleared staff will monitor compliance.
However, many users remain unconvinced.
A growing number of ChatGPT subscribers have announced cancellations across social media platforms and Reddit. Some users posted detailed guides explaining how to export data and close accounts. Others accused OpenAI of abandoning its ethical stance for government contracts.
US government will remove Claude
Meanwhile, Anthropic’s Claude climbed to the top position on the Apple App Store charts. Apple Inc. (NASDAQ: AAPL) operates the App Store. The surge in downloads fueled speculation that users are actively switching platforms.
Tech investor Aidan Gold criticized OpenAI on X. He pointed out that OpenAI previously supported Anthropic’s push for strict safety language. However, OpenAI later signed its own deal with the Department of War.
That sequence intensified skepticism.
The U.S. government has since indicated it will remove Claude from federal departments. Officials reportedly labeled Anthropic a supply chain risk after the breakdown in negotiations. The move further widened the divide between the two companies.
OpenAI chief executive Sam Altman acknowledged that the agreement came together quickly. He conceded that the timing created difficult optics. However, he argued that engagement with government can shape responsible AI deployment.
Altman maintained that OpenAI did not dilute its safety standards. Instead, he said the company structured the deal to enforce internal red lines. Furthermore, OpenAI insisted that the phrase “all lawful purposes” does not override those limits.
Critics dispute that interpretation.
Some researchers argue that “lawful” can carry broad meaning in national security contexts. They worry that evolving definitions could stretch the agreement over time. Additionally, they question how external observers will verify compliance.
Anthropic took a different approach during its negotiations. The company sought narrow, written constraints on surveillance and weapons use. However, officials reportedly resisted those additions.
As a result, Anthropic chose to walk away.
The dispute reflects a deeper tension inside the AI industry. Companies want access to lucrative government contracts. However, they also face pressure from employees and users who fear military misuse.
Government will use AI regardless
The ethics debate around artificial intelligence already runs deep. Many critics argue that AI systems rely on vast amounts of copyrighted material. Others warn about job displacement and automation. Additionally, environmental advocates cite the heavy energy demands of large data centers.
Against that backdrop, military applications add another layer of controversy.
Supporters of OpenAI’s decision argue that the government will use AI regardless. They contend that cooperation allows companies to impose safeguards from inside the system. Furthermore, they claim that refusing engagement could leave fewer protections in place.
Opponents counter that military contracts inevitably expand in scope. They argue that early compromises create long-term precedents. Consequently, they believe companies must draw firm lines before deployment.
OpenAI said its guardrails exceed those proposed in Anthropic’s rejected deal. However, the company has not released the full contract publicly. That absence fuels ongoing suspicion.
Meanwhile, online discussion continues to intensify.
Reddit threads debate whether ethical AI can coexist with military partnerships. Some users frame the issue as a betrayal of founding principles. Others argue that national security work does not automatically equal wrongdoing.
The controversy also raises commercial questions. Subscription cancellations could affect revenue if the backlash persists. Additionally, competitors may capitalize on shifting public sentiment.
Anthropic has not indicated whether it will reenter negotiations under revised terms. However, its stance has earned praise among users who favor strict ethical boundaries.
For now, OpenAI continues to defend its decision. The company says it believes structured engagement offers the best path forward. Meanwhile, critics continue to scrutinize every clause and public statement as the debate unfolds in real time.