In a dramatic reshaping of the AI landscape, the recent clash between artificial intelligence companies and the U.S. Department of War has culminated in a consumer revolt. Over the weekend, OpenAI found itself on the defensive after stepping in to fill a contract void left by rival Anthropic, only to be met with a staggering 295% surge in ChatGPT app uninstalls and a full-blown “Cancel ChatGPT” movement.
The controversy began when the Pentagon, under the leadership of Defense Secretary Pete Hegseth, issued an ultimatum to four major AI labs: OpenAI, Google, xAI, and Anthropic. The demand was for unrestricted access to their AI models for “all lawful purposes,” a clause specifically designed to permit sensitive military operations, including weapons development and intelligence gathering.
While the other three tech giants acquiesced, Anthropic, the maker of the Claude AI assistant, refused to cross two specific red lines: it would not allow its technology to be used for mass domestic surveillance or for the development of fully autonomous weapons systems.
“Non-Negotiable for Democratic Societies”
The standoff reached a boiling point following a U.S. military operation in Venezuela. Reports emerged that Claude had been used, via defense-tech firm Palantir, during the extraction of Venezuelan President Nicolás Maduro. Although Anthropic reportedly questioned that use, it held firm to its ethical guidelines.
Pentagon officials were furious. Hegseth accused the company of “arrogance and betrayal,” and President Donald Trump ordered federal agencies to cease using Claude, labeling Anthropic a “Radical Left AI company.” With the threat of being designated a “supply chain risk”—a label usually reserved for foreign adversaries like Huawei—Anthropic risked losing a lucrative $200 million defense contract.
Dario Amodei, CEO of Anthropic, defended the decision publicly, stating, “In some limited cases, we believe AI can undermine, rather than protect, democratic values,” and called the company’s red lines “non-negotiable for democratic societies.”
OpenAI Steps In—And Steps Into a Firestorm
Just hours after the split with Anthropic was confirmed, OpenAI CEO Sam Altman announced that his company had signed a deal with the Department of War to deploy its models into classified networks. To many users, the timing looked less like a business decision and more like a scab being hired during a moral strike.
The backlash was immediate and severe. Data from Sensor Tower indicated that uninstalls of the ChatGPT mobile app skyrocketed by 295% on Saturday, February 28. Simultaneously, downloads of Anthropic’s Claude surged by 51%, propelling it to the top of Apple’s App Store charts and dethroning ChatGPT as the most downloaded free app in the U.S.
A movement calling itself “QuitGPT” claimed that over 1.5 million users had taken action, ranging from canceling subscriptions to signing pledges on quitgpt.org. The campaign accused OpenAI of putting profit ahead of public safety, specifically citing fears that the deal paved the way for surveillance programs under the legal ambiguities of laws like the Patriot Act.
Altman’s “Sloppy” Retreat
Facing an exodus of users and a public relations nightmare, Sam Altman took to X (formerly Twitter) to conduct damage control. In a remarkable admission, he called the deal’s rollout “opportunistic and sloppy.”
“We shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are super complex, and demand clear communication.”
Altman attempted to reassure the public by sharing extracts of the amended contract, claiming that new language prohibited the “intentional” use of AI for domestic surveillance of U.S. persons and maintained “human responsibility for the use of force.” Critics, however, were quick to point out the loopholes. As tech analysts noted, the reliance on words like “intentionally,” together with a broad reading of “legality,” left the door open for autonomous systems in which human intent plays no role in the execution of a kill chain.
Internal Rebellion and Industry Fallout
The controversy is not just external. Internal divisions have become public, with OpenAI research scientist Aiden McLaughlin posting on X, “I personally don’t think this deal was worth it.” He noted that internal discussions had been “overwhelming.”
Adding to the pressure, an open letter signed by around 100 OpenAI employees and nearly 800 Google staff urged leadership to stand together against the Department of War’s demands.
Meanwhile, the legal and ethical debates rage on. Legal experts suggest that the Pentagon’s threat to invoke the Cold War-era Defense Production Act to force Anthropic’s hand would be unprecedented. As the industry watches, one thing is clear: the battle for the soul of AI—balancing national security against ethical safeguards—has moved from the boardroom to the pockets of everyday users, who are voting with their uninstall buttons.