OpenAI Revises Pentagon AI Deal After Backlash and ChatGPT Uninstall Surge


OpenAI updates Pentagon contract after backlash triggers 295% spike in ChatGPT uninstalls and rising support for rival Claude chatbot.

In a rare moment of public reflection, Sam Altman, CEO of OpenAI, has acknowledged that the company moved too quickly in securing a deal with the United States Department of Defense. The agreement was announced just hours after the US government terminated its contract with rival AI firm Anthropic, triggering criticism that OpenAI’s actions appeared opportunistic.

The controversy unfolded after US President Donald Trump ended Anthropic’s engagement with federal agencies. Reports suggested that Anthropic declined to remove certain AI safeguards, which led to its contract being scrapped. OpenAI swiftly stepped in, announcing a partnership with the Pentagon — a move that sparked backlash across social media platforms.

Responding to the criticism on X, Altman admitted the optics were far from ideal. “Good learning experience for me as we face higher-stakes decisions in the future,” he wrote, conceding that the situation looked “opportunistic and sloppy.” He clarified that OpenAI’s intention was to prevent further tension between the US defense establishment and the AI sector, rather than capitalise on a competitor’s exit.

However, public reaction was swift and measurable. According to Sensor Tower data, ChatGPT uninstalls in the United States surged by 295 per cent day over day on February 28. At the same time, downloads of Anthropic’s Claude chatbot climbed by 51 per cent, pushing Claude to the top position on the US Apple App Store rankings. Even pop star Katy Perry weighed in indirectly, sharing a screenshot of Claude marked with a heart emoji — widely interpreted as support for Anthropic’s stance.

Amid the backlash, OpenAI has since revised its agreement with the Pentagon. Altman shared details of an internal memo outlining clear restrictions on how its AI systems may be used. The updated contract explicitly prohibits mass domestic surveillance of US citizens, referencing constitutional and federal protections such as the Fourth Amendment, the National Security Act of 1947, and the Foreign Intelligence Surveillance Act (FISA) of 1978.

“The Department understands this limitation to prohibit deliberate tracking, surveillance, or monitoring of U.S. persons or nationals,” Altman stated, emphasising the company’s commitment to civil liberties. He further clarified that no intelligence agency — including the National Security Agency (NSA) — will be permitted to use OpenAI’s systems under the current agreement. “Any services to those agencies would require a follow-on modification to our contract,” he added.

While autonomous weapons were not specifically addressed, Altman acknowledged broader safety concerns. “There are many things the technology just isn’t ready for, and many areas we don’t yet understand the tradeoffs required for safety,” he explained.

In a strong closing remark, Altman declared that he would refuse any unconstitutional order from the Department of Defense regarding OpenAI’s AI systems — even if it meant facing imprisonment.

The episode underscores the delicate balance AI companies must strike between commercial growth, national security partnerships, and public trust in an increasingly scrutinised industry.
