Pentagon May Have Accessed OpenAI Models via Microsoft Azure Before Military Policy Change

A report suggests the Pentagon may have experimented with OpenAI models through Microsoft Azure in 2023, before military-use restrictions were lifted.
A recent report suggests that the Pentagon may have begun experimenting with artificial intelligence systems developed by OpenAI as early as 2023 through Microsoft’s cloud platform, even before the company formally lifted its ban on military applications.
The development comes just days after OpenAI announced a new agreement to deploy its AI technologies for the US Department of Defense. The deal, confirmed by OpenAI CEO Sam Altman, sparked criticism online, with some users expressing concern over the use of advanced AI in military contexts and even abandoning ChatGPT in protest.
However, reporting by Wired indicates that US defense officials may already have been testing OpenAI-powered tools via Microsoft’s Azure infrastructure as early as 2023, well before the new agreement.
Pentagon’s possible early access to OpenAI technology
According to the report, in 2023 the US Department of Defense accessed OpenAI’s models through the Azure OpenAI Service, a cloud-based offering provided by Microsoft. This reportedly happened while OpenAI’s own policies still prohibited the use of its technology for military purposes.
At that time, Microsoft held broad commercial licensing rights for OpenAI technologies and had longstanding contractual relationships with the US government, including defense agencies. This raised questions internally about whether the use of OpenAI models through Microsoft’s infrastructure technically fell within OpenAI’s restrictions.
The situation reportedly created confusion inside OpenAI. Employees even noticed Pentagon officials visiting the company’s San Francisco offices during that period, adding to speculation about the scope of collaboration.
Responding to the report, Microsoft spokesperson Frank Shaw told Wired, "Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is subject to Microsoft terms of service."
Microsoft declined to confirm whether the Pentagon specifically used the service, but noted that the platform was not approved for “top secret” government workloads until 2025.
Controversy surrounding OpenAI’s defense partnership
The revelations arrive amid heightened scrutiny of OpenAI’s new partnership with the Pentagon. The company is expected to deploy its AI systems onto classified defense networks after a six-month transition period in which government systems will move away from technology developed by Anthropic.
Altman has attempted to reassure critics, saying the company will not allow its models to cross “red lines.” However, he also acknowledged internal limitations, telling employees they have “no say” in how the Pentagon ultimately uses the technology.
Anthropic reportedly stepped back from a potential defense deal due to concerns that its models could be applied to mass domestic surveillance or autonomous weapons programs—two uses Altman insists will not occur under OpenAI’s arrangement.
Policy shift opened door to national security use
OpenAI formally changed its stance in January 2024, removing its blanket prohibition on using AI systems for national security purposes. The policy shift surprised some employees, many of whom reportedly first learned about it through media coverage.
Executives later addressed the change during an internal all-hands meeting.
Later in 2024, OpenAI also entered into a collaboration with defense technology company Anduril to deploy AI tools for “national security missions.” Unlike the Pentagon deployment now under discussion, that project was not intended for use on classified government networks.
The emerging details about earlier Pentagon access highlight the growing and often complicated relationship between major AI developers, cloud providers, and national security agencies as governments increasingly turn to advanced AI capabilities.