OpenAI Signals Shared AI Safety Boundaries
OpenAI CEO Sam Altman told employees Thursday evening that he hopes to help ease tensions between its rival Anthropic and the U.S. Department of Defense.
In an internal memo reviewed by CNBC, Altman reiterated that OpenAI opposes the use of artificial intelligence for mass surveillance or fully autonomous lethal weapons. He emphasized that humans must remain involved in high-stakes automated decisions, describing these limits as the company’s central principles.
The memo followed mounting friction between Anthropic and the Pentagon over the permissible use of advanced AI systems in military settings.
Anthropic Deadline and Pentagon Dispute
Anthropic faces a deadline of 5:01 p.m. ET Friday to decide whether it will allow the Defense Department unrestricted lawful use of its AI models.
The company has sought guarantees that its technology will not be deployed for fully autonomous weapons or broad domestic surveillance. The Defense Department has not agreed to those conditions.
Before Altman’s memo circulated, some OpenAI employees publicly expressed support for Anthropic’s position. Roughly 70 current staff members signed an open letter calling for solidarity amid pressure from the Defense Department.
Existing Contracts and Potential Expansion
OpenAI itself secured a $200 million contract with the Pentagon last year, enabling deployment of its models in unclassified use cases. Anthropic was the first AI lab to integrate its systems into classified mission workflows.
Altman indicated that OpenAI may explore expanded classified deployment options, provided any agreement aligns with its ethical framework. He suggested the company could implement additional safeguards and dedicate personnel to oversee compliance.
He wrote that OpenAI would seek contract terms permitting lawful uses but excluding domestic surveillance and autonomous offensive weapons.
Uncertain Outcome Amid Industry Scrutiny
Altman said OpenAI has been discussing the situation internally and will continue deliberating with its safety teams. He acknowledged that any decision could carry short-term reputational risks.
The episode highlights broader tensions within the artificial intelligence sector over military partnerships, national security priorities and the ethical deployment of advanced AI models.