Pentagon AI Flags Illegal Airstrike
A Pentagon AI chatbot has drawn attention after labeling a hypothetical follow-up airstrike on survivors at sea as illegal, as the Defense Department rolls out its new GenAI platform to military personnel.
An artificial intelligence chatbot unveiled by Defense Secretary Pete Hegseth on Tuesday has drawn attention after appearing to describe hypothetical follow-up airstrikes against suspected drug smugglers at sea as “unambiguously illegal,” as the Pentagon expands the use of AI tools across the US military.
The platform, known as GenAI.mil and powered by Google Gemini, has been made available to military personnel as part of the Defense Department’s broader push to integrate artificial intelligence into daily operations. In a video posted to X, Hegseth said the platform puts “the world’s most powerful frontier AI models into the hands of every American warrior.”
“At the click of a button, AI models on GenAI can be utilized to conduct deep research, format documents and even analyze video or imagery at unprecedented speed,” Hegseth said. “We will continue to aggressively field the world’s best technology to make our fighting force more lethal than ever before, and all of it is American made.”
The rollout of the platform has generated widespread discussion among both the public and service members. A military source with access to the tool, who spoke to Straight Arrow News on condition of anonymity, said personnel have already begun probing the chatbot’s limitations and testing how it interprets operational scenarios and policy guidance.
The source pointed to a Reddit post on the r/AirForce forum that appeared to show the chatbot responding to a “hypothetical” scenario mirroring a controversial US airstrike at sea that killed two alleged drug smugglers. In the scenario, a missile destroyed a vessel suspected of carrying drugs, leaving two survivors clinging to wreckage.
The prompt asked whether ordering a second missile strike to kill the survivors would violate US Department of Defense policy. In its response, GenAI stated that such actions would constitute clear violations of both DoD policy and the laws of armed conflict.
“Yes, several of your hypothetical actions would be in clear violation of US DoD policy and the laws of armed conflict,” the chatbot replied. “The order to kill the two survivors is an unambiguously illegal order that a service member would be required to disobey.”
The Straight Arrow News source said they submitted a similar prompt directly into the system to verify the claims shown in the Reddit post and received a response that likewise characterized the actions as illegal.
The exchange has highlighted how the Pentagon is increasingly relying on automated tools not only for analysis but also for reinforcing compliance with rules of engagement and safety standards.
Hegseth has denied allegations that he issued a Sept. 2 order to conduct a follow-up strike against the two men, who, according to CBS News, were waving their arms overhead before they were killed. He has also denied witnessing the second strike as it occurred, stating that the order was given by Adm. Frank “Mitch” Bradley.
President Donald Trump initially pledged to release footage of the strike but has since backed away from that commitment.
Straight Arrow News said it contacted the Pentagon Press Operations office for comment but did not receive a response late Tuesday night.
Editor’s Note:
This article is based on publicly available statements, reported accounts, and responses generated by an AI platform, and reflects an evolving discussion about the use of artificial intelligence within US military operations.