A few weeks ago, the Pentagon asked Anthropic, the company behind AI assistant Claude, to modify an existing $200 million contract and remove two key guardrails: prohibitions on using its technology for domestic mass surveillance and for fully autonomous weapons. Anthropic refused, and the contract went to OpenAI instead.
This dispute has thrust a question most of us probably hadn’t thought much about into the spotlight: can AI actually do these things? And if so, how worried should we be?
The short answer, according to the experts I spoke to, is that this isn’t science fiction. It’s already here. But the picture is more complicated, and in some ways more troubling, than the killer robots we’re used to seeing on the big screen.
Mass surveillance is already happening
“Mass surveillance isn’t just viable, it already happens,” James Wilson, a global AI ethicist and author of Artificial Negligence, tells me. “Technologies such as Palantir and CCTV have been making this possible for years. It’s simply down to individual states whether they choose to do it.”
The US government’s PRISM program, exposed by Edward Snowden over a decade ago, was an early example of surveillance at massive scale.
“The advances in AI have simply made it easier to do this at scale,” Wilson says, “and our increasingly connected lives mean there are so many more data sources they can access, with or without people’s permission.”
The recent controversy around police use of Ring doorbell cameras and Flock licence plate readers after the Super Bowl is just the latest example.
This matters for ordinary people, not just political dissidents. Jeff Watkins, an AI consultant specializing in governance and security, tells me this kind of surveillance points to a pattern already visible in the UK.
“We have seen several recent news articles about people being misidentified by supermarket facial recognition systems, with the longstanding concern that these misidentifications can disproportionately affect women and ethnic minorities,” Watkins tells me.
The cumulative effect is a shift in how society works. “Being subject to the algorithmic use of surveillance technologies moves the dial towards a ‘suspicion by default’ society, where innocent parties, going about their everyday lives, could have their rights trampled by AI classification,” Watkins says.
Autonomous weapons are already here
The same is true of lethal autonomous weapons. “The first recorded use was by Turkey against a Libyan target using a Kargu drone in 2021,” says Wilson. Since then, the technology has moved fast. “The advances in AI have meant that this is now possible at a much larger swarm scale, as well as being incredibly cheap.”
But the core problem here is accuracy, and what inaccuracy means when the stakes are life and death. “Computer vision to facially recognize people is only 90% accurate at the best of times, and if the system uses generative AI, it will hallucinate, because that is a feature, not a bug, of the technology,” Wilson says.
The Israel Defense Forces’ AI targeting program, Lavender, which was used to identify suspected Hamas members, has since been acknowledged to have been wrong 10% of the time. Even the best large language models still hallucinate at a rate of 5-10%, according to Vectara’s hallucination benchmark leaderboard on Hugging Face. Ten percent may still sound small. But at the scale these systems operate, it isn’t: Lavender reportedly marked tens of thousands of people as suspects, so a 10% error rate would translate into thousands of wrongly identified targets.
You might think the answer is more human oversight. But that is exactly what some military applications are designed to reduce. “Removing human-in-the-loop determination of the target is therefore an ethical minefield,” Wilson says. “At a more basic level, removing human determination from the kill chain is removing any form of human dignity.”
It also removes responsibility, Watkins says. “If nobody is there to press the ‘fire’ button, who can be held accountable in the case of a loss of life, justified or otherwise? AI is not a legal person and cannot be held responsible itself.”
Should we be worried about Terminators?
A robot performing with Boston Dynamics CEO Robert Playter during the talk “Redefining robotics with Boston Dynamics” at Web Summit on November 12, 2025 in Lisbon, Portugal. (Image credit: Getty Images / Horacio Villalobos)
For anyone who grew up watching the Terminator films, recent robot videos from Boston Dynamics and Chinese tech companies like Xpeng probably feel uncomfortably familiar.
But Wilson, who has spent time with similar models, urges perspective. “Despite all the flashy, and very choreographed, robot videos that come out of China and the US, they are not quite there. They still need a lot of work to get them to a level where they could fully autonomously interact with our world.”
The more pressing concern, he says, is not humanoid robots. “I’m more worried about swarms of autonomous weapon drones. This technology is already here, and it’s cheap enough that it can be built en masse today, by literally anyone.”
But the broader warning comes from Watkins, and it extends well beyond the military context. “When organizations and governments hand off too much decision-making to flawed and immature systems that are not fully understood or explainable, without robust auditing, it can erode human rights and muddy the waters of accountability.”
The Anthropic standoff was less about one company’s contract and more about a question we’re all going to have to answer across the board: who decides how much we trust these systems, and who is accountable when they’re wrong?