If there is one thing Trump actually understands, it’s the utility of the “firehose” championed by Steve Bannon–the tactic of spraying the country with so much excrement each day that the body politic misses behaviors that would, in ordinary times, be scandalous.
Permit me an example.
While we have been distracted by “little things” like an illegal war on Iran, Pam Bondi’s transparent efforts to keep the lid on Trump’s multiple and damning appearances in the Epstein files, and the re-emergence of measles thanks to RFK, Jr.’s war on medical science, the goofball who is currently in charge of the Pentagon has been in a standoff with Anthropic, a tech company opposed to unregulated and unethical use of its AI product, Claude.
As the Atlantic has reported, Secretary of Defense Pete Hegseth issued an ultimatum to Anthropic’s CEO, Dario Amodei. He ordered the company to strip the ethical guardrails from its AI models “or face the full weight of the state.” Hegseth accompanied that order with a threat that, unless Anthropic allowed the Pentagon “all lawful uses” of its Claude models, he would designate Anthropic “a supply-chain risk,” effectively blacklisting the company from doing business with “any entity that touches the Department of Defense.”
To his eternal credit, Amodei refused, explaining that while he believed “deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries,” there is a narrow set of cases in which AI can “undermine, rather than defend, democratic values.” He concluded that the Pentagon’s “threats do not change our position: we cannot in good conscience accede to their request.”
As the linked article argues, the company’s stance represents a principled objection to the use of its AI for mass surveillance.
It is not opposed to autonomous weapons per se and has already carved out exemptions for missile defense and cyber operations. The company’s hesitation regarding autonomy is technical: Large language models are simply not yet reliable enough to operate without a human in the loop. Pushing them too far, too quickly, invites a mistake that could prove disastrous. Anthropic is asking for an exclusion on autonomous weapons not out of an ideological refusal to fight, but to allow for the research and development necessary to make such systems safe.
People in the Trump administration, however, are impervious to both logic and ethics. Not long after the Atlantic published its article about the dispute, the Washington Post reported that Anthropic had been cut off from all government contracts. The Post reported that the action “shook the tech industry” and hardened the political and cultural battle lines across Silicon Valley over military use of artificial intelligence.
As the article noted, Trump has now put all of Silicon Valley on notice: if tech companies want to do business with the Pentagon they should be prepared to accede to any and all administration policies and hand over control of how their technology is used.
Less ethical rivals of Anthropic (including–surprise!–Elon Musk) have rushed in to pledge that their own companies would not question Pentagon policies, styling themselves as “loyal patriots.”
It isn’t surprising that Trump’s transactional administration would favor companies willing to trade their ethical concerns for lucrative contracts. Last fall, the administration characterized Anthropic’s ethical concerns as attempts to manipulate the government with “fear mongering” about AI technology. Media outlets reported that the White House was “displeased” when Anthropic raised ethical objections to the ways in which the administration wanted to use its technology–especially its intent to use the company’s product for surveillance.
The Atlantic article called this ethical quandary over domestic surveillance an “unbridgeable divide.”
Under an administration that invoked the Insurrection Act or sought to map domestic dissent, the Pentagon’s demand for “all lawful uses” of Anthropic’s models could become a skeleton key. Amodei articulated this danger in a recent interview with Ross Douthat, noting that, although it isn’t illegal to record conversations in public spaces, the sheer scale of AI changes the nature of the act. As Amodei put it, AI could transcribe speech and correlate it in a way that would not only identify one member of the opposition but “make a map of all 100 million. And so, are you going to make a mockery of the Fourth Amendment by the technology finding technical ways around it?”
The answer to that question is obvious. The fascist regime that currently controls America’s federal government–and the Silicon Valley “bros” who are rushing to ignore those pesky ethical concerns–will be happy to make a mockery of the Fourth Amendment.