
Rep. Sam Liccardo Forces Vote on Pentagon’s Misguided AI Posture

March 4, 2026

Silicon Valley’s Rep. Sam Liccardo introduced an amendment to the Defense Production Act reauthorization that would prohibit Department of Defense retaliation against tech developers.

WASHINGTON, D.C. – Today, Congressman Sam Liccardo (CA-16) forced a committee-wide vote on whether Congress will stop the Department of Defense from retaliating against developers for instituting safeguards on high-risk technologies. His action comes on the heels of a dispute between Anthropic and the Pentagon over the AI company’s effort to set guardrails preventing its chatbot, Claude, from being used for mass surveillance of U.S. citizens or autonomous weapons.

After Anthropic raised concerns about Claude’s potential misuse, Defense Secretary Pete Hegseth threatened to invoke the Defense Production Act (DPA) to force the removal of those safeguards. Liccardo introduced his amendment during today’s hearing on the reauthorization of the DPA.

Liccardo’s amendment later failed on a party-line vote.

Full Transcript: 

Thank you, Mr. Chair. I move to strike the last word.

I appreciate the very good bipartisan work that has resulted in crafting this reauthorization of the Defense Production Act. Like our national defense, AI safety should not be a partisan issue.

According to a recent Gallup survey, by a ratio of 8 to 1, U.S. adults believe government should maintain rules for AI safety, even if it means developing AI capabilities more slowly. That’s 79% of Republicans and 89% of Democrats who urge prioritizing AI safety over other goals.

“Agentic misalignment” is not yet a household term, but it soon will be. In Silicon Valley, where I live, it’s on the mind of every AI researcher and engineer with whom I speak, including those who work at the largest hyperscalers. Even the most optimistic among them warn that misuse of AI could produce dystopian outcomes.

A $380 billion hyperscaler, Anthropic, has warned the Pentagon and the public about the potential misuse of its product for mass surveillance of U.S. citizens and for autonomous killing machines that could exceed human constraint. They seek reasonable guardrails. They believe so strongly in those guardrails that they are willing to walk away from a lucrative government contract without them.

And full disclosure: I am a Claude subscriber, though I can’t claim to have used it to create any homicidal bots.

Regardless, when the company that designs and builds the jet fighter tells us when to use the brakes, we should listen. Instead, the Pentagon’s bureaucrats and lawyers believe they know better. They think they can fly the plane without brakes.

Instead of listening, they are threatening.

They told Anthropic that if it sought guardrails, they would blacklist the company as a supply chain threat, preventing any other government agency from buying its software. Ironically, the Pentagon also invoked the Defense Production Act and threatened to deploy Anthropic’s software without paying the company a dime for the next six months.

So let’s be clear: the people who built a very complex technological tool seek guardrails to protect the American public from its misuse. They are not simply being ignored by the government; the government has a right to ignore them. They are not simply being passed over for another company; the Pentagon certainly has a right to do that.

They are being punished for seeking guardrails.

The Pentagon’s public response has been: don’t worry your pretty little heads. When we deploy AI tools, we’ll follow the law.

There is only one problem with the Pentagon’s approach: there is no law. The law is years behind the technology.

The American public eagerly awaits laws from this Congress or this administration providing reasonable safeguards for AI use. But nobody should hold their breath. The same American public has waited 30 years for Congress to enact a simple data privacy statute that every other industrialized nation on the planet has adopted.

The same American public has waited three decades for meaningful online protections for children, lacking even modest changes to Section 230 of the Communications Decency Act since 1996.

The only response from the majority in Congress or this administration has been to propose a moratorium on state laws that might provide AI guardrails—without any federal AI safety law to replace those state rules.

Nonetheless, the Pentagon persists in saying: don’t worry, we’ll follow the law.

To Secretary Hegseth, I say: please forgive the American public for demanding a more sober approach to AI.

Let’s also be clear about the power granted to the federal government by the Defense Production Act. It gives the Pentagon uniquely daunting authority to commandeer the private sector in service of national security. We should all be very wary of abuse of this expansive authority.

The Supreme Court expressed that wariness in 1952 when, in Youngstown Sheet & Tube Co. v. Sawyer, it struck down President Truman’s assertion of presidential authority to seize the nation’s steel mills during the Korean War.

This very committee has expressed its own wariness in this reauthorization bill by narrowing the scope of Title I authorities to declared national emergencies. That is the right thing to do.

If we believe in capitalism, then we should agree that we must constrain the federal government’s potential to abuse its power to dictate prices, production, or the paths of supply chains.

But there is another concern. We cannot build and deploy trustworthy AI in a climate of fear. We cannot have honest conversations about AI if people and companies are afraid to speak.

And we cannot protect the American public without doing so.

This amendment offers a narrow but essential intervention.

 
