A heated debate is unfolding in Silicon Valley over whether artificial intelligence (AI) should have the power to make life-and-death decisions on the battlefield. Defense tech companies, government officials, and human rights advocates are weighing the ethical and strategic implications of AI-powered weapons, with opinions sharply divided.
Brandon Tseng, co-founder of Shield AI, firmly stated in September that the United States would never allow fully autonomous weapons—where AI decides whether to kill. “Congress doesn’t want that,” Tseng told TechCrunch. “No one wants that.” However, this view was quickly challenged.
Just days later, Anduril co-founder Palmer Luckey expressed a different perspective, showing openness to autonomous weapons during a talk at Pepperdine University. He raised concerns about the moral inconsistencies in current warfare technology, such as landmines. “Where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?” Luckey asked, highlighting the limitations of traditional weaponry compared to AI’s potential.
Luckey’s comments fueled the debate, but a spokesperson for Anduril clarified that he didn’t suggest robots should independently decide to kill. Instead, he was concerned about “bad people using bad AI.” This statement highlights the complex ethical issues surrounding autonomous weapons systems.
Differing Views on AI and Lethal Weapons
Trae Stephens, another co-founder of Anduril, struck a more measured tone last year. “The technologies we’re building are making it possible for humans to make the right decisions,” he explained. Stephens emphasized that there should always be “an accountable, responsible party in the loop for all decisions that could involve lethality.” However, this doesn’t necessarily mean a human must always pull the trigger.
The U.S. government’s position on the matter is also ambiguous. While the military doesn’t currently purchase fully autonomous weapons, it hasn’t barred companies from developing or selling them. Some weapons, like landmines and missiles, already operate with a degree of autonomy, but these are considered different from systems that can identify and attack targets without human intervention.
In 2023, the U.S. updated its guidelines for AI safety in the military, requiring top officials to approve new autonomous weapons. Yet, these guidelines are voluntary, and officials have repeatedly stated that it’s “not the right time” to impose a binding ban on autonomous lethal technology.
A Growing Divide in Silicon Valley
The tech community remains divided on how much autonomy AI weapons should have. Palantir co-founder and Anduril investor Joe Lonsdale argued against oversimplifying the issue, saying it shouldn’t be a yes-or-no question. At an event hosted by the Hudson Institute, Lonsdale explained that while policymakers might want clear rules, the reality of warfare often requires flexibility.
“Imagine China embraces AI weapons while the U.S. still has to press the button every time a weapon fires,” Lonsdale said. He stressed the need for policymakers to understand the complexity of the issue before making decisions, cautioning that the U.S. “could destroy ourselves in battle” if it imposed a “stupid top-down rule” without understanding the nuances.
Lonsdale also clarified that tech companies like Palantir and Anduril don’t want to set policy. “That’s the job of elected officials,” he said, emphasizing the importance of educating policymakers on the potential of AI in warfare.
Ethical Concerns and International Resistance
Human rights groups have long campaigned for international bans on fully autonomous weapons, but the U.S. has resisted signing such agreements. Some believe the war in Ukraine has shifted the conversation, providing a real-world testing ground for defense tech companies. Ukrainian officials, seeking an edge over Russia, have called for greater automation in their weapons. “We need maximum automation,” said Mykhailo Fedorov, Ukraine’s minister of digital transformation, in an interview with *The New York Times*. “These technologies are fundamental to our victory.”
The concern that adversaries like China and Russia might deploy fully autonomous weapons before the U.S. is driving much of the debate. During a United Nations discussion on AI arms, a Russian diplomat was cryptic, saying, “We understand that for many delegations the priority is human control. For the Russian Federation, the priorities are somewhat different.”
Lobbying Efforts and Future Implications
As the debate continues, companies like Anduril and Palantir are stepping up their lobbying efforts to ensure their voices are heard. According to OpenSecrets, the two companies have spent a combined total of more than $4 million on lobbying this year, urging Congress to consider the benefits of AI in military applications. Lonsdale and Luckey believe AI could give the U.S. an advantage over rivals like China, but many questions remain about how to ensure its responsible use.
While the idea of AI making battlefield decisions is still controversial, the pressure to integrate more automation into military systems is growing. Whether policymakers decide to embrace fully autonomous weapons or maintain human oversight, the debate in Silicon Valley is far from over, and the outcome could shape the future of warfare.