In late September, Brandon Tseng, co-founder of Shield AI, reassured the public that weapons used by the U.S. would never be fully autonomous, meaning an AI algorithm would never make the ultimate decision to kill. “Congress doesn’t want that,” Tseng said confidently. “No one wants that.”
But Tseng may have spoken too soon. Just five days later, Palmer Luckey, co-founder of Anduril, offered a more nuanced view on the topic. During a talk at Pepperdine University, Luckey expressed skepticism about the standard arguments against autonomous weapons. He challenged the typical rhetoric, saying, “Our adversaries use phrases that sound great in a soundbite, like ‘A robot should never decide who lives and dies.’ But where’s the moral high ground in a landmine that can’t distinguish between a school bus full of children and a Russian tank?”
While some interpreted Luckey’s comments as advocacy for AI-powered killing machines, his spokesperson, Shannon Prior, clarified that wasn’t the case. According to Prior, Luckey was voicing concern about “bad people using bad AI,” not endorsing fully autonomous lethal systems.
Silicon Valley’s Ethical Tug-of-War
Historically, Silicon Valley’s tech elite have leaned toward caution when it comes to lethal autonomous systems. Take Luckey’s co-founder, Trae Stephens, for example. Last year, he remarked, “The technologies we’re building enable humans to make the right decisions. There’s always a responsible, accountable party in the loop for any lethal decision.” However, Anduril’s official stance appears more flexible: a human must always be accountable for a lethal decision, but that person may not always need to make the final call in real time.
This ambiguity echoes the stance of the U.S. government. While the U.S. military doesn’t currently purchase fully autonomous lethal weapons, it also doesn’t explicitly ban their development or sale. This legal gray area leaves room for companies to innovate, while policymakers continue to debate the ethical and practical implications of removing human oversight from battlefield decisions.
Last year, the Department of Defense released updated guidelines for the use of AI in military systems. These guidelines, while voluntary, have been endorsed by many U.S. allies, and they call for senior military officials to sign off on any new autonomous weapons system. Anduril, for its part, says it is committed to following them. But the conversation remains unresolved, with officials repeatedly stating that it’s “not the right time” to push for a binding international ban on autonomous lethal weapons.
Autonomous Weapons: A Strategic Necessity or a Moral Dilemma?
At a recent Hudson Institute event, Joe Lonsdale, co-founder of Palantir and investor in Anduril, pushed back against the binary framing of the debate. According to Lonsdale, the real question isn’t whether weapons should be fully autonomous or not, but rather how much autonomy should be allowed and under what conditions.
Lonsdale presented a hypothetical scenario: what if China fully embraces autonomous AI weapons while the U.S. insists that every shot fired by AI still requires human approval? The answer, he suggested, could spell disaster for the U.S. in future conflicts. “A simple, top-down rule could destroy us in battle,” Lonsdale warned, urging policymakers to adopt a more nuanced understanding of the issue.
Lonsdale was quick to emphasize that it’s not up to tech companies to set these policies. “Defense tech companies don’t make the rules, and we don’t want to make the rules,” he said. “That’s the job of elected officials. But those officials need to educate themselves on the nuances in order to make informed decisions.” He advocated for a flexible approach that adjusts the level of AI autonomy based on specific battlefield needs and conditions.
The War in Ukraine: A Testing Ground for Autonomous Weapons
The debate over AI weaponry has taken on new urgency as the war in Ukraine unfolds. For defense tech companies, the conflict offers real-world combat data and an opportunity to test new technologies. While AI systems are currently integrated into military operations, humans still retain the final decision-making power over life and death.
But that could change. Ukrainian officials, eager to gain an advantage over Russian forces, have pushed for greater automation in their weapons systems. “We need maximum automation,” Mykhailo Fedorov, Ukraine’s Minister of Digital Transformation, told The New York Times. “These technologies are fundamental to our victory.”
This sentiment is shared by many in Silicon Valley and Washington, D.C., where the fear of falling behind adversaries like Russia or China is palpable. At a United Nations debate on AI arms last year, a Russian diplomat hinted that while many countries prioritize human control over AI systems, Russia’s priorities may be “somewhat different.”
The concern is that if U.S. adversaries deploy fully autonomous weapons, the U.S. might have no choice but to follow suit, setting off a new arms race in AI warfare. As Lonsdale pointed out during his Hudson Institute appearance, the tech sector feels an urgent responsibility to educate U.S. military leaders, Congress, and the Department of Defense about the strategic potential of AI in warfare — and the risks of falling behind.
Lobbying for AI Autonomy
Silicon Valley’s defense tech companies aren’t just making their case in the media — they’re taking it to Congress. Anduril and Palantir have spent more than $4 million lobbying lawmakers this year alone, according to OpenSecrets. Their goal: to ensure the U.S. remains at the cutting edge of AI and defense technologies, while shaping the policies that govern their use.
For now, the U.S. government continues to walk a fine line between ethical concerns and strategic necessity. While activists and human rights groups have long pushed for an international ban on fully autonomous lethal weapons, the war in Ukraine has shifted the narrative. As more battlefield data comes in, and as countries like Ukraine call for greater automation, the pressure to revisit the autonomy debate is only growing.
At its core, the question isn’t just about technology — it’s about values. How much should we entrust AI with lethal decisions, and what role should humans play in life-or-death scenarios on the battlefield? As Silicon Valley and defense contractors continue to push for more AI integration, the answer may be more complex, and more consequential, than we ever imagined.