Silicon Valley is debating if AI weapons should be allowed to decide to kill

Debra Massey
Last updated: October 12, 2024 11:35 am

In late September, Brandon Tseng, co-founder of Shield AI, reassured the public that weapons used by the U.S. would never be fully autonomous, meaning an AI algorithm would never make the ultimate decision to kill. “Congress doesn’t want that,” Tseng said confidently. “No one wants that.”

Contents
  • Silicon Valley’s Ethical Tug-of-War
  • Autonomous Weapons: A Strategic Necessity or a Moral Dilemma?
  • The War in Ukraine: A Testing Ground for Autonomous Weapons
  • Lobbying for AI Autonomy

But Tseng’s assertion may have been premature. Just five days later, Palmer Luckey, co-founder of Anduril, presented a more nuanced view on the topic. During a talk at Pepperdine University, Luckey expressed skepticism about the arguments against autonomous weapons. He challenged the typical rhetoric, saying, “Our adversaries use phrases that sound great in a soundbite, like ‘A robot should never decide who lives and dies.’ But where’s the moral high ground in a landmine that can’t distinguish between a school bus full of children and a Russian tank?”

While some interpreted Luckey’s comments as advocacy for AI-powered killing machines, his spokesperson, Shannon Prior, clarified that wasn’t the case. According to Prior, Luckey’s concern lies more with “bad people using bad AI,” rather than an endorsement of fully autonomous lethal systems.

Silicon Valley’s Ethical Tug-of-War

Historically, Silicon Valley’s tech elite have leaned toward caution when it comes to lethal autonomous systems. Take Luckey’s co-founder, Trae Stephens, for example. Last year, he remarked, “The technologies we’re building enable humans to make the right decisions. There’s always a responsible, accountable party in the loop for any lethal decision.” However, Anduril’s official stance appears more flexible: while a human should always be accountable, they may not always be required to make the final call in real time.

This ambiguity echoes the stance of the U.S. government. While the U.S. military doesn’t currently purchase fully autonomous lethal weapons, it also doesn’t explicitly ban their development or sale. This legal gray area leaves room for companies to innovate, while policymakers continue to debate the ethical and practical implications of removing human oversight from battlefield decisions.

Last year, the Department of Defense released updated guidelines for the integration of AI in military systems. These guidelines, while voluntary, have been adopted by many U.S. allies. They emphasize the need for top military officials to approve any use of autonomous weapons. Anduril, for its part, says it is committed to following these guidelines. But the conversation around the issue remains unresolved, with officials repeatedly stating that it’s “not the right time” to push for a binding international ban on autonomous lethal weapons.

Autonomous Weapons: A Strategic Necessity or a Moral Dilemma?

At a recent Hudson Institute event, Joe Lonsdale, co-founder of Palantir and investor in Anduril, pushed back against the binary framing of the debate. According to Lonsdale, the real question isn’t whether weapons should be fully autonomous or not, but rather how much autonomy should be allowed and under what conditions.

Lonsdale presented a hypothetical scenario: what if China fully embraces autonomous AI weapons while the U.S. insists that every shot fired by AI still requires human approval? Such a constraint, he suggested, could spell disaster for the U.S. in future conflicts. “A simple, top-down rule could destroy us in battle,” Lonsdale warned, urging policymakers to adopt a more nuanced understanding of the issue.

Lonsdale was quick to emphasize that it’s not up to tech companies to set these policies. “Defense tech companies don’t make the rules, and we don’t want to make the rules,” he said. “That’s the job of elected officials. But those officials need to educate themselves on the nuances in order to make informed decisions.” He advocated for a flexible approach that adjusts the level of AI autonomy based on specific battlefield needs and conditions.

The War in Ukraine: A Testing Ground for Autonomous Weapons

The debate over AI weaponry has taken on new urgency as the war in Ukraine unfolds. For defense tech companies, the conflict offers real-world combat data and an opportunity to test new technologies. While AI systems are currently integrated into military operations, humans still retain the final decision-making power over life and death.

But that could change. Ukrainian officials, eager to gain an advantage over Russian forces, have pushed for greater automation in their weapons systems. “We need maximum automation,” Mykhailo Fedorov, Ukraine’s Minister of Digital Transformation, told The New York Times. “These technologies are fundamental to our victory.”

This sentiment is shared by many in Silicon Valley and Washington, D.C., where the fear of falling behind adversaries like Russia or China is palpable. At a United Nations debate on AI arms last year, a Russian diplomat hinted that while many countries prioritize human control over AI systems, Russia’s priorities may be “somewhat different.”

The concern is that if U.S. adversaries deploy fully autonomous weapons, the U.S. might have no choice but to follow suit, setting off a new arms race in AI warfare. As Lonsdale pointed out during his Hudson Institute appearance, the tech sector feels an urgent responsibility to educate U.S. military leaders, Congress, and the Department of Defense about the strategic potential of AI in warfare — and the risks of falling behind.

Lobbying for AI Autonomy

Silicon Valley’s defense tech companies aren’t just making their case in the media — they’re taking it to Congress. Anduril and Palantir have spent more than $4 million lobbying lawmakers this year alone, according to OpenSecrets. Their goal: to ensure the U.S. remains at the cutting edge of AI and defense technologies, while shaping the policies that govern their use.

For now, the U.S. government continues to walk a fine line between ethical concerns and strategic necessity. While activists and human rights groups have long pushed for an international ban on fully autonomous lethal weapons, the war in Ukraine has shifted the narrative. As more battlefield data comes in, and as countries like Ukraine call for greater automation, the pressure to revisit the autonomy debate is only growing.

At its core, the question isn’t just about technology — it’s about values. How much should we entrust AI with lethal decisions, and what role should humans play in life-or-death scenarios on the battlefield? As Silicon Valley and defense contractors continue to push for more AI integration, the answer may be more complex, and more consequential, than we ever imagined.
