45 Comments

  1. Unless they can eliminate false positives, the answer has to be no. A killed person can’t be dug out of the spam folder.

    To put a stop to tech bros trying to implement dumb shit like this, we need a body of law that doesn’t pretend that nobody is accountable as long as software does things. The implementers, the vendors, and the users all have to carry accountability. And it needs to be completely untouchable by any EULA.

    >In the past, Silicon Valley has erred on the side of caution. Take it from Luckey’s co-founder, Trae Stephens. “I think the technologies that we’re building are making it possible for humans to make the right decisions about these things,” he told Kara Swisher last year. “So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously.”

    That suggests dumping 100% of the accountability onto users. That isn’t compatible with hiding source code and algorithms as “trade secrets”: it prevents any user from ever fully weighing the software’s contribution to the decision-making process. If they want trade secrets and profits, accountability has to go along with them in all cases.

  2. Admirable_Nothing on

    Then what happens when they decide they want to kill the good guys rather than the bad guys? That is a thorny problem that needs to be solved. Read some Asimov; he solved it nicely with the Three Laws of Robotics.

  3. Oh, so the deep thinkers in their ivory towers are debating this. Listen, asshats, the horse is already out of the barn because your tech bro overlords care about profit profit profit. Not lives, not morality, not the claptrap that you spew at each other as you engage in intellectual masturbatory exercises.

    You wanna do something? Maybe stand up with some backbone to your managers and on up the chain and start making more noise and putting your money where your mouth is.

  4. currently…

    flight assist and evasive maneuvers? yeah

    kill? no

    the tech isn’t refined enough and they’d have to run endless tests to make sure it’s accurate.

    that guy has some interesting arguments and his vr kill mask art piece was pretty damn good conceptual art.

    his step bro is sus af though

  5. Gee whiz Beav, isn’t this the same sub that had an article about autonomous attack drones already doing “work” in Ukraine? A little company called Anduril out of SoCal?

    Looks like the debaters are pointlessly arguing for fun and for their own substantial paychecks, as that type is wont to do. What a fucking timeline.

  6. That decision will be made by the military because they and their contractors will develop their own AI systems, not Silicon Valley.

  7. Longjumping_Sock1797 on

    So is this how humanity is going to end? So many possibilities for us ending ourselves.

  8. CrzyWrldOfArthurRead on

    They already are: there are missiles and drones that use AI to determine where their target is even when GPS is denied. They look at heat signatures and other sensor data to get their bearings and head for the (hopefully) correct target. They make a decision when presented with multiple possibilities or if the target has moved.

    It’s not new technology at all.
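
    Purely as illustration (the fields, weights, and thresholds below are invented, not any real weapon’s logic), the “pick the most target-like candidate from multi-sensor data” step that comment describes might be sketched like this:

    ```python
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        heat: float  # normalized thermal signature, 0..1 (hypothetical)
        x: float     # estimated position (km) from onboard dead reckoning
        y: float

    # Hypothetical briefed profile for the intended target
    EXPECTED_HEAT = 0.8
    EXPECTED_POS = (10.0, 4.0)

    def score(c: Candidate) -> float:
        """Lower is better: weighted error in (heat, position) space."""
        heat_err = abs(c.heat - EXPECTED_HEAT)
        pos_err = ((c.x - EXPECTED_POS[0]) ** 2
                   + (c.y - EXPECTED_POS[1]) ** 2) ** 0.5
        return 2.0 * heat_err + pos_err  # weights are made up for the toy

    def choose(candidates: list[Candidate], margin: float = 0.5):
        """Pick the best-scoring candidate; return None if it's too close to call."""
        ranked = sorted(candidates, key=score)
        if len(ranked) > 1 and score(ranked[1]) - score(ranked[0]) < margin:
            return None  # ambiguous: the "(hopefully) correct target" problem
        return ranked[0] if ranked else None
    ```

    The ambiguity check is the whole debate in miniature: what the system does when `choose` returns None is a policy decision, not a technical one.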

  9. “Silicon Valley” is not a person or people

    Who is debating? Who specifically by name is talking about this in real conversations with other real people?

  10. happy_snowy_owl on

    This question isn’t actually what it appears to be.

    If a commander launches a guided missile, he does so knowing that there’s some non-zero chance that the guidance system fails and it will not hit its target. There’s also a non-zero chance that the targeting information is faulty. These may result in unintended collateral damage or casualties.

    This is no different than launching a weapons system governed by AI. The commander accepts the risk in terms of probability of success prior to launch, and determines whether it meets his minimum threshold. AI has the potential to reduce inaccuracies introduced by current kill chains.

    In either case, there’s still an accountable human at the end of the kill chain, which alleviates most people’s moral and ethical qualms about the whole thing.
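
    A back-of-the-envelope version of that risk calculus, with invented numbers (and assuming the two failure modes are independent):

    ```python
    # Invented probabilities, just to make the structure of the decision concrete.
    p_guidance_ok = 0.97   # chance the guidance system performs as designed
    p_targeting_ok = 0.95  # chance the targeting information is correct
    threshold = 0.90       # commander's minimum acceptable probability of success

    # Assuming independent failure modes, success requires both to hold:
    p_success = p_guidance_ok * p_targeting_ok
    print(f"P(success) = {p_success:.3f}")  # 0.922

    if p_success >= threshold:
        print("Risk accepted: the human at the end of the kill chain authorizes launch.")
    else:
        print("Below threshold: hold fire.")
    ```

    An AI-governed system just changes the numbers in that product; the structure of the acceptance decision, and the accountable human making it, stays the same.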

  11. Ahh yes the military industrial complex is military industrial complexing. It’s only a matter of time before FAANG produces warfare hardware (they already do software)

  12. I mean, this will lead to a society like Gundam Wing. We will unethically mentally torture 5 youths into horrific super-soldier tactics, taking away their humanity, to fight autonomous robots and restore the Sanc Kingdom. The prophecy has been written.

  13. The fact that this is even being debated just shows how fucking cracked these tech billionaires are. The answer to this question, for the survival of humanity, should always and unequivocally be… FUCKING HELL NO!!!

    IDIOTS!!

    Why is there not a GLOBAL UNIVERSAL ARTIFICIAL INTELLIGENCE CODE OF ETHICS?

    These mother fuckers have lost their GAWD DAMNED MINDS!!

  14. Silicon Valley will say no, but does that stop enemies of the West from doing it?

    Unfortunately the cat is out of the bag. Silicon Valley doesn’t make AI murder drones; the military industrial complex does that.

  15. Isn’t that one of the plot lines in the classic sci-fi novel Don’t Create the Torment Nexus?

  16. What Silicon Valley thinks doesn’t fucking matter in the slightest. All it takes is one random person to give AI that capability, and then everyone will be doing it. Seems like it’s only a matter of time before it’s mainstream.

  17. “And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?”

    I’m sorry?! That’s your justification in paragraph 2!?

    So the answer is: no, of course. If you build autonomous kill bots that kill humans, you are a monster. You should be executed and go straight to hell.

    Now, autonomous bots that kill other robots? That’s fine. War is mostly about economy vs economy anyway

  18. It won’t be up to the techies. It will be up to the military.

    And once the military in one country concludes that autonomous weapons are the way to go because they can do more damage to the enemy faster — perhaps before the enemy has time to mount a response or even a defense — there will be little choice but for others to follow suit.

    Nuclear all over again. Powerful nation-states will have them, while agreeing not to use them, and we all hope that agreement doesn’t break down.

  19. I hope when they build the Torment Nexus from the hit sci-fi novel *Don’t Build The Torment Nexus* they remember to paint flames on it. Flames make it go faster.

  20. Minister_for_Magic on

    It’s fucking pathetic that media can’t be bothered to directly call this out for what it is: Peter Thiel and his cabal of disciples all doing dystopian shit while besmirching the good names from Tolkien’s works.

    Once upon a time, VCs had a “no guns, no drugs” policy for investments. Now, Thiel has managed to repackage financing weapons manufacturers as “rebuilding American industry,” and many big funds are embracing it.