Easy answer: No.
Under no circumstance should they be allowed to make that determination
PilotKnob on
It’s covered in Asimov’s Three Laws, FFS.
muchaschicas on
Fucking tech-bros
GadreelsSword on
It’s okay, AI will only kill people with six fingers.
8-bit_Goat on
Do you want terminators? Because that’s how you get terminators.
barrystrawbridgess on
T-800. Come with me if you want to live.
iDontRememberCorn on
Yes, let’s let the sociopaths set the standard on what the machines can do.
HAHA_goats on
Unless they can eliminate false positives, the answer has to be no. A killed person can’t be dug out of the spam folder.
To put a stop to tech bros trying to implement dumb shit like this, we need a body of law that doesn’t pretend that nobody is accountable as long as software does things. The implementers, the vendors, and the users all have to carry accountability. And it needs to be completely untouchable by any EULA.
>In the past, Silicon Valley has erred on the side of caution. Take it from Luckey’s co-founder, Trae Stephens. “I think the technologies that we’re building are making it possible for humans to make the right decisions about these things,” he told Kara Swisher last year. “So that there is an accountable, responsible party in the loop for all decisions that could involve lethality, obviously.”
That suggests dumping 100% of the accountability onto users. That isn’t compatible with hiding source code and algorithms as “trade secrets”. It prevents any user from ever being able to fully weigh the software’s contribution to the decision-making process. If they want trade secrets and profits, accountability has to go along with that in all cases.
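To put rough numbers on the false-positive point above, here’s a back-of-the-envelope sketch in Python. All figures are made up purely for illustration; the point is just that “mostly accurate” doesn’t cut it when the error is irreversible:

```python
# Back-of-the-envelope: even a "good" classifier produces a steady
# stream of false positives at scale. Both numbers below are invented
# for illustration only.

engagements_per_year = 10_000   # hypothetical autonomous engagements
false_positive_rate = 0.001     # a 99.9% "accurate" target discriminator

wrongful_kills = engagements_per_year * false_positive_rate
print(f"Expected wrongful engagements per year: {wrongful_kills:.0f}")
# -> 10. And none of them can be dug out of the spam folder.
```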
Admirable_Nothing on
Then what happens when they decide they want to kill the good guys rather than the bad guys? That is a thorny problem that needs to be solved. Read some Asimov; he solved it nicely with the Three Laws of Robotics.
habu-sr71 on
Oh, so the deep thinkers in their ivory towers are debating this. Listen, asshats, the horse is already out of the barn because your tech-bro overlords care about profit profit profit. Not lives, not morality, not the claptrap that you spew out to each other as you engage in intellectual masturbatory exercises.
You wanna do something? Maybe stand up with some backbone to your managers and on up the chain and start making more noise and putting your money where your mouth is.
n3w4cc01_1nt on
currently…
flight assist and evasive maneuvers? yeah
kill? no
the tech isn’t refined enough and they’d have to run endless tests to make sure it’s accurate.
that guy has some interesting arguments and his vr kill mask art piece was pretty damn good conceptual art.
his step bro is sus af though
nakabra on
The simple fact that they are discussing this “idea” just shows how fucked we are…
habu-sr71 on
Gee whiz Beav, isn’t this the same sub that had an article about autonomous attack drones already doing “work” in Ukraine? A little company called Anduril out of SoCal?
Looks like the debaters are pointlessly arguing for fun and for their own substantial paychecks, as that type is wont to do. What a fucking timeline.
icebeat on
And they’re going to ask this weirdo? Of course he’s family of Matt Gaetz.
auburnradish on
That decision will be made by the military because they and their contractors will develop their own AI systems, not Silicon Valley.
dogfacedwereman on
cool. I am glad they are in charge of making that decision.
Longjumping_Sock1797 on
So is this how humanity is going to end? So many possibilities for us ending ourselves.
sdewitt108 on
Palmer Luckey is deciding our fate? We are fucked!
Nouseriously on
The rest of society here: we’re not letting you decide
r0n1n2021 on
How are they even involved?
ArnieCunninghaam on
I’m not listening to philosophical debates from anyone with a soul patch.
JaketheSnake319 on
Did we not learn anything from all the Terminator and Matrix movies?
sten45 on
Those sociopaths should not be the ones who make decisions about this.
CrzyWrldOfArthurRead on
They already are. There are missiles and drones that use AI to determine where their target is even when GPS is denied. They look at heat signatures and other sensor data to get their bearings and head for the (hopefully) correct target. They make a decision when presented with multiple possibilities or if the target has moved.
It’s not new technology at all.
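For illustration only, and not modeled on any real system: the kind of decision described above reduces to a nearest-match over noisy sensor features. A minimal invented sketch, where every feature value is made up:

```python
import math

# Hypothetical briefed target profile and the candidates a seeker
# might see when GPS is jammed. All numbers are invented.
reference = {"heat": 0.9, "size": 0.7, "speed": 0.4}

candidates = [
    {"id": "A", "heat": 0.85, "size": 0.72, "speed": 0.35},
    {"id": "B", "heat": 0.30, "size": 0.65, "speed": 0.10},
]

def distance(candidate: dict) -> float:
    """Euclidean distance between a candidate's signature and the reference."""
    return math.sqrt(sum((candidate[k] - reference[k]) ** 2 for k in reference))

# The "decision" is just picking the closest match over noisy sensor data.
best = min(candidates, key=distance)
print(f"Selected target: {best['id']}")  # -> A
```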
entitysix on
Why is “Silicon Valley” debating anything?
SkeetySpeedy on
“Silicon Valley” is not a person or people
Who is debating? Who specifically by name is talking about this in real conversations with other real people?
happy_snowy_owl on
This question isn’t actually what it appears to be.
If a commander launches a guided missile, he does so knowing that there’s some non-zero chance that the guidance system fails and it will not hit its target. There’s also a non-zero chance that the targeting information is faulty. These may result in unintended collateral damage or casualties.
This is no different than launching a weapons system governed by AI. The commander accepts the risk in terms of probability of success prior to launch, and determines whether it meets his minimum threshold. AI has the potential to reduce inaccuracies introduced by current kill chains.
In either case, there’s still an accountable human at the end of the kill chain, which alleviates most people’s moral and ethical qualms about the whole thing.
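To make the “accepts the risk in terms of probability” framing concrete, a toy sketch; the function name and the probability thresholds are invented for illustration:

```python
# Illustrative only: the commander's pre-launch risk acceptance framed
# as a simple threshold check. Thresholds and probabilities are invented.

def authorize_launch(p_success: float, p_collateral: float,
                     min_success: float = 0.90,
                     max_collateral: float = 0.05) -> bool:
    """A human accepts or rejects the engagement before launch;
    whatever guidance runs afterward, accountability attaches here."""
    return p_success >= min_success and p_collateral <= max_collateral

print(authorize_launch(p_success=0.93, p_collateral=0.02))  # True: launch
print(authorize_launch(p_success=0.93, p_collateral=0.10))  # False: abort
```

The go/no-go check, however the guidance behaves downstream, remains a human call made before launch.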
AdminIsPassword on
Is it more profitable for AI to kill or not to kill?
One_Okra_2487 on
Ahh yes, the military-industrial complex is military-industrial complexing. It’s only a matter of time before FAANG produces warfare hardware (they already do software).
danish_elite on
I mean, this will lead to a society like Gundam Wing. We will unethically mentally torture five youths into horrific super-soldier tactics, taking away their humanity, to fight autonomous robots and restore the Sanc Kingdom. The prophecy has been written.
SeriousMonkey2019 on
Short answer: no
Long answer: The decision shouldn’t be up to Silicon Valley
Jamizon1 on
The fact that this is even being debated just shows how fucking cracked these tech billionaires are. The answer to this question, for the survival of humanity, should always and unequivocally be… FUCKING HELL NO!!!
IDIOTS!!
Why is there not a GLOBAL UNIVERSAL ARTIFICIAL INTELLIGENCE CODE OF ETHICS?
These mother fuckers have lost their GAWD DAMNED MINDS!!
CrustyBappen on
Silicon Valley will say no, but does that stop enemies of the West from doing it?
Unfortunately, the cat is out of the bag. Silicon Valley doesn’t make AI murder drones; the military-industrial complex does that.
jmorley14 on
Isn’t that one of the plot lines in the classic sci-fi novel Don’t Create the Torment Nexus?
sleestakninja on
No. The answer is no. Could someone please tell them?
DillyDoobie on
What Silicon Valley thinks doesn’t fucking matter in the slightest. All it takes is one random person to give AI that capability, and then everyone will be doing it. Seems like it’s only a matter of time before it’s mainstream.
vomitHatSteve on
“And my point to them is, where’s the moral high ground in a landmine that can’t tell the difference between a school bus full of kids and a Russian tank?”
I’m sorry?! That’s your justification in paragraph 2!?
So the answer is: no, of course. If you build autonomous kill bots that kill humans, you are a monster. You should be executed and go straight to hell.
Now, autonomous bots that kill other robots? That’s fine. War is mostly about economy vs economy anyway
Coises on
It won’t be up to the techies. It will be up to the military.
And once the military in one country concludes that autonomous weapons are the way to go because they can do more damage to the enemy faster — perhaps before the enemy has time to mount a response or even a defense — there will be little choice but for others to follow suit.
Nuclear all over again. Powerful nation-states will have them, while agreeing not to use them, and we all hope that agreement doesn’t break down.
Ark_Legend on
We’re so fucked
neonsnakemoon on
We can watch ourselves Great Filter ourselves out
Lootboxboy on
I hope when they build the Torment Nexus from the hit sci-fi novel *Don’t Build The Torment Nexus* they remember to paint flames on it. Flames make it go faster.
GroundbreakingGur930 on
SKYNET is the only one brave enough to ask the REAL questions.
gnapster on
I was hoping to die at a fun 100 years old before the Terminator came true. God damn.
Minister_for_Magic on
It’s fucking pathetic that media can’t be bothered to directly call this out for what it is: Peter Thiel and his cabal of disciples all doing dystopian shit while besmirching the good names from Tolkien’s works.
Once upon a time, VCs had a “no guns, no drugs” policy for investments. Now, Thiel has managed to repackage financing weapons manufacturers as “rebuilding American industry” and many big funds are embracing it.