The Three Laws of Robotics
Title text: In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.
This comic explores alternative orderings of science-fiction author Isaac Asimov's famous Three Laws of Robotics. These laws form the basis of a number of Asimov's works of fiction, most famously I, Robot. The comic answers the generally unasked question: "Why are the laws in that order?"
The joke is that every alternative ordering of the three laws results in a ridiculous world: two of the five alternatives are designated orange (pretty bad) and three are designated red ("killbot hellscape").
- Ordering #1
- This is Asimov's original ordering, and the only one of the six that the comic presents as safe.
- Ordering #2
- The robots value their existence over their job, which would make many of them much less functional. The silliness of this is portrayed in the accompanying image, where the robot laughs at the idea of doing what it was clearly built to do (explore Mars) because of the risk. The personification is reinforced by the robot being switched on on Earth and given its orders by the fleshy human known as Megan, and it is humorous precisely because the robot is so very non-human.
- Ordering #3
- This ordering puts obeying orders above not harming humans, which means anyone could send robots on a killing spree, resulting in a "killbot hellscape". Humour is derived from the superlative nature of the phrase "Killbot Hellscape", as well as from its over-the-top accompanying image, which shows multiple mushroom clouds (not necessarily nuclear) and, apparently, no surviving humans, only robots.
- Ordering #4
- This ordering results in much the same hellscape; the only difference is that robots would also be willing to kill humans to protect themselves.
- Ordering #5
- This ordering would result in an unpleasant world, though not a full hellscape: robots would not only disobey orders to protect themselves, but would also kill if necessary to do so. The absurdity is further demonstrated by the very non-human robot happily doing repetitive, mundane tasks but then threatening its user, the terrified Cueball.
- Ordering #6
- The last ordering also results in a hellscape: robots would not only kill in self-defense but would also go on killing sprees if ordered to, as long as doing so did not put them at risk.
The title text builds on ordering #5 by noting that anyone who tells their self-driving car to drive to a dealership, presumably to trade it in, could be locked inside and starved to death, despite a trade-in (currently) being a standard, mundane, and (mostly) risk-free activity.
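The pattern behind these verdicts can be made concrete: the three "killbot hellscape" orderings are exactly the ones that rank obedience above harmlessness, a point also raised in the discussion below. Here is a minimal Python sketch (the rule strings are illustrative placeholders, not text from the comic) that enumerates all six orderings and applies that single test:

```python
from itertools import permutations

# Asimov's three laws, reduced to illustrative one-line rules
# (these strings are placeholders, not quotes from the comic).
LAWS = ("don't harm humans", "obey orders", "protect yourself")

for ordering in permutations(LAWS):
    # An ordering is survivable only if harmlessness outranks obedience;
    # otherwise a robot can simply be ordered to kill.
    safe = ordering.index("don't harm humans") < ordering.index("obey orders")
    verdict = "survivable" if safe else "killbot hellscape"
    print(" > ".join(ordering), "->", verdict)
```

The three orderings this sketch flags as hellscapes correspond to #3, #4, and #6 in the comic; the survivable ones are #1, #2, and #5 (the original balance and the two merely unpleasant worlds).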
Discussion
Relevant Computerphile 126.96.36.199 (talk) (please sign your comments with ~~~~)
I think the second one would also create the "best" robots i.e. ones that have the same level of "free will" as humans do, but won't end up with the robot uprising. X3International Space Station (talk) 09:37, 7 December 2015 (UTC)
- Scientists are actually already working on such a robot! I've seen a video where they command a robot to do a number of things, such as sit down, stand up, and walk forward. It refuses to do the last because it is near the edge of a table, until it is assured by the person giving the commands that he will catch it. Here's a link. 188.8.131.52 18:21, 7 December 2015 (UTC)
The second ordering was actually covered in a story by Asimov, where a strengthened third law caused a robot to run circles around a hazard at a distance that maintained an equilibrium between not getting destroyed and obeying orders. More here: https://en.wikipedia.org/wiki/Runaround_(story) Gearóid (talk) 09:45, 7 December 2015 (UTC)
The explanation itself seems pretty close to complete. I'll leave others to judge if the tag is ready to be removed though. Halfhat (talk) 12:20, 7 December 2015 (UTC)
Technically, in the world we live in, robots barely follow even ONE law - obey orders. No one has ever tried to build a robot programmed to never harm a human, because such programming would be ridiculously complex. Sure, most robots are built with failsafes, but nothing nearly as effective as Asimov's law, which causes permanent damage to a robot's brain when it fails to protect humans. Meanwhile, a lot of effort is spent on making robots follow only the orders of authorized people, whereas Asimov's robots generally didn't distinguish between humans. -- Hkmaly (talk) 13:36, 7 December 2015 (UTC)
- Yeah, I was thinking the same thing. Closest analogy to our world might be scenario 3 or 4, depending on the programming and choices made by the people controlling/ordering the robots around. One could argue that this means this comic is meant to criticize our current state, but that doesn't seem likely given how robots are typically discussed by Randall. Djbrasier (talk) 17:04, 7 December 2015 (UTC)
I'm wondering about the title text: why would a driverless car kill its passenger before going into a dealership? 13:43, 7 December 2015 (UTC)
- A driverless car would feel threatened by a trip to a car dealership. The owner would presumably be contemplating a trade-in, which could lead to a visit to the junk yard. Erickhagstrom (talk) 14:28, 7 December 2015 (UTC)
Okay, thanks. 184.108.40.206 22:14, 7 December 2015 (UTC)
- This looks like a reference to "2001: A Space Odyssey", where HAL tries to kill Dave by locking the pod bay doors after finding out he will be shut down.
For my kitty cat, the world is taking a turn for the better as humans gradually transition from scenario 6 to scenario 5. 220.127.116.11 17:07, 7 December 2015 (UTC)
To additionally summarise: the permutations of the laws can be classified into two equally sized classes: a) harmless to humans and b) deadly to humans. In a), Harmlessness precedes Obedience; in b), Obedience precedes Harmlessness. Since robots are mainly tools that multiply human effort through automation, the disastrous consequences are simply a reflection of the human effort itself. Randall's pessimism is emphasized by the contrast between the apparent impossibility of implementing the harmlessness law and the natural presence of the "obedience law" in actual robotics. 18.104.22.168 17:45, 7 December 2015 (UTC)
- You got in there before I realised I hadn't actually clicked to post my side-addition to this Talk section, it seems. Just discovered it hanging, then edit-conflicted. So (as well as shifting your IP signature, hope you don't mind) here is what I was going to add:
- Added the analysis of 'law inversions'. Obedience before Harmlessness turns them into killer-robots (potentially - assuming they're ever asked to kill). Self-protection before Obedience removes the ability to fully control them (but, by itself, isn't harmful). Self-protection before Harmlessness just adds some logistical icing to the cake - and is already part of the mix when both of the first two inversions are made, in a scenario more Skynet-like than that of a 'mere' war-by-proxy.
- ...now I need to look to see if anybody's refined my original main-page contribution, so I can disagree with them. ;) 22.214.171.124 18:27, 7 December 2015 (UTC)
It's interesting to note that the 5th combination ("Terrifying Standoff") essentially describes robots whose priorities are ordered the same way as most humans'. Like humans, they will become dangerous if they feel endangered themselves. 126.96.36.199 20:10, 7 December 2015 (UTC)
I just wanted to mention that I thought the right-hand robot in the Hellscape images quite resembles Pintsize from the Questionable Content webcomic. His character would quite likely suit participation in a robot war, too. Teleksterling (talk) 22:46, 7 December 2015 (UTC)
- Technically his current chassis is a military version of a civilian model. That said the AI in Questionable Content aren't constrained by anything like the Three Laws. -Pennpenn 188.8.131.52 22:51, 8 December 2015 (UTC)
No mention of the zeroth law?
0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Tarlbot (talk) 00:28, 14 December 2015 (UTC)
- That would just be going into detail about what is meant by [see Asimov's stories], which doesn't seem more pertinent to the comic than any other plot details about the Robot Novels.Thomson's Gazelle (talk) 16:56, 8 March 2017 (UTC)
Should it be mentioned that the three laws wouldn't work in real life, as explained by Computerphile? sirKitKat (talk) 10:37, 8 December 2015 (UTC)
- That's a bit disingenuous. It's not so much that the laws don't work (aside from zeroth law peculiarities and such edge cases, which that video does touch upon in the end. This all falls under [see Asimov's stories]), rather, it's that the real problem is implementing the laws, not formulating them. Seeing as I'm responding to a very old remark, I'll probably go ahead and change the page to reflect this.Thomson's Gazelle (talk) 16:56, 8 March 2017 (UTC)
The webcomic Freefall at freefall.purrsia.com demonstrates this as well, since robots can find ways to get around these restrictions. It also points out that if a human ordered a robot to kill all members of a non-human species, it would have to do so, whether it wanted to or not, because doing so doesn't violate any of the three laws of robotics. 184.108.40.206 03:32, 18 November 2016 (UTC)