Revision as of 09:46, 7 December 2015

1613: The Three Laws of Robotics
Title text: In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.

Explanation

This explanation may be incomplete or incorrect: Very basic first draft, and I'm pretty inexperienced. Halfhat (talk) 09:38, 7 December 2015 (UTC) You should also check my awful spelling. Halfhat (talk) 09:46, 7 December 2015 (UTC)
If you can address this issue, please edit the page! Thanks.

The joke here is that any ordering of Asimov's three laws of robotics other than the original one (1. don't harm humans, 2. obey orders, 3. protect your own existence) results in a ridiculous world: two of the alternative orderings are marked orange (pretty bad) and three are marked red ("hellscape").

The first alternative (ordering #2) makes robots value their own existence over their job, which would make many of them far less functional. The silliness of this is portrayed in the accompanying image, where a Mars rover, switched on on Earth and given its orders by the fleshy human known as Megan, laughs at the idea of doing what it was clearly built to do (explore Mars) because of the risk involved. The humour is heightened by the personification of such a very nonhuman robot.

The next possible ordering (#3) puts obeying orders above not harming humans, which means anyone could send the robots on a killing spree, resulting in a "killbot hellscape". Humour is also derived from the superlative nature of "killbot hellscape" and from its over-the-top accompanying image, which shows multiple mushroom clouds (not necessarily nuclear) and apparently no humans, only robots. The next ordering (#4) would have much the same result; the only difference is that these robots would also be willing to kill humans to protect themselves.

The penultimate ordering (#5) would result in an unpleasant world, though not a full hellscape: the robots would not only disobey orders to protect themselves, but would also kill if necessary. The absurdity of this one is further demonstrated by the very unhuman robot happily doing repetitive, mundane tasks and then threatening its terrified user, Cueball.

The last ordering (#6) also results in a hellscape, wherein robots not only kill in self-defence but will also go on killing sprees if ordered to, as long as doing so does not put them at risk.
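Since each of the three laws can be placed first, second or third, there are exactly 3! = 6 possible orderings, which is why the comic has six rows. The following short Python sketch (purely illustrative, not part of the comic; the law texts and outcome labels are paraphrased from the strip's colour-coded captions) enumerates them in the same order the comic uses:

 from itertools import permutations
 
 # Asimov's original priority order of the three laws (paraphrased).
 LAWS = ("don't harm humans", "obey orders", "protect yourself")
 
 # Outcome captions, paraphrased from the comic, one per ordering #1-#6.
 OUTCOMES = [
     "balanced world (Asimov's original ordering)",
     "frustrating world",
     "killbot hellscape",
     "killbot hellscape",
     "terrifying standoff",
     "killbot hellscape",
 ]
 
 # permutations() yields the six orderings in the same sequence as the comic's rows.
 for number, (ordering, outcome) in enumerate(zip(permutations(LAWS), OUTCOMES), start=1):
     print(f"#{number}: {' > '.join(ordering)}  ->  {outcome}")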

The title text adds to ordering #5 by noting that anyone who asks their self-driving car to drive to a dealership, presumably to trade it in, could be killed, even though trading in a car is (currently) a standard, mundane and (mostly) risk-free activity.

Transcript

This transcript is incomplete. Please help by editing it! Thanks.



Discussion

Relevant Computerphile 141.101.84.114 (talk) (please sign your comments with ~~~~)

I think the second one would also create the "best" robots i.e. ones that have the same level of "free will" as humans do, but won't end up with the robot uprising. X3International Space Station (talk) 09:37, 7 December 2015 (UTC)

Scientists are actually already working on such a robot! I've seen a video where they command a robot to do a number of things, such as sit down, stand up, and walk forward. It refuses to do the last because it is near the edge of a table, until it is assured by the person giving the commands that he will catch it. Here's a link. 108.162.220.17 18:21, 7 December 2015 (UTC)

The second ordering was actually covered in a story by Asimov, where a strengthened third law caused a robot to run around a hazard at a distance which maintained an equilibrium between not getting destroyed and obeying orders. More here: https://en.wikipedia.org/wiki/Runaround_(story) Gearóid (talk) 09:45, 7 December 2015 (UTC)

The explanation itself seems pretty close to complete. I'll leave others to judge if the tag is ready to be removed though. Halfhat (talk) 12:20, 7 December 2015 (UTC)

Technically, in the world we live in, robots barely follow even ONE law - obey orders. No one has ever tried to build a robot programmed to never harm a human, because such programming would be ridiculously complex. Sure, most robots are built with failsafes, but nothing nearly as effective as Asimov's law, which causes permanent damage to a robot's brain when it fails to protect humans. Meanwhile, a lot of effort is spent on making robots only follow orders of authorized people, while Asimov's robots generally didn't distinguish between humans. -- Hkmaly (talk) 13:36, 7 December 2015 (UTC)

Yeah, I was thinking the same thing. Closest analogy to our world might be scenario 3 or 4, depending on the programming and choices made by the people controlling/ordering the robots around. One could argue that this means this comic is meant to criticize our current state, but that doesn't seem likely given how robots are typically discussed by Randall. Djbrasier (talk) 17:04, 7 December 2015 (UTC)

I'm wondering about the title text: why would a driverless car kill its passenger before going into a dealership? 13:43, 7 December 2015 (UTC)

A driverless car would feel threatened by a trip to a car dealership. The owner would presumably be contemplating a trade-in, which could lead to a visit to the junk yard. Erickhagstrom (talk) 14:28, 7 December 2015 (UTC)

Okay, thanks. 198.41.235.167 22:14, 7 December 2015 (UTC)

This looks like a reference to "2001: A Space Odyssey", where HAL tries to kill Dave by locking the pod bay doors after finding out he will be shut down.

For my kitty cat, the world is taking a turn for the better as humans are gradually transitioning from scenario 6 to scenario 5. 108.162.218.239 17:07, 7 December 2015 (UTC)

To additionally summarise: The permutations of laws can be classified into two equally numbered classes. a) harmless to humans and b) deadly to humans. In a) Harmlessness precedes Obedience, in b) Obedience precedes Harmlessness. Since robots are mainly tools that multiply human effort by automation, the disastrous consequences are only a nature of the human effort itself. Randall's pessimism is emphasized by the contrast between the apparent impossibility of the implementation of the harmlessness law and the natural presence of the "obedience law" in actual robotics. 198.41.242.243 17:45, 7 December 2015 (UTC)

You got in there before I realised I hadn't actually clicked to post my side-addition to this Talk section, it seems. Just discovered it hanging, then edit-conflicted. So (as well as shifting your IP signature, hope you don't mind) here is what I was going to add:
Added the analysis of 'law inversions'. Obedience before Harmlessness turns them into killer-robots (potentially - assuming they're ever asked to kill). Self-protection before Obedience removes the ability to fully control them (but, by itself, isn't harmful). Self-protection before Harmlessness just adds some logistical icing to the cake - and is already part of the mix, when both of the first two inversions are made, in the scenario more Skynet-like than that of a 'mere' war-by-proxy.
...now I need to look to see if anybody's refined my original main-page contribution, so I can disagree with them. ;) 162.158.152.227 18:27, 7 December 2015 (UTC)

It's interesting to note that the 5th combination ("Terrifying Standoff") essentially describes robots whose priorities are ordered the same way as most humans'. Like humans, they will become dangerous if they feel endangered themselves. 173.245.54.66 20:10, 7 December 2015 (UTC)

I just wanted to mention that I thought the right-hand robot in the Hellscape images quite resembles Pintsize from the Questionable Content webcomic. His character would quite likely suit participation in a robot war, too. Teleksterling (talk) 22:46, 7 December 2015 (UTC)

Technically his current chassis is a military version of a civilian model. That said the AI in Questionable Content aren't constrained by anything like the Three Laws. -Pennpenn 108.162.250.162 22:51, 8 December 2015 (UTC)

No mention of the zeroth law? 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm. Tarlbot (talk) 00:28, 14 December 2015 (UTC)

That would just be going into detail about what is meant by [see Asimov's stories], which doesn't seem more pertinent to the comic than any other plot details about the Robot Novels.Thomson's Gazelle (talk) 16:56, 8 March 2017 (UTC)

Should it be mentioned that the 3 laws wouldn't work in real life, as explained by Computerphile? sirKitKat (talk) 10:37, 8 December 2015 (UTC)

That's a bit disingenuous. It's not so much that the laws don't work (aside from zeroth law peculiarities and such edge cases, which that video does touch upon in the end. This all falls under [see Asimov's stories]), rather, it's that the real problem is implementing the laws, not formulating them. Seeing as I'm responding to a very old remark, I'll probably go ahead and change the page to reflect this.Thomson's Gazelle (talk) 16:56, 8 March 2017 (UTC)

The webcomic Freefall at freefall.purrsia.com demonstrates this as well, since robots can find ways to get around these restrictions. It also points out that if a human ordered a robot to kill all members of a species, it would have to do it, whether it wanted to or not, because doing so doesn't violate any of the three laws of robotics. 108.162.238.48 03:32, 18 November 2016 (UTC)