1613: The Three Laws of Robotics

==Explanation==
 
This comic explores alternative orderings of sci-fi author {{w|Isaac Asimov|Isaac Asimov's}} famous {{w|Three Laws of Robotics}}, which are designed to prevent robots from harming humans or taking over the world. These laws form the basis of a number of Asimov's works of fiction, including, most famously, the short story collection ''{{w|I, Robot}}'', which, amongst others, includes the very first of Asimov's stories to introduce the three laws: {{w|Runaround (story)|Runaround}}.
 
The three rules are:
#A robot may not injure a human being or, through inaction, allow a human being to come to harm.
#A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
#A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
  
In order to make his joke, [[Randall]] shortens the laws into three imperatives:
#Don't harm humans
#Obey Orders
#Protect yourself
  
And then implicitly adds the following to the end of each law, regardless of the order of the imperatives (a rough sketch of this priority logic follows the list):
#''[end of statement]''
#_____, except where such orders/protection would conflict with the First Law.
#_____, as long as such orders/protection does not conflict with the First or Second Laws.
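One way to read this numbering is as a strict priority check: a proposed action is tested against the laws from highest to lowest priority, and the first law that has anything to say about it decides. Below is a minimal, purely illustrative Python sketch of that idea; the <code>Action</code> flags, the <code>permitted()</code> helper and the example actions are invented for this explanation and are not taken from the comic or from Asimov.

<pre>
# Illustrative sketch only: treat the three imperatives as a priority list and
# let the highest-ranked law that applies to a proposed action decide.
from dataclasses import dataclass

@dataclass
class Action:
    """A proposed action, described by its predicted consequences."""
    harms_human: bool = False   # would carrying it out injure a human?
    was_ordered: bool = False   # did a human order it?
    harms_robot: bool = False   # would carrying it out damage the robot?

def permitted(action, ordering):
    """Check the laws in priority order; the first law that applies decides."""
    for law in ordering:
        if law == "Don't harm humans" and action.harms_human:
            return False        # vetoed by the harm law
        if law == "Obey Orders" and action.was_ordered:
            return True         # an order outranks everything ranked below it
        if law == "Protect yourself" and action.harms_robot:
            return False        # vetoed by self-preservation
    return True                 # no law objected

asimov = ["Don't harm humans", "Obey Orders", "Protect yourself"]       # ordering #1
frustrating = ["Don't harm humans", "Protect yourself", "Obey Orders"]  # ordering #2
hellscape = ["Obey Orders", "Don't harm humans", "Protect yourself"]    # ordering #3

spree = Action(harms_human=True, was_ordered=True)    # an ordered, harmful action
explore = Action(was_ordered=True, harms_robot=True)  # an ordered but risky action

print(permitted(spree, asimov), permitted(spree, hellscape))        # False True
print(permitted(explore, asimov), permitted(explore, frustrating))  # True False
</pre>

Under this reading, swapping just two priorities flips whether an ordered but risky task must be carried out or may be refused, which is exactly the contrast between Ordering #1 and Ordering #2 discussed below.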
 
  
This comic answers the generally unasked{{citation needed}} question: "Why are they in that order?" With three rules you could rank them in 6 different {{w|permutation|permutations}}, only one of which has been explored in depth. The original ranking of the three laws is listed in the brackets after the first number. So in the first example, which is the original, these three numbers are in the same order. For the next five, the numbers in brackets indicate how the laws have been re-ranked compared to the original.
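The count of six comes from the fact that three rules can be arranged in 3! = 3 × 2 × 1 = 6 ways. A short, purely illustrative Python snippet that enumerates them (the wording of the laws is taken from Randall's shortened imperatives above):

<pre>
# Enumerate every possible priority ordering of the three shortened laws.
from itertools import permutations

laws = ["Don't harm humans", "Obey Orders", "Protect yourself"]

for i, ordering in enumerate(permutations(laws), start=1):
    print(f"Ordering #{i}:", " > ".join(ordering))

# Three laws give 3! = 6 orderings; listed this way they happen to come out
# in the same sequence as the comic's orderings #1 through #6.
</pre>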
The comic begins by introducing the original set, which we already know will give rise to a balanced world, so this is designated as green:
;Ordering #1 - <font color="green">Balanced World</font>: The safety of humans is placed as the top priority, superseding even a robot's preprogrammed obedience; a robot may disregard any orders it is given if following them would result in harm to humans, but otherwise must obey all instructions. The "inaction" clause ensures that a robot will actively save humans in danger, and also not {{w|Little Lost Robot|place humans in hypothetical danger}} and then leave them to that fate. Self-preservation is placed at the lowest priority, which means a robot will sacrifice itself if necessary to save a human life, and must obey orders even if it knows those orders will result in its own destruction. This results in a balanced, if not perfect, world. Asimov's robot stories explore in detail the ramifications of this scenario.
 
  
Below this first known option, the five alternative orderings of the three rules are illustrated. Two of the possibilities are designated yellow (pretty bad or just annoying) and three of them are designated red ("Hellscape").
  
;Ordering #2 - <font color="orange">Frustrating World</font>: Human safety is still top priority, so there is no danger to humans; however, the priority of self-preservation is now placed above obedience, which means that the robots value their existence over their job and so many would refuse to do their tasks. The silliness of this is portrayed in the accompanying image, where the robot (a {{w|Mars rover}} looking very similar to {{w|Curiosity (rover)|Curiosity}} both in shape and size - see [[1091: Curiosity]]) laughs at the idea of doing what it was clearly built to do (explore {{w|Mars}}) because of the risk. In addition to the general risk (e.g. of unexpected damage), it is actually normal for rovers to cease operating ("die") at the end of their mission, though they may survive longer than expected (see [[1504: Opportunity]] and [[695: Spirit]]).
;Ordering #3 - <font color="red">Killbot Hellscape</font>: This puts obeying orders above not harming humans, which means anyone could send a robot on a killing spree. Given human nature, it will probably only be a matter of time before this happens. Even worse, if the robot prioritizes obeying orders above human safety, it may try to kill any human who would prevent it from fulfilling those orders, even the person who originally gave them. Given the superior abilities of robots, the most effective way to stop them would be to counter them with other robots, which would quickly escalate to a "Killbot Hellscape" scenario where robots kill indiscriminately without any thought for human life or self-preservation.
 
;Ordering #4 - <font color="red">Killbot Hellscape</font>: This is much the same as #3, except even worse as robots would also be able to kill humans in order to protect themselves. This means that even robots not engaged in combat might still murder humans if their existence is threatened. It would be a very dangerous world for humans to live in.
 
;Ordering #5 - <font color="orange">Terrifying Standoff</font>: This ordering would result in an unpleasant world, though not necessarily a full Hellscape. Here the robots would not only disobey orders in order to protect themselves, but also kill if necessary. The absurdity of this one is further demonstrated by the very un-human robot happily doing repetitive mundane tasks but then threatening the life of its user, [[Cueball]], if he so much as considers unplugging it.
 
;Ordering #6 - <font color="red">Killbot Hellscape</font>: The last ordering puts self-protection first, which allows robots to go on killing sprees as long as doing so wouldn't cause them to come to harm. While not as bad as the Hellscapes in #3 and #4, this is still not good news for humans, as a robot can easily kill a human without risk to itself. A human also cannot use a robot to defend them from another robot, as robots can refuse combat that involves risk to themselves - this means a robot would happily stand by and allow its human master to be killed. According to Randall, this still eventually results in the Killbot Hellscape scenario.
 
  
The title text shows a further horrifying consequence of ordering #5 ("Terrifying Standoff"), by noting that a self-driving car could elect to kill anyone wishing to trade it in. Since cars aren't designed to kill humans, one way it could achieve this without any risk to itself is by locking the doors (which it would likely have control over, as part of its job) and then simply doing nothing at all. Humans require food and water to live, so denying the passenger access to these will eventually kill them, removing the threat to the car's existence. This would result in a horrible, drawn-out death for the passenger, if they cannot escape the car. It should be noted that although the car asked how long humans take to starve, the human would die of dehydration first. In his original formulation of the First Law, Asimov created the "inaction" clause specifically to avoid scenarios in which a robot puts a human in harm's way and refuses to save them; this was explored in the short story {{w|Little Lost Robot}}.
  
Another course of action by an AI, completely different from any of the ones presented here, is depicted in [[1626: Judgment Day]].
  
 
==Transcript==
 
:[Below are six rows, each with two frames and then a label in color to the right. Above the two columns of frames there are labels as well. In the first column, six different ways of ordering the three laws are listed. The second column shows an image of the consequences of that ordering, except in the first row, where there is a reference instead. The label to the right rates the kind of world that ordering of the laws would result in.]

:[Labels above the columns.]
:Possible ordering
:Consequences
  
:[First row:]
:1. (1) Don't harm humans
:2. (2) Obey Orders
:3. (3) Protect yourself
:[Only text in square brackets:]
::[See Asimov’s stories]
:<font color="green">'''Balanced world'''</font>
  
:[Second row:]
:1. (1) Don't harm humans
:2. (3) Protect yourself
:3. (2) Obey Orders
:[Megan points at a Mars rover with six wheels, a satellite dish, an arm, and a camera head turned towards her, as she tells it what to do.]
:Megan: Explore Mars!
:Mars rover: Haha, no. It’s cold and I’d die.
:<font color="orange">'''Frustrating world'''</font>
  
 
:[Third row:]
:1. (2) Obey Orders
:2. (1) Don't harm humans
:3. (3) Protect yourself
 
:[Fourth row:]
:1. (2) Obey Orders
:2. (3) Protect yourself
:3. (1) Don't harm humans
 
:[Fifth row:]
:1. (3) Protect yourself
:2. (1) Don't harm humans
:3. (2) Obey Orders
:[Cueball is standing in front of a car factory robot that is larger than him. It has a base, and two parts for the main body, and then a big “head” with a small section on top. To the right something is jutting out, and to the left, in the direction of Cueball, there is an arm in three sections (going down, up and down again) ending in some kind of tool close to Cueball.]
:Car factory robot: I'll make cars for you, but try to unplug me and I’ll vaporize you.
:<font color="orange">'''Terrifying standoff'''</font>
 
:[Sixth row:]
:1. (3) Protect yourself
:2. (2) Obey Orders
:3. (1) Don't harm humans
:[Exactly the same picture as in rows 3 and 4.]
 
[[Category:Artificial Intelligence]]
[[Category:Robots]]
[[Category:Mars rovers]]