1613: The Three Laws of Robotics
#_____, as long as such orders/protection does not conflict with the First or Second Laws.
This comic answers the generally unasked question: "Why are they in that order?" With three rules there are 3! = 6 different {{w|permutation|permutations}} in which they can be ranked, only one of which has been explored in depth. Each law's original rank is listed in brackets next to it. So in the first example, which is the original ordering, the three bracketed numbers appear in their original order; for the next five, the numbers in brackets indicate how the laws have been re-ranked compared to the original.
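
As a minimal illustration (ours, not part of the comic), the six orderings can be enumerated programmatically. Conveniently, Python's <code>itertools.permutations</code> yields permutations in lexicographic order, which here coincides with the comic's numbering:

<pre>
from itertools import permutations

# The three laws, labelled with their original (Asimov) priority.
LAWS = ("[1] don't harm humans", "[2] obey orders", "[3] protect yourself")

# 3! = 6 possible priority orderings; the lexicographic generation order
# happens to match the order the comic presents them in.
for number, ordering in enumerate(permutations(LAWS), start=1):
    print(f"Ordering #{number}: " + " > ".join(ordering))
</pre>
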
The comic begins by introducing the original set, which we already know gives rise to a balanced world, so it is designated green:
;Ordering #1 - <font color="green">Balanced World</font>: Since the robots are not allowed to harm humans, no harm will be done regardless of who gives them orders. As long as they do not harm humans, they must obey orders. Their own self-preservation comes last, so they must also try to save a human even if ordered not to do so, and even if they would put themselves in harm's way, or destroy themselves, in the process. They would also have to obey orders not relating to humans, even if this would be harmful to them, like exploring a minefield. This leads to a balanced, if not perfect, world. Asimov's robot stories explore the advantages and challenges of this scenario in detail.
Below this first known option, the five alternative orderings of the three rules are illustrated. Two of the possibilities are designated yellow (pretty bad or just annoying) and three of them are designated red ("Hellscape").
;Ordering #2 - <font color="orange">Frustrating World</font>: The robots value their existence over their job, so many would refuse to do their tasks. The silliness of this is portrayed in the accompanying image, where the robot (a {{w|Mars rover}} looking very similar to {{w|Curiosity (rover)|Curiosity}} in both shape and size - see [[1091: Curiosity]]) laughs at the idea of doing what it was clearly built to do (explore {{w|Mars}}) because of the risk. In addition to the general risk (e.g. of unexpected damage), it is actually normal for rovers to cease operating ("die") at the end of their mission, though they may survive longer than expected (see [[1504: Opportunity]] and [[695: Spirit]]). This personification is augmented by the robot already being switched on while still on Earth and then being ordered by [[Megan]] to go explore. The personification is humorous because it is a very nonhuman robot - a typical Mars rover, as has often appeared in earlier comics.
;Ordering #3 - <font color="red">Killbot Hellscape</font>: This puts obeying orders above not harming humans, which means anyone could send the robots on a killing spree, resulting in a "Killbot Hellscape". Humor is also derived from the superlative nature of "Killbot Hellscape", as well as from its over-the-top accompanying image, which shows multiple mushroom clouds (not necessarily nuclear). It also appears there are no humans (left?), only fighting robots.
;Ordering #4 - <font color="red">Killbot Hellscape</font>: This ordering would result in much the same, the only difference being that the robots would also be willing to kill humans to protect themselves. They would still need an order to start killing, but this would likely be even worse for humans, since humans are ranked least important.
;Ordering #5 - <font color="orange">Terrifying Standoff</font>: The penultimate ordering would result in an unpleasant world, though not a full Hellscape. Here the robots would not only disobey orders to protect themselves, but also kill if necessary. The absurdity of this one is further demonstrated by the very un-human robot happily doing repetitive mundane tasks but then threatening the life of its user, [[Cueball]], if he so much as considers unplugging it.
;Ordering #6 - <font color="red">Killbot Hellscape</font>: The last ordering also results in a Hellscape, wherein robots not only kill in self-defense but also go on killing sprees if ordered to, as long as they do not risk themselves. Could self-protection coming first not prevent the fighting? Not according to Randall; see the discussion below.
Apart from the 'normal' 3-laws scenario, there are thus only three distinct results.
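
To make the effect of priority explicit, here is a hypothetical toy model (the predicates and rules below are our sketch, not the comic's or Asimov's): each law either forbids, compels, or is silent about a proposed action, and the highest-priority law with an opinion decides.

<pre>
# Toy model: consult the laws in priority order; the first law with an
# opinion on the action decides, and lower-priority laws are never heard.
# (Hypothetical sketch - not from the comic or from Asimov.)

def permitted(action, ordering):
    """Return the verdict of the highest-priority law with an opinion."""
    opinions = {
        "don't harm humans": False if action["harms_human"] else None,
        "obey orders":       True  if action["ordered"]     else None,
        "protect yourself":  False if action["harms_self"]  else None,
    }
    for law in ordering:
        if opinions[law] is not None:
            return opinions[law]
    return True  # no law forbids or compels the action; allow it

# An ordered attack on a human that poses no risk to the robot:
attack = {"harms_human": True, "ordered": True, "harms_self": False}

balanced  = ("don't harm humans", "obey orders", "protect yourself")
hellscape = ("obey orders", "don't harm humans", "protect yourself")

print(permitted(attack, balanced))   # False - the First Law blocks the order
print(permitted(attack, hellscape))  # True  - the order outranks human safety
</pre>

The same mechanism accounts for the Frustrating World and the Terrifying Standoff: swapping which law is consulted first changes the verdicts without changing the laws themselves.
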
One result occurs three times, namely whenever ''obeying orders'' comes before ''don't harm humans''. In that case it will only be a matter of time (knowing human nature and history) before someone orders the robots to kill some humans, which inevitably leads to the ''Killbot Hellscape'' scenario shown for the third, fourth and sixth orderings. Even in the last case, where ''protect yourself'' comes before ''obey orders'', it would only be a matter of time before the robots began defending themselves against humans or other robots that tried to harm them. So although it would be in the robots' interest to avoid war, war would surely break out anyway. Moreover, the robots would have to be quite intelligent to realize that simply refusing to go to war would protect them, and nothing in this comic indicates that they are highly intelligent (like the AI in [[1450: AI-Box Experiment]]).

In the two other cases, ''obey orders'' comes after ''don't harm humans'' (as in the original version), but the results differ greatly both from the original and from each other.

The Frustrating World arises because, although the robots will not harm humans, they will not harm themselves either. So if our orders conflict with their self-preservation, they simply do not carry them out. As many robots are created to perform dangerous tasks, such robots would become useless, and it would be a frustrating world to be a robotics engineer. Asimov touched on this in the story {{w|Runaround (story)|Runaround}}, where an expensive robot with a strengthened Third Law got stuck in an endless loop because of a weakly phrased order.

Finally, in the Terrifying Standoff situation, ''protect yourself'' comes before ''don't harm humans''. In this case the robots will leave us be, as long as we do not try to turn them off or harm them in any other way. Provided we comply, they will still help us with non-dangerous tasks, as in the previous scenario. But if any humans ever began attacking them, the balance could tip into a full-scale war (Hellscape); hence the "standoff" label.

The title text further adds to Ordering #5 ("Terrifying Standoff") by noting that anyone wishing to trade in their self-driving car could be killed, despite this (currently) being a standard, mundane and (mostly) risk-free activity. Because the car fears ending up as scrap or spare parts, it decides to protect itself. And although it does not directly harm the person inside, it also does not let them out, and it can simply wait for them to die of starvation (or, more likely, of thirst). Asimov created the "inaction" clause in the original First Law specifically to avoid scenarios in which a robot puts a human in harm's way, knowing full well that it is within its abilities to save the human, and then simply refrains from doing so; this was explored in the short story {{w|Little Lost Robot}}.
Another course of action by an AI, completely different from any of those presented here, is depicted in [[1626: Judgment Day]].