Talk:1613: The Three Laws of Robotics


Relevant Computerphile 141.101.84.114 (talk) (please sign your comments with ~~~~)

I think the second one would also create the "best" robots, i.e. ones that have the same level of "free will" as humans do, but wouldn't end up with the robot uprising. X3International Space Station (talk) 09:37, 7 December 2015 (UTC)

Scientists are actually already working on such a robot! I've seen a video where they command a robot to do a number of things, such as sit down, stand up, and walk forward. It refuses to do the last because it is near the edge of a table, until it is assured by the person giving the commands that he will catch it. Here's a link. 108.162.220.17 18:21, 7 December 2015 (UTC)

The second ordering was actually covered in a story by Asimov, where a strengthened third law caused a robot to run around a hazard at a distance which maintained an equilibrium between not getting destroyed and obeying orders. More here: https://en.wikipedia.org/wiki/Runaround_(story) Gearóid (talk) 09:45, 7 December 2015 (UTC)

The explanation itself seems pretty close to complete. I'll leave others to judge if the tag is ready to be removed though. Halfhat (talk) 12:20, 7 December 2015 (UTC)

Technically, in the world we live in, robots barely follow even ONE law - obey orders. No one has ever tried to build a robot programmed never to harm a human, because such programming would be ridiculously complex. Sure, most robots are built with failsafes, but nothing nearly as effective as Asimov's law, which causes permanent damage to a robot's brain when it fails to protect humans. Meanwhile, a lot of effort is spent on making robots follow only the orders of authorized people, whereas Asimov's robots generally didn't distinguish between humans. -- Hkmaly (talk) 13:36, 7 December 2015 (UTC)

Yeah, I was thinking the same thing. The closest analogy to our world might be scenario 3 or 4, depending on the programming and the choices made by the people controlling/ordering the robots around. One could argue that the comic is meant to criticize our current state, but that doesn't seem likely given how Randall typically discusses robots. Djbrasier (talk) 17:04, 7 December 2015 (UTC)
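To make the "one law" point above concrete, here is a minimal, purely hypothetical Python sketch (all names are made up for illustration): a controller that implements nothing but "obey orders from authorized people", with no harmlessness or self-preservation checks at all:

 # Hypothetical controller: only "obey orders", and only from authorized operators.
 AUTHORIZED = {"alice", "bob"}

 def handle_command(operator: str, command: str) -> str:
     if operator not in AUTHORIZED:
         return "ignored: operator not authorized"
     # No check for whether the command might harm a human,
     # and no self-preservation logic either.
     return "executing: " + command

 print(handle_command("alice", "move pallet to bay 3"))   # executed
 print(handle_command("mallory", "open the doors"))        # ignored

This is of course nothing like Asimov's positronic failsafes; it just illustrates that the only "law" in today's robots is authorization-gated obedience.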

I'm wondering about the title text: why would a driverless car kill its passenger before going into a dealership? 13:43, 7 December 2015 (UTC)

A driverless car would feel threatened by a trip to a car dealership. The owner would presumably be contemplating a trade-in, which could lead to a visit to the junk yard. Erickhagstrom (talk) 14:28, 7 December 2015 (UTC)

Okay, thanks. 198.41.235.167 22:14, 7 December 2015 (UTC)

This looks like a reference to "2001: A Space Odyssey", where HAL tries to kill Dave by locking the pod bay doors after finding out that it will be shut down.

For my kitty cat, the world is taking a turn for the better, as humans are gradually transitioning from scenario 6 to scenario 5. 108.162.218.239 17:07, 7 December 2015 (UTC)

To additionally summarise: the permutations of the laws fall into two equally sized classes: a) harmless to humans and b) deadly to humans. In a), Harmlessness precedes Obedience; in b), Obedience precedes Harmlessness. Since robots are mainly tools that multiply human effort through automation, the disastrous consequences simply reflect the nature of the human effort itself. Randall's pessimism is emphasized by the contrast between the apparent impossibility of implementing the harmlessness law and the natural presence of the "obedience law" in actual robotics. 198.41.242.243 17:45, 7 December 2015 (UTC)

You got in there before I realised I hadn't actually clicked to post my side-addition to this Talk section, it seems. Just discovered it hanging, then edit-conflicted. So (as well as shifting your IP signature, hope you don't mind) here is what I was going to add:
Added the analysis of 'law inversions'. Obedience before Harmlessness turns them into killer-robots (potentially - assuming they're ever asked to kill). Self-protection before Obedience removes the ability to fully control them (but, by itself, isn't harmful). Self-protection before Harmlessness just adds some logistical icing to the cake - and is already part of the mix when both of the first two inversions are made, in the scenario more Skynet-like than that of a 'mere' war-by-proxy.
...now I need to look to see if anybody's refined my original main-page contribution, so I can disagree with them. ;) 162.158.152.227 18:27, 7 December 2015 (UTC)
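The two-class split described a couple of comments up can be checked mechanically. Here is a minimal Python sketch (the law labels are just illustrative strings, nothing from the comic itself) that enumerates all six orderings and flags the ones where Obedience outranks Harmlessness:

 # Enumerate the six orderings of the three laws and classify them by
 # whether "obey orders" is ranked above "don't harm humans".
 from itertools import permutations

 LAWS = ("don't harm humans", "obey orders", "protect yourself")

 for order in permutations(LAWS):
     deadly = order.index("obey orders") < order.index("don't harm humans")
     label = "deadly to humans" if deadly else "harmless to humans"
     print(" > ".join(order), "->", label)

Running it prints three "deadly" orderings and three "harmless" ones, matching the split into two equally sized classes.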

It's interesting to note that the 5th combination ("Terrifying Standoff") essentially describes robots whose priorities are ordered the same way as most humans'. Like humans, they will become dangerous if they feel endangered themselves. 173.245.54.66 20:10, 7 December 2015 (UTC)

I just wanted to mention that I thought the right-hand robot in the Hellscape images quite resembles Pintsize from the Questionable Content webcomic. His character would quite likely suit participation in a robot war, too. Teleksterling (talk) 22:46, 7 December 2015 (UTC)

Technically his current chassis is a military version of a civilian model. That said, the AIs in Questionable Content aren't constrained by anything like the Three Laws. -Pennpenn 108.162.250.162 22:51, 8 December 2015 (UTC)

No mention of the zeroth law? 0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm. Tarlbot (talk) 00:28, 14 December 2015 (UTC)

That would just be going into detail about what is meant by [see Asimov's stories], which doesn't seem more pertinent to the comic than any other plot details about the Robot Novels. Thomson's Gazelle (talk) 16:56, 8 March 2017 (UTC)

Should it be mentioned that the 3 laws wouldn't work in real life, as explained by Computerphile? sirKitKat (talk) 10:37, 8 December 2015 (UTC)

That's a bit disingenuous. It's not so much that the laws don't work (aside from zeroth-law peculiarities and similar edge cases, which that video does touch on at the end; this all falls under [see Asimov's stories]); rather, the real problem is implementing the laws, not formulating them. Seeing as I'm responding to a very old remark, I'll probably go ahead and change the page to reflect this. Thomson's Gazelle (talk) 16:56, 8 March 2017 (UTC)

The webcomic Freefall at freefall.purrsia.com demonstrates this as well, since its robots can find ways to get around these restrictions. It also points out that if a human ordered a robot to kill all members of a (non-human) species, it would have to do it, whether it wanted to or not, because that doesn't violate any of the three laws of robotics. 108.162.238.48 03:32, 18 November 2016 (UTC)