1958: Self-Driving Issues

{{comic
| number   = 1958
| title    = Self-Driving Issues
| image    = self_driving_issues.png
| titletext = If most people turn into muderers all of a sudden, we'll need to push out a firmware update or something.
}}

==Explanation==
 
[[Cueball]] explains that he is worried about {{w|autonomous car|self-driving cars}}, noting that it may be possible to fool the sensory systems of the vehicles. This is a common concern with {{w|AI}}s; since they think analytically and have little to no capability for abstract thought, they can be fooled by things a human would immediately recognize as deceptive.
  
However, Cueball quickly realizes that his argument doesn't hold up when comparing AI drivers to human drivers, as both rely on the same guidance framework: human drivers follow signs and road markings, and must obey the laws of the road just as an AI must. An attack on the road infrastructure could therefore impact both AIs and humans, though not equally. For example, a fake sign or a fake child might strike a human as an obvious fake but fool an AI, and a [[Black Hat|creative attacker]] could put up a sign with CAPTCHA-like text that would be readable by humans but not by an AI.
  
Cueball further wonders why, in that case, nobody tries to fool human drivers the way they might try to fool an AI, but [[White Hat]] and [[Megan]] point out the sociological answer: most {{w|Road traffic safety|road safety systems}} benefit from humans not actively trying to maliciously sabotage them simply to cause accidents.{{Citation needed}}
  
The [[title text]] continues the line of reasoning, noting that if most people did suddenly become murderers, the AI might need to be upgraded to deal with the presumable increase in people trying to cause car crashes by fooling it - a somewhat narrowly-focused solution, given that a world full of murderers would probably have many more problems than that. Since Megan sees humans as a 'component' of the road safety system, the title text might also be suggesting a firmware update for the buggy people who have all become murderers, one that would fix their murderous ways. We are not currently at a point where we can create and apply instantaneous firmware updates for large populations; even combining all the behavioral modification tools at our disposal -- {{w|psychiatry}}, {{w|cognitive behavioral therapy}}, {{w|hypnosis}}, {{w|mind-altering drugs}}, {{w|prison}}, {{w|CRISPR}}, etc. -- would not be enough for such a massive undertaking, as far as we know. Alternatively, the update might be to the cars' own firmware, since that can be used to disable the brakes and thus cause or prevent many deaths.
  
 
==Transcript==
 
 
:Cueball:  Oh, right. I always forget.
:Megan: An underappreciated component of our road safety system.
 
==Trivia==

The title text was published with a typo: "murderers" was misspelled as "muderers."

The theme of human fear and overreaction to the advent of more or less autonomous robots also features in [[1955: Robots]].

Self-driving cars are a [[:Category:Self-driving cars|recurring subject]] on xkcd.

A variation on the idea that humans are mentally "buggy" is suggested in [[258: Conspiracy Theories]], though in that case divine intervention is requested to implement the "firmware upgrade".

This comic appeared one day after the Electronic Frontier Foundation co-released a report titled [https://www.eff.org/deeplinks/2018/02/malicious-use-artificial-intelligence-forecasting-prevention-and-mitigation The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation]. The report cites subversions and mitigations of AI such as those used in self-driving cars. However, the report tends toward overly technical means of subversion. Randall spoofs the tenor of the report through his mundane subversions and over-the-top mitigations.

{{comic discussion}}
 
  
 
[[Category:Comics featuring Cueball]]
[[Category:Comics featuring Megan]]
[[Category:Self-driving cars]]
[[Category:Sabotage]]
