1958: Self-Driving Issues

==Explanation==
 
[[Cueball]] explains that he is worried about {{w|autonomous car|self-driving cars}}, noting that it may be possible to fool the vehicles' sensory systems. This is a common concern with {{w|AI|AIs}}: because they process input analytically and have little to no capacity for abstract thought, they can be fooled by things a human would immediately recognize as deceptive.
 
However, Cueball quickly realizes that his argument doesn't hold up when comparing AI drivers to human drivers, as both rely on the same guidance framework: human drivers follow signs and road markings, and must obey the laws of the road just as an AI must. An attack on the road infrastructure could therefore affect both AIs and humans. Still, the two are not equally vulnerable. For example, a fake sign or a fake child might be an obvious fake to a human yet still fool an AI, and a [[Black Hat|creative attacker]] could put up a sign with CAPTCHA-like text that would be readable by humans but not by an AI.
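
The gap between human and machine perception that the comic plays on is studied in machine learning under the name "adversarial examples". As a rough illustration only (this is not from the comic, and it assumes a hypothetical pretrained image classifier <code>model</code>, an input tensor <code>image</code> scaled to [0, 1], and its true <code>label</code>), the following PyTorch sketch of the classic Fast Gradient Sign Method shows the mirror image of the CAPTCHA sign: a perturbation far too small for a human to notice can nonetheless flip a classifier's answer.

<pre>
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.03):
    # Fast Gradient Sign Method: nudge every pixel a tiny step in the
    # direction that most increases the classifier's loss on the true label.
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # epsilon = 0.03 (illustrative value) is well below what a human eye
    # notices on a [0, 1]-scaled image, yet often changes the prediction.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
</pre>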
 