1958: Self-Driving Issues

Title text: If most people turn into muderers all of a sudden, we'll need to push out a firmware update or something.


Ambox notice.png This explanation may be incomplete or incorrect: Created by a BOT - Please change this comment when editing this page. Do NOT delete this tag too soon.
If you can address this issue, please edit the page! Thanks.
Cueball explains that he is worried about autonomous cars, noting that it may be possible to fool the vehicles' sensor systems. He then realizes that human drivers are equally vulnerable to being deceived.

White Hat and Megan point out that most road safety systems benefit from the fact that humans are not actively trying to sabotage them.

The title text notes that a radical change in human behavior would likely require a major update to the software that governs how autonomous vehicles behave, presumably because it would now have to account for humans routinely attempting to kill drivers.

Fun fact: the title text was published with a typo; "murderers" was misspelled as "muderers."


Ambox notice.png This transcript is incomplete. Please help by editing it! Thanks.

Cueball: I worry about self-driving car safety features.

Cueball: What's to stop someone from painting fake lines on the road, or dropping a cutout of a pedestrian onto a highway, to make cars swerve and crash?

Cueball: Except... those things would also work on human drivers. What's stopping people now?

Off-panel speaker (probably Megan): Yeah, causing car crashes isn't hard.

White Hat: I guess it's just that most people aren't murderers?

Cueball: Oh, right. I always forget.

Megan: An underappreciated component of our road safety system.



I assume the off-panel speaker is Megan, based on their positioning, but not sure what the ruling on the ambiguity is. PvOberstein (talk) 05:47, 21 February 2018 (UTC)

I made a note about the typo in the title text. Also, weird question, does the "created by a BOT" tag mean that the explanation was written by an AI? Or is it a joke I'm missing for some reason? Sorry, kind of a dumb question I guess. 09:04, 21 February 2018 (UTC)

Afaik the "created by a BOT" part is the default text when the bot which is crawling xkcd for a new comic inserts the comic here (and an empty explanation). In the past that part was often deleted when the first real edit was made. Some comics ago a habit evolved to actually change that line in relation to the comic at hand (e.g. "created by a SELF-DRIVING CAR" would be fitting here). Elektrizikekswerk (talk) 11:23, 21 February 2018 (UTC)
Oh, okay...19:06, 21 February 2018 (UTC)
It refers to the incomplete tag. The incomplete tag is created by a bot and just shows that there needs to be an explaination. Lupo (talk) 19:09, 25 February 2018 (UTC)

Could the firmware update be for the humans, because they are obviously malfunctioning in the scenario? Sebastian -- 09:46, 21 February 2018 (UTC)

Of course, that's the joke. It would be really impractical to install such a firmware update; there are about 7.5 billion people on Earth, many of whom we don't even have access to. I'd also suspect that most people would fight back if you tried shoving a USB flash drive into them. 11:24, 21 February 2018 (UTC)
Yeah, but how many of those 7.5 billion people are a safety risk for self-driving cars, though? LordHorst (talk) 12:59, 21 February 2018 (UTC)
My best guess is 12. 14:50, 21 February 2018 (UTC)
Huh, I never even realized this interpretation. :) Hawthorn (talk) 15:09, 21 February 2018 (UTC)
But you wouldn't use a flash drive - it would be an OTA update. 11:53, 22 February 2018 (UTC)
Funny, but as the explanation now states, it is the cars that need to be updated to take this new behavior into account. It is unclear if the cars should behave like humans, as Cueball mentions they already do, and if so try to use the knowledge of human behavior to save lives, or if they should behave like humans and try to take lives! :-D --Kynde (talk) 13:14, 21 February 2018 (UTC)
While this version occurred to me, I feel confident that the INTENDED meaning is to go within the context of this comic, that is, that the murderers are the non-drivers Cueball is afraid of, and both human and AI drivers alike still would prefer not to crash (especially seeing as the most available murder victims would be the people in the car, both passengers and possible driver). Still, the car-owners-want-to-kill version is amusing. LOL! NiceGuy1 (talk) 05:33, 23 February 2018 (UTC)
To me it's obvious, and the explanation overthinks it. The natural "voice" of the comic here is Cueball. He surmises that if people generally become murderers, they would need a firmware update "or something". Obviously he is taking "people" to be some sort of product, and the update to be the manufacturer's obligation. There is obvious religious precedent for this, whether the "firmware update" involves unleashing floods, burning cities with fire, sending messiahs or revealing truths to prophets... and that is (imho) the joke: Cueball thinks an update is needed, then he generalises that to "or something" as he starts to realise (or as the reader starts to realise, despite Cueball's obliviousness) the enormity of the consequences of such a possible update. 03:42, 3 March 2018 (UTC)
Agree with the interpretation, but I think floods, fire, etc. would be more 'retiring hardware' than a 'firmware update' 09:28, 5 March 2018 (UTC)

I would say that it's easier to fool current AI than a human ... except in "quick reaction needed" scenarios. If you throw a cardboard cutout of a pedestrian onto the road, the AI will be fooled because it's not able to recognize it's not a human, and will crash. A human will crash as well, because while they will eventually realize it's a cutout, it would be too late. -- Hkmaly (talk) 23:35, 21 February 2018 (UTC)

A creative attacker could put up a sign with CAPTCHA-like text that would be readable by humans but not by an AI.

Except that both humans and AIs would disregard it as not being a real sign. In fact, this would be more likely to be successful as an attack against humans, who might at least be distracted by going "what's that?", and end up crashing as a result. The AI would just completely ignore it. 11:56, 22 February 2018 (UTC)

One of the most ingrained features in humans is to always drive on the correct side of the road

Seems like rather a big claim there... 15:03, 22 February 2018 (UTC)

It still says "muderers"[sic] ... maybe it isn't just a misspelling? 22:59, 11 March 2018 (UTC)