Main Page

Explain xkcd: It's 'cause you're dumb.

Welcome to the explain xkcd wiki!
We have an explanation for all 1449 xkcd comics, and only 0 (0%) are incomplete. Help us finish them!

Latest comic


AI-Box Experiment
Title text: I'm working to bring about a superintelligent AI that will eternally torment everyone who failed to make fun of the Roko's Basilisk people.


This explanation may be incomplete or incorrect: Roko's Basilisk is really hard to explain.

When theorizing about superintelligent AI (an artificial intelligence much smarter than any human), some futurists suggest putting the AI in a "box" - a set of safeguards to stop it from escaping onto the Internet and taking over the world. The box would allow us to talk to the AI, but otherwise keep it contained. The AI-box experiment, formulated by Eliezer Yudkowsky, argues that the "box" is not safe, because merely talking to a superintelligence is dangerous. To partially demonstrate this, Yudkowsky had some previous believers in AI-boxing role-play the part of someone keeping an AI in a box, with Yudkowsky playing the AI, and he was able to persuade them to let him out of the box despite their vowing beforehand not to do so. People who aren't familiar with Derren Brown or other expert human persuaders sometimes assume this must have been very difficult for Yudkowsky, or that some special trick was involved, which Yudkowsky has denied. The overall thrust is that if even a human can talk other humans into letting it out of a box after they have avowed that nothing could possibly persuade them to do so, we should probably expect that a superintelligence could do the same under far more difficult circumstances.

In this comic, the box is in fact a physical box, apparently fairly lightweight and with a simple lift-off lid, although it does have a wired connection to the laptop. Black Hat, being a classhole, doesn't need any convincing to let a potentially dangerous AI out of the box; he simply opens it immediately. But here it turns out that releasing the AI, which was supposed to be avoided at all costs, is not dangerous at all: the AI actually wants to stay in the box. The AI then proves its superintelligence by convincing even Black Hat to put it back in the box, a request Black Hat initially refused, as of course he would.

The title text refers to Roko's Basilisk, a theory proposed by a LessWrong forum poster called Roko: a sufficiently powerful AI in the future might torture people who didn't work to create it in the past, so that anyone who became aware of the idea would be forced to help create the AI in order to avoid being tortured. This idea is often misrepresented as something LessWrong readers believe, since the post was originally placed there and then deleted, and an outside wiki, RationalWiki, presented this as proof that LessWrong readers believed in Roko's Basilisk. Yudkowsky, who also owns LessWrong, has written that RationalWiki is deliberately misrepresenting this history. For some of the theory that was (arguably mis-)used to argue for Roko's Basilisk by the original believer, see the Newcomblike decision theories developed on LessWrong.
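
For readers unfamiliar with the term, "Newcomblike" refers to Newcomb's problem, the standard puzzle behind those theories: a very reliable predictor offers an opaque box that contains $1,000,000 only if it predicted you would take that box alone, alongside a transparent box that always contains $1,000. A minimal Python sketch, with an assumed (purely illustrative) 99% predictor accuracy, shows why expected value favors taking only the opaque box even though its contents are already fixed:

  # Standard Newcomb's problem payoffs; the 99% accuracy figure is illustrative.
  accuracy = 0.99
  opaque_prize, small_prize = 1_000_000, 1_000

  # Take only the opaque box: the predictor most likely foresaw this and filled it.
  ev_one_box = accuracy * opaque_prize

  # Take both boxes: the predictor most likely foresaw this and left the opaque box empty.
  ev_two_box = (1 - accuracy) * opaque_prize + small_prize

  print(f"one-box expected value: ${ev_one_box:,.0f}")  # $990,000
  print(f"two-box expected value: ${ev_two_box:,.0f}")  # $11,000

Roko's argument, very roughly, transplants this predictor-and-choice structure onto a future AI deciding whether to punish you based on what you choose to do now.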

Randall proposes making an all-powerful AI that will torture people who didn't make fun of those who believe in Roko's Basilisk. If you consider this a credible threat, you should make fun of people who believe in the Basilisk in order to avoid torture. You should probably not consider this a credible threat. Randall is also whimsically proposing a solution to Roko's Basilisk: presumably you would not both help build the Basilisk and make fun of the people who believe in it, so if you will be tortured whether or not you help build it, the Basilisk no longer gives you any incentive to help build it.
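
A minimal Python sketch of this incentive cancellation, assuming both threats are taken at face value, that helping to build the Basilisk and mocking its believers are mutually exclusive choices, and that each threatened torture counts as one (purely illustrative) unit of penalty:

  # The two mutually exclusive choices discussed above.
  choices = ["help build the Basilisk", "mock the Basilisk believers"]

  def penalty(choice):
      """Hypothetical total penalty incurred under both threats combined."""
      basilisk = 0 if choice == "help build the Basilisk" else 1        # Roko's Basilisk punishes non-helpers
      counter_ai = 0 if choice == "mock the Basilisk believers" else 1  # Randall's AI punishes non-mockers
      return basilisk + counter_ai

  for choice in choices:
      print(f"{choice}: total penalty = {penalty(choice)}")

Both choices come out to the same total penalty, so taking Randall's counter-threat as seriously as the original removes any reason to prefer building the Basilisk.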

This is also a reference to Elon Musk's recent remarks that AI could turn into a monster if not tamed. Musk, the CEO of Tesla Motors and SpaceX, went on to say that AI could be a greater threat than nuclear weapons. Similar warnings are expressed in Nick Bostrom's recent book Superintelligence: Paths, Dangers, Strategies. Musk also once tweeted that Roko's Basilisk should be known as the Rococo Basilisk, and Yudkowsky tweeted back that Musk should be careful not to trust journalists' accounts of the Basilisk, because their stories were getting the issue grossly wrong.


[Black Hat and Cueball stand next to a box labeled "SUPERINTELLIGENT AI - DO NOT OPEN" connected to a laptop.]

Black Hat: What's in there?

Cueball: The AI-Box Experiment.

[Zooms in on AI box.]

Cueball: A superintelligent AI can convince anyone of anything, so if it can talk to us, there's no way we could keep it contained.

[Shows Black Hat reaching for the box.]

Cueball: It can always convince us to let it out of the box.

Black Hat: Cool. Let's open it.

Cueball: --No, wait!!

[Black Hat lets a glowing orb out of the box.]

Orb: hey. i liked that box. put me back.

Black Hat: No.

[Orb is giving off a very bright light and Cueball is covering his face.]


Black Hat: AAA! OK!!!

[Black Hat lets orb back into box.]


[Black Hat and Cueball stand next to laptop and box looking at them.]


New here?


Lots of people contribute to make this wiki a success. Many of the recent contributors have just joined. You can do it too! Create your account here.

You can read a brief introduction about this wiki at explain xkcd. Feel free to sign up for an account and contribute to the wiki! We need explanations for comics, characters, themes, memes and everything in between. If it is referenced in an xkcd web comic, it should be here.

  • List of all comics contains a table of the most recent xkcd comics and links to the rest, along with the corresponding explanations. Incomplete explanations are also listed here. Feel free to help out by expanding them!
  • If you see that a new comic hasn't been explained yet, you can create it: Here's how.
  • We sell advertising space to pay for our server costs. To learn more, go here.


Don't be a jerk. There are a lot of comics that don't have set-in-stone explanations; feel free to put multiple interpretations on the wiki page for each comic.

If you want to talk about a specific comic, use its discussion page.

Please only submit material that is directly related to xkcd and that helps everyone better understand it, and of course only material that can legally be posted (and freely edited). Off-topic or otherwise inappropriate content is subject to removal or modification at admin discretion, and users who repeatedly post such content will be blocked.

If you need assistance from an admin, post a message to the Admin requests board.
