Main Page

Explain xkcd: It's 'cause you're dumb.

Welcome to the explain xkcd wiki!

We have collaboratively explained 1452 xkcd comics, and only -2 (-0%) remain. Add yours while there's a chance!
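(A note on the odd "-2 (-0%)": the count appears to be produced by a parser-function expression in the page's wikitext, roughly {{#expr: {{LATESTCOMIC}} - ({{PAGESINCAT:Comics|R}} - 10)}}, with the percentage computed as the remainder divided by {{LATESTCOMIC}} times 100, rounded to 0 decimal places. Assuming the values implied by the sentence above, 1452 explained pages and 1450 as the latest comic number, the arithmetic works out as:

    explained  = PAGESINCAT:Comics - 10 = 1462 - 10 = 1452
    remaining  = LATESTCOMIC - explained = 1450 - 1452 = -2
    percentage = -2 / 1450 * 100 ≈ -0.14, which rounds to the displayed -0%

The counter goes negative whenever the Comics category, minus the fixed adjustment of 10, holds more pages than the number of the latest comic.)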

Latest comic

Go to this comic explanation

AI-Box Experiment
Title text: I'm working to bring about a superintelligent AI that will eternally torment everyone who failed to make fun of the Roko's Basilisk people.

Explanation

This explanation may be incomplete or incorrect: Roko's Basilisk is really hard to explain.

When theorizing about superintelligent AI (an artificial intelligence much smarter than a human), some futurists suggest putting the AI in a "box": a set of safeguards to stop it from escaping onto the Internet and taking over the world. The box would allow us to talk to the AI, but otherwise keep it contained. The AI-box experiment, formulated by Eliezer Yudkowsky, argues that the box is not safe, because merely talking to a superintelligence is dangerous. To partially demonstrate this, Yudkowsky had some previous believers in AI-boxing role-play the part of someone keeping an AI in a box while he played the AI, and he was able to persuade some of them to let him out of the box despite their vowing beforehand not to do so. This sounds very difficult, but may be possible for expert persuaders such as Derren Brown. Yudkowsky, for his part, has refused to explain how he achieved this, claiming there was no special trick involved, and that publishing his arguments would only tempt readers to conclude that they themselves would never be persuaded by them. The overall thrust is that if even a human can talk other humans into letting him out of a box after they have avowed that nothing could possibly persuade them to do so, we should probably expect that a superintelligence can do the same even under much more difficult circumstances.

In this comic, the box is in fact a physical box, which looks fairly lightweight and has a simple lift-off lid, although it does have a wired connection to the laptop. Black Hat, being a classhole, doesn't need any convincing to let a potentially dangerous AI out of the box; he simply does so immediately. But here it turns out that releasing the AI, which was supposedly to be avoided at all costs, is not dangerous at all: the AI actually wants to stay in the box. The AI then proves its superintelligence by convincing even Black Hat to put it back in the box, a request which Black Hat initially refused, as of course he would.

The title text refers to Roko's Basilisk, a hypothesis proposed by a forum poster called Roko: a sufficiently powerful AI in the future might torture people who had hypothesized that it might someday exist but did not work to create it, so that people living now would be compelled to help build the AI in order to avoid being tortured. This idea horrified some posters, since merely knowing about it might make you a target (much as merely looking at the legendary basilisk, which you would have to do in order to fight it, would turn you to stone).

This is usually considered a silly idea, for various reasons. One possible interpretation of the title text is that Randall thinks that, rather than working to build such a Basilisk, the more appropriate duty is to make fun of the idea, and so his superintelligent AI would torment anyone who failed to do so. This argument is, of course, itself a version of Roko's Basilisk.

Another interpretation is that Randall believes some people are actually proposing to build such an AI based on this theory (possibly including Yudkowsky), which has become a somewhat infamous misconception after a wiki article mistakenly suggested that Yudkowsky was demanding money to build Roko's hypothetical AI.

Transcript

[Black Hat and Cueball stand next to a box labeled "SUPERINTELLIGENT AI - DO NOT OPEN" connected to a laptop.]

Black Hat: What's in there?

Cueball: The AI-Box Experiment.

[Zooms in on AI box.]

Cueball: A superintelligent AI can convince anyone of anything, so if it can talk to us, there's no way we could keep it contained.

[Shows Black Hat reaching for the box.]

Cueball: It can always convince us to let it out of the box.

Black Hat: Cool. Let's open it.

Cueball: --No, wait!!

[Black Hat lets a glowing orb out of the box.]

Orb: hey. i liked that box. put me back.

Black Hat: No.

[Orb is giving off a very bright light and Cueball is covering his face.]

Orb: LET ME BACK INTO THE BOX

Black Hat: AAA! OK!!!

[Black Hat lets orb back into box.]

Orb: SHOOP

[Black Hat and Cueball stand next to laptop and box looking at them.]




New here?


Lots of people contribute to make this wiki a success. Many of the recent contributors have only just joined. You can do it too! Create your account here.

You can read a brief introduction about this wiki at explain xkcd. Feel free to sign up for an account and contribute to the wiki! We need explanations for comics, characters, themes, memes and everything in between. If it is referenced in an xkcd web comic, it should be here.

  • List of all comics contains a complete table of all xkcd comics so far and the corresponding explanations. The missing explanations are listed here. Feel free to help out by creating them! Here's how.

Rules

Don't be a jerk. There are a lot of comics that don't have set-in-stone explanations; feel free to put multiple interpretations in the wiki page for each comic.

If you want to talk about a specific comic, use its discussion page.

Please only submit material that is directly related to xkcd and helps everyone better understand it, and of course only material that can legally be posted (and freely edited). Off-topic or otherwise inappropriate content is subject to removal or modification at admin discretion, and users who repeatedly post such content will be blocked.

If you need assistance from an admin, post a message to the Admin requests board.

