<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
		<id>https://www.explainxkcd.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kashim</id>
		<title>explain xkcd - User contributions [en]</title>
		<link rel="self" type="application/atom+xml" href="https://www.explainxkcd.com/wiki/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Kashim"/>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php/Special:Contributions/Kashim"/>
		<updated>2026-04-13T18:55:58Z</updated>
		<subtitle>User contributions</subtitle>
		<generator>MediaWiki 1.30.0</generator>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=2021:_Software_Development&amp;diff=160217</id>
		<title>2021: Software Development</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=2021:_Software_Development&amp;diff=160217"/>
				<updated>2018-07-18T18:26:55Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 2021&lt;br /&gt;
| date      = July 18, 2018&lt;br /&gt;
| title     = Software Development&lt;br /&gt;
| image     = software_development.png&lt;br /&gt;
| titletext = Update: It turns out the cannon has a motorized base, and can make holes just fine using the barrel itself as a battering ram. But due to design constraints it won't work without a projectile loaded in, so we still need those drills.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by an AUTOMATIC DRILL CANNON - Please change this comment when editing this page. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
Software development is often characterized by [[1513: Code Quality|graceless]] [[1695: Code Quality 2|solutions]] to rudimentary problems. [[Cueball]] has built an elegant drill (function) that automatically adjusts its torque and speed as needed to fulfill his requirement of 500 holes in the wall. [[Hairy]], in a categorically inelegant solution, loads 500 drills into a cannon and shoots them at the wall. In reality, this solution would entail too many ludicrous safety problems to execute, but in software, the implications are only [[1833: Code Quality 3|really bad code]].&lt;br /&gt;
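&lt;br /&gt;
The contrast can be sketched in code. The following is a minimal illustrative sketch only (all names and formulas are made up, not from the comic): Cueball's drill is a single well-engineered, parameterized function, while Hairy's approach mass-produces single-use copies and applies them ballistically.&lt;br /&gt;
&lt;br /&gt;
 # Illustrative sketch only; names and formulas are hypothetical.&lt;br /&gt;
 def drill_hole(x, y):&lt;br /&gt;
     # Cueball's way: one reusable function that adapts its own parameters.&lt;br /&gt;
     torque = 5.0 + 0.1 * y          # pretend the material varies with height&lt;br /&gt;
     speed = 1200 / torque           # precision gears trade speed for torque&lt;br /&gt;
     return ('hole', x, y, torque, speed)&lt;br /&gt;
 &lt;br /&gt;
 def cannon_500_drills(positions):&lt;br /&gt;
     # Hairy's way: build 500 single-use drills and fire them at the wall.&lt;br /&gt;
     return [('impact', x, y) for (x, y) in positions]&lt;br /&gt;
 &lt;br /&gt;
 holes = [drill_hole(x, 0) for x in range(500)]              # elegant&lt;br /&gt;
 craters = cannon_500_drills([(x, 0) for x in range(500)])   # ballistic&lt;br /&gt;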
&lt;br /&gt;
The casual disregard for the software itself is reminiscent of the idea of [https://devops.stackexchange.com/questions/653/what-is-the-definition-of-cattle-not-pets cattle not pets] when deploying to servers.&lt;br /&gt;
&lt;br /&gt;
This is also reminiscent of assigning two different software teams to solve two different parts of a problem, and then having to make the tools that both teams built work together. The &amp;quot;Drill team&amp;quot; was tasked with making the part of the system that &amp;quot;makes a hole in the wall&amp;quot;. The &amp;quot;Cannon team&amp;quot; was tasked with making the part that &amp;quot;aims what the 'Drill team' produces at the designated place on the wall, and then starts the hole-making process.&amp;quot; The Drill team assumed that the aiming device would simply move their portion into place and let it make the hole, but the Cannon team could make no assumptions about how the Drill team would generate holes, so they needed something that could use whatever the Drill team produced to make the holes. This breeds an attitude of, &amp;quot;We don't know what they are going to make, but we know that if we fire it out of a cannon, it will definitely make a hole in the wall.&amp;quot;&lt;br /&gt;
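&lt;br /&gt;
A minimal sketch of that integration mismatch (hypothetical names, not from the comic): the Cannon team codes against the only property it can safely assume about the Drill team's deliverable, so the deliverable's actual hole-making ability goes unused.&lt;br /&gt;
&lt;br /&gt;
 # Illustrative sketch only; class and method names are hypothetical.&lt;br /&gt;
 class Drill:&lt;br /&gt;
     # Drill team's deliverable: knows how to make a hole by itself.&lt;br /&gt;
     def make_hole(self):&lt;br /&gt;
         return 'clean, precise hole'&lt;br /&gt;
 &lt;br /&gt;
 class Cannon:&lt;br /&gt;
     # Cannon team cannot assume make_hole() exists on the payload, so it&lt;br /&gt;
     # relies on the one universal truth: anything fired hard enough makes a hole.&lt;br /&gt;
     def deploy(self, payload, target):&lt;br /&gt;
         return 'ragged hole in ' + target + ' via airborne ' + type(payload).__name__&lt;br /&gt;
 &lt;br /&gt;
 print(Cannon().deploy(Drill(), 'the wall'))   # Drill.make_hole() is never called&lt;br /&gt;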
&lt;br /&gt;
The title text is a joke about how often, in software, the best solution to a problem is a general one rather than a specific one. See, for example, developers using Ruby on Rails (a full web framework with support for emails, templating, and web sockets) for a simple API-only service. They only need a very small part of Rails (the hole-drilling part), but end up with the whole framework anyway due to design limitations.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript|Do NOT delete this tag too soon.}}&lt;br /&gt;
:[Cueball and Hairy are standing together and Hairy holds a power tool in his hands.]&lt;br /&gt;
:Cueball: We need to make 500 holes in that wall, so I've built this automatic drill. It uses elegant precision gears to continually adjust its torque and speed as needed.&lt;br /&gt;
:Hairy: Great, it's the perfect weight! We'll load 500 of them into the cannon we made and shoot them at the wall.&lt;br /&gt;
&lt;br /&gt;
:[Caption below the frame:]&lt;br /&gt;
:How software development works&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
[[Category:Comics featuring Cueball]]&lt;br /&gt;
[[Category:Comics featuring Hairy]]&lt;br /&gt;
[[Category:Programming]]&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1934:_Phone_Security&amp;diff=149876</id>
		<title>Talk:1934: Phone Security</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1934:_Phone_Security&amp;diff=149876"/>
				<updated>2017-12-28T14:15:11Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--Please sign your posts with ~~~~ and don't delete this text. New comments should be added at the bottom.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;quot;Detonated&amp;quot; ah, so this is the feature that Samsung was prototyping last year... [[User:Andyd273|Andyd273]] ([[User talk:Andyd273|talk]]) 15:39, 27 December 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
:Ha! Yes, it's too bad their phones kept mistakenly registering as being stolen... stupid bugs. [[Special:Contributions/172.69.70.107|172.69.70.107]] 17:28, 27 December 2017 (UTC) Sam&lt;br /&gt;
&lt;br /&gt;
:Back in the day if a hacker really hated you, you'd come back to your computer and see smoke pouring out of the CPU.  I bet there's some way to detonate a phone in software by overheating the battery, but I imagine it could be different for every phone/battery combination.&lt;br /&gt;
&lt;br /&gt;
Someone needs to make a jailbreak that does as much of this as possible, especially the ridesharing and siren 😂 [[User:PotatoGod|PotatoGod]] ([[User talk:PotatoGod|talk]]) 15:52, 27 December 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
: There was, or is, at least one house in the U.S. that was reported, apparently inaccurately, as the location of an extraordinary number of stolen cell phones.  Presumably that house would suffer all of the pranks that this phone security performs.  As for payment details - someone who stole a phone may have also stolen banking cards, so the account number that you steal back may belong to another innocent victim.  It's just a joke of course, but still.  [[Special:Contributions/162.158.111.235|162.158.111.235]] 22:02, 27 December 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
Made an account just to ask this - why is the post still considered incomplete? It looks complete to me. [[User:Donutman|Donutman]] ([[User talk:Donutman|talk]]) 13:59, 28 December 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
: Because I added the core explanation, but many improvements have been made since then (bullet points, bolding, transcript). Also, the siren would be insanely easy to do, as would an automated &amp;quot;send the GPS location to the police&amp;quot; among other things. [[User:Kashim|Kashim]] ([[User talk:Kashim|talk]]) 14:15, 28 December 2017 (UTC)&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1934:_Phone_Security&amp;diff=149842</id>
		<title>1934: Phone Security</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1934:_Phone_Security&amp;diff=149842"/>
				<updated>2017-12-27T15:42:10Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1934&lt;br /&gt;
| date      = December 27, 2017&lt;br /&gt;
| title     = Phone Security&lt;br /&gt;
| image     = phone_security.png&lt;br /&gt;
| titletext = ...wait until they type in payment information, then use it to order yourself a replacement phone.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic pokes fun at various phone security measures. It starts with some real measures, then continues on to measures that are clearly overzealous or otherwise humorous. It is worth noting that all of the options on the screen shown are turned ON, so the owner must either be very afraid that their phone is going to be stolen, or just want to see what will happen.&lt;br /&gt;
These may be options that would appear on the [[:Category:xkcd Phones|XKCD Phone]], but that is not mentioned specifically, and this comic does not appear to be directly linked to it.&lt;br /&gt;
&lt;br /&gt;
The first two options, setting a passcode to unlock and erasing the phone after 10 failed unlock attempts, are both real security measures found on most phones.&lt;br /&gt;
The additional options:  &lt;br /&gt;
&lt;br /&gt;
If phone is stolen it may be:  &lt;br /&gt;
* Tracked: This would be reasonable, as it would allow the police to catch the perpetrator and return your phone.  &lt;br /&gt;
* Erased: This would also be reasonable, as it would prevent any sensitive data from being taken by a thief.  &lt;br /&gt;
* Detonated: This would be less reasonable, as it would likely harm the thief, possibly severely depending on how the phone was detonated.&lt;br /&gt;
&lt;br /&gt;
If the phone is stolen, play an earsplitting siren until the battery dies or is removed: This would be to draw attention to the thief, and discourage them from stealing future phones.  &lt;br /&gt;
&lt;br /&gt;
If the phone is stolen, do a fake factory reset. Then, in the background...: This series of options is humorous, indicating that the phone would allow the thief to think that it had been factory reset when in fact it had not, and would instead foil the thief by doing various horrible things to them.&lt;br /&gt;
&lt;br /&gt;
* Constantly Request Dozens of Simultaneous Rideshares to the Phone's Location: This would cause tons of &amp;quot;rides&amp;quot; to show up at the stolen phone, leaving a lot of annoyed ridesharers, and possibly alerting the police to the thief's location.  &lt;br /&gt;
* Automatically order food to the Phone's location from every delivery place within 20 miles: This would be similar to the ridesharing issue, except that the thief would presumably be on the hook to pay for all of that delivered food. This could also lead the police to the thief.&lt;br /&gt;
* If the thief logs into Facebook, send hostile messages to all their family members: This deviates from things that could even possibly be useful, and is simply revenge on the thief, or potentially on the person that the thief sells the phone to.&lt;br /&gt;
*Automatically direct self-driving car to drive toward the phone's location at 5 mph: This would cause a self-driving car to slowly follow the thief. This could absolutely catch the thief, but would also just be really, really creepy. It is similar to the premise of the movie &amp;quot;{{w|It Follows}}&amp;quot;.&lt;br /&gt;
*Take photos of random objects at the thief's address and post them as &amp;quot;Free&amp;quot; on Craigslist or NextDoor: Craigslist and NextDoor are sites that allow people to post advertisements for various things. Posting a large number of items for free would cause a lot of people to show up at the thief's residence (though it is not explained how the phone would know where the thief resides) requesting the free things. More humorously, if the thief was not home, people might simply come by and take things, in effect stealing from the thief. This would be a humorous form of poetic justice.&lt;br /&gt;
&lt;br /&gt;
The title text extends the last category with: Wait until they type in payment information, then use it to order yourself a new phone. This would be the ultimate poetic justice, as it implies that the owner does not care if their phone gets stolen, because the thief will end up unintentionally buying them a new one. If the thief were to complain about this, they would have to admit that they had stolen the first phone, which they would be disinclined to do.&lt;br /&gt;
&lt;br /&gt;
All these options are toggled on.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
&lt;br /&gt;
Security Options&lt;br /&gt;
&lt;br /&gt;
* Passcode to unlock [Set Code]&lt;br /&gt;
* Erase phone after ten failed unlock attempts&lt;br /&gt;
&lt;br /&gt;
If stolen, phone can be remotely&lt;br /&gt;
&lt;br /&gt;
* Tracked&lt;br /&gt;
* Erased&lt;br /&gt;
* Detonated&lt;br /&gt;
&lt;br /&gt;
* If phone is stolen, erase data and play an earsplitting siren until the battery dies or is removed.&lt;br /&gt;
&lt;br /&gt;
If phone is stolen do a fake factory reset. Then, in the background...&lt;br /&gt;
&lt;br /&gt;
* ...constantly request dozens of simultaneous rideshares to the phone's location. &lt;br /&gt;
* ...automatically order food to phone's location from every delivery place within 20 miles.&lt;br /&gt;
* ...if thief logs into Facebook, send hostile messages to all their family members.&lt;br /&gt;
* ...automatically direct self-driving car to drive toward phone's location at 5 mph.&lt;br /&gt;
* ...take photos of random objects at the thief's address and post them as &amp;quot;free&amp;quot; on Craigslist and Nextdoor.&lt;br /&gt;
&lt;br /&gt;
Title Text: ...wait until they type in payment information, then use it to order yourself a replacement phone.&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1934:_Phone_Security&amp;diff=149834</id>
		<title>1934: Phone Security</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1934:_Phone_Security&amp;diff=149834"/>
				<updated>2017-12-27T15:03:29Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: Added an initial explanation.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1934&lt;br /&gt;
| date      = December 27, 2017&lt;br /&gt;
| title     = Phone Security&lt;br /&gt;
| image     = phone_security.png&lt;br /&gt;
| titletext = ...wait until they type in payment information, then use it to order yourself a replacement phone.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Created by a Man whose Phone was stolen - Please change this comment when editing this page. Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
This comic pokes fun at various phone security measures. It starts with some real measures, then continues on to measures that are clearly overzealous or otherwise humorous. It is worth noting that all of the options on the screen shown are turned ON, so the owner must either be very afraid that their phone is going to be stolen, or just want to see what will happen.&lt;br /&gt;
These may be options that would appear on the XKCD phone, but that is not mentioned specifically, and this comic does not appear to be directly linked to it.&lt;br /&gt;
&lt;br /&gt;
The first two options, setting a passcode to unlock and erasing the phone after 10 failed unlock attempts, are both real security measures found on most phones.&lt;br /&gt;
The additional options: &lt;br /&gt;
If phone is stolen it may be:&lt;br /&gt;
    Tracked: This would be reasonable, as it would allow the police to catch the perpetrator and return your phone.&lt;br /&gt;
    Erased: This would also be reasonable, as it would prevent any sensitive data from being taken by a thief.&lt;br /&gt;
    Detonated: This would be less reasonable, as it would likely harm the thief, possibly severely depending on how the phone was detonated.&lt;br /&gt;
If the phone is stolen, play an earsplitting siren until the battery dies or is removed: This would be to draw attention to the thief, and discourage them from stealing future phones.&lt;br /&gt;
If the phone is stolen, do a fake factory reset. Then, in the background...: This series of options is humorous, indicating that the phone would allow the thief to think that it had been factory reset when in fact it had not, and would instead foil the thief by doing various horrible things to them.&lt;br /&gt;
    Constantly Request Dozens of Simultaneous Rideshares to the Phone's Location: This would cause tons of &amp;quot;rides&amp;quot; to show up at the stolen phone, leaving a lot of annoyed ridesharers, and possibly alerting the police to the thief's location.&lt;br /&gt;
    Automatically order food to the Phone's location from every delivery place within 20 miles: This would be similar to the ridesharing issue, except that the thief would presumably be on the hook to pay for all of that delivered food. This could also lead the police to the thief.&lt;br /&gt;
    If the thief logs into Facebook, send hostile messages to all their family members: This deviates from things that could even possibly be useful, and is simply revenge on the thief, or potentially on the person that the thief sells the phone to.&lt;br /&gt;
    Automatically direct self-driving car to drive toward the phone's location at 5 mph: This would cause a self-driving car to slowly follow the thief. This could absolutely catch the thief, but would also just be really, really creepy.&lt;br /&gt;
    Take photos of random objects at the thief's address and post them as &amp;quot;Free&amp;quot; on Craigslist or NextDoor: Craigslist and NextDoor are sites that allow people to post advertisements for various things. Posting a large number of items for free would cause a lot of people to show up at the thief's residence (though it is not explained how the phone would know where the thief resides) requesting the free things. More humorously, if the thief was not home, people might simply come by and take things, in effect stealing from the thief. This would be a humorous form of poetic justice.&lt;br /&gt;
&lt;br /&gt;
The title text extends the last category with: Wait until they type in payment information, then use it to order yourself a new phone. This would be the ultimate poetic justice, as it implies that the owner does not care if their phone gets stolen, because the thief will end up unintentionally buying them a new one. If the thief were to complain about this, they would have to admit that they had stolen the first phone, which they would be disinclined to do.&lt;br /&gt;
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript|Do NOT delete this tag too soon.}}&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1902:_State_Borders&amp;diff=146596</id>
		<title>Talk:1902: State Borders</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1902:_State_Borders&amp;diff=146596"/>
				<updated>2017-10-13T17:03:04Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--Please sign your posts with ~~~~ and don't delete this text. New comments should be added at the bottom.--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Let's be honest- it should ''all'' be Canada. [[Special:Contributions/162.158.74.123|162.158.74.123]] 12:24, 13 October 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
Could Arizona, New Mexico be a reference to Trump? Like, make the border straighter so it's easier to build a wall? [[User:Herobrine|Herobrine]] ([[User talk:Herobrine|talk]]) 12:35, 13 October 2017 (UTC)&lt;br /&gt;
:More likely the joke is that conceding territory to Mexico is about the last thing Trump would do [[User:AnotherAnonymous|AnotherAnonymous]] ([[User talk:AnotherAnonymous|talk]]) 13:04, 13 October 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
My first thought is to wonder if it would be possible to arrange the map such that all internal borders are &amp;quot;straight lines&amp;quot; that span the entire country, to satisfy as many criteria as possible:&lt;br /&gt;
* The number of states remains unchanged&lt;br /&gt;
** …and they all get to keep their capitals (probably quite difficult)&lt;br /&gt;
*** …or (and?) each state manages to keep either its current population, land area, or coastline length&lt;br /&gt;
* Or all internal borders are parallels or meridians&lt;br /&gt;
* Or all states have the same land area&lt;br /&gt;
** …or population; or population density&lt;br /&gt;
* Or if you're allowing more (or fewer) states than the present layout, what's the greatest number of states possible such that they all contain at least one complete city?&lt;br /&gt;
&lt;br /&gt;
Which of those criteria would be the most interesting challenge? And which could you construct an algorithm to solve?&lt;br /&gt;
I really should refrain from trying to build those algorithms, because I'm supposed to be working --[[User:Angel|Angel]] ([[User talk:Angel|talk]]) 13:28, 13 October 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
There are some great videos on YouTube about weird State boundaries. There are some REALLY weird oddities out there. Take for instance the &amp;quot;Give to Canada&amp;quot; piece - that's the Northwest Angle in Minnesota. It's really an accident that it ever ended up in the USA at all, and doesn't make any sense! [[User:Martini|Martini]] ([[User talk:Martini|talk]]) 13:40, 13 October 2017 (UTC)Martini&lt;br /&gt;
:I wouldn't call the NW Angle an accident as much as a slightly illogical solution in order to maintain the terms of the original border agreement in the face of the Mississippi River's inconveniently located headwaters. My recollection is that it said roughly: the border goes west of &amp;lt;this&amp;gt; point until reaching the Mississippi river [which all parties assumed continued that far north]. [[Special:Contributions/108.162.216.40|108.162.216.40]] 14:13, 13 October 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
I believe Randall's overall point is that though many of the individual United States have straight boundaries, especially in the West, or other features that are aesthetically pleasing, as in the S Carolina/Georgia/Florida coastline, there are a good number of internal inconsistencies. Many of these (most of the untagged &amp;quot;fixes&amp;quot;) can be attributed to the idea that &amp;quot;Rivers make good logical boundaries&amp;quot;, but even then, if you look closer, there are some really puzzling bits: &lt;br /&gt;
* The &amp;quot;Give To Canada&amp;quot; bit of Minnesota is almost all Indian Reservation land, so that kind of makes sense...&lt;br /&gt;
* The &amp;quot;Fix this thing&amp;quot; in Missouri is even stranger than it initially looks - while the notch in Arkansas is caused by the Mississippi River, there is a large bight of land in the middle of the Missouri-owned bit that is actually Kentucky (yes, there's an island of Kentucky that is separate from the main Kentucky state and entirely surrounded by Missouri)&lt;br /&gt;
* Not edited, but equally odd is the dip Florida cuts into Georgia near the east coast - there's no apparent town or natural features there to cause that irregularity &lt;br /&gt;
&lt;br /&gt;
I don't happen to think the Arizona/New Mexico bits are political commentary, just &amp;quot;the entire rest of the state is a box, so make this a straight line too&amp;quot; cleanup. I mean yes, it would theoretically make wall-building easier, but the Chinese showed the world centuries ago that straight lines are not needed to build a big fricking wall. [[Special:Contributions/108.162.238.131|108.162.238.131]] 14:23, 13 October 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
I'm surprised Randall didn't suggest cleaning up Point Roberts as well [https://en.wikipedia.org/wiki/Point_Roberts,_Washington]. [[Special:Contributions/141.101.107.174|141.101.107.174]] 14:33, 13 October 2017 (UTC)&lt;br /&gt;
&lt;br /&gt;
I'm shocked he didn't support fixing the Idaho/Washington/Montana/Oregon border. That top part should be either given to Montana, or split between Washington and Oregon... I wonder if he left out certain things in order to avoid offending certain groups of people, like suggesting that Rhode Island and Connecticut should probably be one state, or that Vermont and New Hampshire should be as well.  [[User:Kashim|Kashim]] ([[User talk:Kashim|talk]]) 17:03, 13 October 2017 (UTC)&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1772:_Startup_Opportunity&amp;diff=132531</id>
		<title>Talk:1772: Startup Opportunity</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1772:_Startup_Opportunity&amp;diff=132531"/>
				<updated>2016-12-14T21:34:06Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: added a comment&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--Please sign your posts with ~~~~--&amp;gt;&lt;br /&gt;
More escapades of Beret guy's business - [[1021]], [[1032]], and probably more --[[User:AnotherAnonymous|AnotherAnonymous]] ([[User talk:AnotherAnonymous|talk]]) 15:41, 14 December 2016 (UTC)&lt;br /&gt;
&lt;br /&gt;
It may be a reference to an episode of the Adult Swim show Rick and Morty. In season 1 episode 9, &amp;quot;Something Ricked This Way Comes&amp;quot;, the devil sets up a shop that gives away magical items that appear to give the user some superpower or other advantage but turn out to be cursed; for example, a typewriter that helps the user write best-selling murder mystery books, but then the murders happen to them in real life. Rick decides to open his own business to un-curse items while letting them keep their magic power, thus disrupting the devil's entire business.&lt;br /&gt;
&lt;br /&gt;
== Online virtual world ==&lt;br /&gt;
&lt;br /&gt;
I think this comic could be referring to an online virtual world. There are several sites that sell virtual goods for real money.  Players can also trade virtual currency for virtual magic items.  The fact that the shop is in a virtual world could explain why it looks like it never existed.&lt;br /&gt;
&lt;br /&gt;
Temporary shops that sell items to adventurers in need are a common theme in many games. O'aka XXIII in FFX is the first one that comes to mind, but there are a LOT. Many of these shops sell items that are of particular value at the time, but another common theme is to sell unidentified or even cursed items, admonishing the player for trusting some random guy they met in the wilderness. Sometimes these &amp;quot;cursed&amp;quot; items end up being plot essential. The really crooked shops also offer to uncurse the items once they are identified (or once the user has discovered they are cursed by equipping them before they are fully identified). Mordor: The Depths of Dejenol is an old game that had cursed items that you had to pay the shop to have removed before you could level up. Some of the items, though, were &amp;quot;cursed&amp;quot; but provided real benefits, and players would equip them intentionally every level, knowing that they'd have to pay, because the benefit was great enough. [[User:Kashim|Kashim]] ([[User talk:Kashim|talk]]) 21:34, 14 December 2016 (UTC)&lt;br /&gt;
&lt;br /&gt;
--[[Special:Contributions/108.162.219.94|108.162.219.94]] 18:12, 14 December 2016 (UTC)&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1772:_Startup_Opportunity&amp;diff=132530</id>
		<title>Talk:1772: Startup Opportunity</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1772:_Startup_Opportunity&amp;diff=132530"/>
				<updated>2016-12-14T21:33:28Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&amp;lt;!--Please sign your posts with ~~~~--&amp;gt;&lt;br /&gt;
More escapades of Beret guy's business - [[1021]], [[1032]], and probably more --[[User:AnotherAnonymous|AnotherAnonymous]] ([[User talk:AnotherAnonymous|talk]]) 15:41, 14 December 2016 (UTC)&lt;br /&gt;
&lt;br /&gt;
It may be a reference to an episode of the Adult Swim show Rick and Morty. In season 1 episode 9, &amp;quot;Something Ricked This Way Comes&amp;quot;, the devil sets up a shop that gives away magical items that appear to give the user some superpower or other advantage but turn out to be cursed; for example, a typewriter that helps the user write best-selling murder mystery books, but then the murders happen to them in real life. Rick decides to open his own business to un-curse items while letting them keep their magic power, thus disrupting the devil's entire business.&lt;br /&gt;
&lt;br /&gt;
== Online virtual world ==&lt;br /&gt;
&lt;br /&gt;
I think this comic could be referring to an online virtual world. There are several sites that sell virtual goods for real money.  Players can also trade virtual currency for virtual magic items.  The fact that the shop is in a virtual world could explain why it looks like it never existed.&lt;br /&gt;
&lt;br /&gt;
Temporary shops that sell items to adventurers in need are a common theme in many games. O'aka XXIII in FFX is the first one that comes to mind, but there are a LOT. Many of these shops sell items that are of particular value at the time, but another common theme is to sell unidentified or even cursed items, admonishing the player for trusting some random guy they met in the wilderness. Sometimes these &amp;quot;cursed&amp;quot; items end up being plot essential. The really crooked shops also offer to uncurse the items once they are identified (or once the user has discovered they are cursed by equipping them before they are fully identified). Mordor: The Depths of Dejenol is an old game that had cursed items that you had to pay the shop to have removed before you could level up. Some of the items, though, were &amp;quot;cursed&amp;quot; but provided real benefits, and players would equip them intentionally every level, knowing that they'd have to pay, because the benefit was great enough. [[User:Kashim|Kashim]] ([[User talk:Kashim|talk]]) 21:33, 14 December 2016 (UTC)&lt;br /&gt;
&lt;br /&gt;
--[[Special:Contributions/108.162.219.94|108.162.219.94]] 18:12, 14 December 2016 (UTC)&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1619:_Watson_Medical_Algorithm&amp;diff=107479</id>
		<title>Talk:1619: Watson Medical Algorithm</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1619:_Watson_Medical_Algorithm&amp;diff=107479"/>
				<updated>2015-12-21T20:11:54Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Build environment is still insane since comic #371. {{unsigned ip|162.158.2.139}}&lt;br /&gt;
&lt;br /&gt;
(Above poster please sign comments with four tildes)&lt;br /&gt;
&lt;br /&gt;
I'm trying to picture Baymax using this algorithm. {{unsigned|International Space Station}}&lt;br /&gt;
&lt;br /&gt;
:&amp;quot;OK, who swapped out Baymax's programming card with a Doomba AI?&amp;quot; [[User:VectorLightning|VectorLightning]] ([[User talk:VectorLightning|talk]]) 08:02, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
Well, at least the autoconfig isn't as threatening as #416.&lt;br /&gt;
[[Special:Contributions/108.162.245.179|108.162.245.179]] 07:00, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I suspect that the extra limbs should be removed when there are 100+ and Vitamin D levels checked when the number of limbs is in an acceptable range... does IBM use a ticketing system? [[Special:Contributions/162.158.91.194|162.158.91.194]] 08:39, 21 December 2015 (UTC)&lt;br /&gt;
: Unfortunately the algorithm as shown in the cartoon has the conditions for those two steps exactly the other way around, making even less sense medically. --[[User:Svenman|Svenman]] ([[User talk:Svenman|talk]]) 14:30, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
It seems a normal patient would end up mostly unscathed and in an infinite loop in the lower right corner. [[User:Benjaminikuta|Benjaminikuta]] ([[User talk:Benjaminikuta|talk]]) 09:01, 21 December 2015 (UTC)&lt;br /&gt;
:Uhm no. You would normally have an oxygen level above 50% of what is expected. (It should be close to 100% if I understand [http://www.nonin.com/Normal-Oxygen-Level this correctly], which I may not...). This means you have had your skeleton removed. If you survived this you are squeezed until fluid comes out. (Probably not necessary after the skelerectomy). But then you end up in the lower right corner. Of course you can also get there after just getting an oxygen injection, but only directly if you are not comforted when the program tries. If you were comforted you will lose some limbs. And then end up in the lower right corner. No matter what, if you are still OK (could be possible) when reaching here, you will be asked about your pain level. And even if you start by saying 0-8 many, many times, getting as many scalp massages, you will just get the same question, until you say 10, and then your eyes will be removed. But no matter what, if you are asked such a stupid question enough times you will surely at some point say something other than 0-10, and then you will die, as this answer will take you down the last path of the program (and only exit of the cycle according to the glitch mentioned in the title text), and this will end up with the program performing an autopsy on you, thus cutting you up and removing all organs etc. So no, you will not be able to go unscathed infinitely, and even if you kept saying 0-8 you would eventually die from thirst. ;) --[[User:Kynde|Kynde]] ([[User talk:Kynde|talk]]) 09:24, 21 December 2015 (UTC)&lt;br /&gt;
::Yes, normal oxygen saturation is 98-100% in air.  If it drops below 95% you will be in trouble, if it drops below 85% you're likely dead. [[User:Kev|Kev]] ([[User talk:Kev|talk]]) 09:54, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
This might also partially be a reference to machine learning, which Watson apparently uses: badly designed ML systems often build models which produce the expected results for the training data, but do something unexpected or wrong with real data. See [https://en.wikipedia.org/wiki/Overfitting#Machine_learning]. That said ... 'dissect doctor for parts' doesn't seem like a reasonable response to any training input ;) [[Special:Contributions/162.158.39.208|162.158.39.208]] 10:41, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
The noted &amp;quot;unrelated actions&amp;quot; aren't all entirely unrelated. The coughing blood one is interpreted backwards (so &amp;quot;is patient not coughing up blood because the patient is not here to do so?&amp;quot;), the vitamin D one is somewhat logical (vit D is part of the chain that converts calcium to bone, low vit D can cause bone loss, but high vit D is basically harmless), and the green fluid one is slightly sane but too vague (the logic appears to be that green fluid indicates severely infected and/or necrotic tissue, for which cauterizing might be a valid treatment step in extreme situations).  Weirdly specific might be a better header? [[Special:Contributions/141.101.106.197|141.101.106.197]] 11:57, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
So what happens if the skeleton has exactly the right number of bones? --[[Special:Contributions/162.158.153.71|162.158.153.71]] 12:32, 21 December 2015 (UTC)&lt;br /&gt;
: Indeed this case is not covered, thus making the algorithm faulty even on an abstract logical level. --[[User:Svenman|Svenman]] ([[User talk:Svenman|talk]]) 14:33, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
The Request organ donation/Remove organs part reminds me of the Live Organ Transplants segment in ''{{w|Monty Python's The Meaning of Life}}''. --[[User:Valepert|valepert]] ([[User talk:Valepert|talk]]) 12:53, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
100 could be a reference to 4 in binary (binary 100 = decimal 4, i.e. 4+ limbs / fewer than 4 limbs) [[Special:Contributions/141.101.99.39|141.101.99.39]] 12:59, 21 December 2015 (UTC)&lt;br /&gt;
:I believe you're correct. [[User:Mikemk|Mikemk]] ([[User talk:Mikemk|talk]]) 15:17, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I think GLaDOS is a descendant of this Watson. [[User:Mikemk|Mikemk]] ([[User talk:Mikemk|talk]]) 15:17, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The part about squeezing and looking for the color of the ooze seems to reference Humorism. The colors match the four humors. [[Special:Contributions/162.158.91.188|162.158.91.188]] 15:31, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I'm surprised he didn't make a Dr Watson joke/reference.--[[User:R0hrshach|R0hrshach]] ([[User talk:R0hrshach|talk]]) 17:33, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
This algorithm certainly does not exit without the death of the patient; however, such a death can result from old age, as long as the patient can make it to the bottom-right infinite loop and continuously reports a number from 0-9 for pain. It IS possible to make it to that loop alive. Extremely low blood oxygen levels have been recorded in healthy Everest climbers, but the article I read gave the results in kilopascals, not in %, so I don't know how that converts. However, repeatedly reporting a pain level of 0-8 would result in continuous scalp massages, which may actually be considered pleasant. [[User:Kashim|Kashim]] ([[User talk:Kashim|talk]]) 20:11, 21 December 2015 (UTC)&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1619:_Watson_Medical_Algorithm&amp;diff=107478</id>
		<title>Talk:1619: Watson Medical Algorithm</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1619:_Watson_Medical_Algorithm&amp;diff=107478"/>
				<updated>2015-12-21T20:10:48Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Build environment is still insane since comic #371. {{unsigned ip|162.158.2.139}}&lt;br /&gt;
&lt;br /&gt;
(Above poster please sign comments with four tildes)&lt;br /&gt;
&lt;br /&gt;
I'm trying to picture Baymax using this algorithm. {{unsigned|International Space Station}}&lt;br /&gt;
&lt;br /&gt;
:&amp;quot;OK, who swapped out Baymax's programming card with a Doomba AI?&amp;quot; [[User:VectorLightning|VectorLightning]] ([[User talk:VectorLightning|talk]]) 08:02, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
Well, at least the autoconfig isn't as threatening as #416.&lt;br /&gt;
[[Special:Contributions/108.162.245.179|108.162.245.179]] 07:00, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I suspect that the extra limbs should be removed when there are 100+ and Vitamin D levels checked when the number of limbs is in an acceptable range... does IBM use a ticketing system? [[Special:Contributions/162.158.91.194|162.158.91.194]] 08:39, 21 December 2015 (UTC)&lt;br /&gt;
: Unfortunately the algorithm as shown in the cartoon has the conditions for those two steps exactly the other way around, making even less sense medically. --[[User:Svenman|Svenman]] ([[User talk:Svenman|talk]]) 14:30, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
It seems a normal patient would end up mostly unscathed and in an infinite loop in the lower right corner. [[User:Benjaminikuta|Benjaminikuta]] ([[User talk:Benjaminikuta|talk]]) 09:01, 21 December 2015 (UTC)&lt;br /&gt;
:Uhm no. You would normally have an oxygen level above 50% of what is expected. (It should be close to 100% if I understand [http://www.nonin.com/Normal-Oxygen-Level this correctly], which I may not...). This means you have had your skeleton removed. If you survived this you are squeezed until fluid comes out. (Probably not necessary after the skelerectomy). But then you end up in the lower right corner. Of course you can also get there after just getting an oxygen injection, but only directly if you are not comforted when the program tries. If you were comforted you will lose some limbs. And then end up in the lower right corner. No matter what, if you are still OK (could be possible) when reaching here, you will be asked about your pain level. And even if you start by saying 0-8 many, many times, getting as many scalp massages, you will just get the same question, until you say 10, and then your eyes will be removed. But no matter what, if you are asked such a stupid question enough times you will surely at some point say something other than 0-10, and then you will die, as this answer will take you down the last path of the program (and only exit of the cycle according to the glitch mentioned in the title text), and this will end up with the program performing an autopsy on you, thus cutting you up and removing all organs etc. So no, you will not be able to go unscathed infinitely, and even if you kept saying 0-8 you would eventually die from thirst. ;) --[[User:Kynde|Kynde]] ([[User talk:Kynde|talk]]) 09:24, 21 December 2015 (UTC)&lt;br /&gt;
::Yes, normal oxygen saturation is 98-100% in air.  If it drops below 95% you will be in trouble, if it drops below 85% you're likely dead. [[User:Kev|Kev]] ([[User talk:Kev|talk]]) 09:54, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
This might also partially be a reference to machine learning, which Watson apparently uses: badly designed ML systems often build models which produce the expected results for the training data, but do something unexpected or wrong with real data. See [https://en.wikipedia.org/wiki/Overfitting#Machine_learning]. That said ... 'dissect doctor for parts' doesn't seem like a reasonable response to any training input ;) [[Special:Contributions/162.158.39.208|162.158.39.208]] 10:41, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
The noted &amp;quot;unrelated actions&amp;quot; aren't all entirely unrelated. The coughing blood one is interpreted backwards (so &amp;quot;is patient not coughing up blood because the patient is not here to do so?&amp;quot;), the vitamin D one is somewhat logical (vit D is part of the chain that converts calcium to bone, low vit D can cause bone loss, but high vit D is basically harmless), and the green fluid one is slightly sane but too vague (the logic appears to be that green fluid indicates severely infected and/or necrotic tissue, for which cauterizing might be a valid treatment step in extreme situations).  Weirdly specific might be a better header? [[Special:Contributions/141.101.106.197|141.101.106.197]] 11:57, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
So what happens if the skeleton has exactly the right number of bones? --[[Special:Contributions/162.158.153.71|162.158.153.71]] 12:32, 21 December 2015 (UTC)&lt;br /&gt;
: Indeed this case is not covered, thus making the algorithm faulty even on an abstract logical level. --[[User:Svenman|Svenman]] ([[User talk:Svenman|talk]]) 14:33, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
The Request organ donation/Remove organs part reminds me of the Live Organ Transplants segment in ''{{w|Monty Python's The Meaning of Life}}''. --[[User:Valepert|valepert]] ([[User talk:Valepert|talk]]) 12:53, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
100 could be a reference to 4 in binary (binary 100 = decimal 4, i.e. 4+ limbs / fewer than 4 limbs) [[Special:Contributions/141.101.99.39|141.101.99.39]] 12:59, 21 December 2015 (UTC)&lt;br /&gt;
:I believe you're correct. [[User:Mikemk|Mikemk]] ([[User talk:Mikemk|talk]]) 15:17, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I think GLaDOS is a descendant of this Watson. [[User:Mikemk|Mikemk]] ([[User talk:Mikemk|talk]]) 15:17, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The part about squeezing and looking for the color of the ooze seems to reference Humorism. The colors match the four humors. [[Special:Contributions/162.158.91.188|162.158.91.188]] 15:31, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I'm surprised he didn't make a Dr Watson joke/reference.--[[User:R0hrshach|R0hrshach]] ([[User talk:R0hrshach|talk]]) 17:33, 21 December 2015 (UTC)&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Talk:1619:_Watson_Medical_Algorithm&amp;diff=107477</id>
		<title>Talk:1619: Watson Medical Algorithm</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Talk:1619:_Watson_Medical_Algorithm&amp;diff=107477"/>
				<updated>2015-12-21T20:09:54Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;Build environment is still insane since comic #371. {{unsigned ip|162.158.2.139}}&lt;br /&gt;
&lt;br /&gt;
(Above poster please sign comments with four tildes)&lt;br /&gt;
&lt;br /&gt;
I'm trying to picture Baymax using this algorithm. {{unsigned|International Space Station}}&lt;br /&gt;
&lt;br /&gt;
:&amp;quot;OK, who swapped out Baymax's programming card with a Doomba AI?&amp;quot; [[User:VectorLightning|VectorLightning]] ([[User talk:VectorLightning|talk]]) 08:02, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
Well, at least the autoconfig isn't as threatening as #416.&lt;br /&gt;
[[Special:Contributions/108.162.245.179|108.162.245.179]] 07:00, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I suspect that the extra limbs should be removed when there are 100+ and Vitamin D levels checked when the number of limbs is in an acceptable range... does IBM use a ticketing system? [[Special:Contributions/162.158.91.194|162.158.91.194]] 08:39, 21 December 2015 (UTC)&lt;br /&gt;
: Unfortunately the algorithm as shown in the cartoon has the conditions for those two steps exactly the other way around, making even less sense medically. --[[User:Svenman|Svenman]] ([[User talk:Svenman|talk]]) 14:30, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
It seems a normal patient would end up mostly unscathed and in an infinite loop in the lower right corner. [[User:Benjaminikuta|Benjaminikuta]] ([[User talk:Benjaminikuta|talk]]) 09:01, 21 December 2015 (UTC)&lt;br /&gt;
:Uhm no. You would normally have an oxygen level above 50% of what is expected. (It should be close to 100% if I understand [http://www.nonin.com/Normal-Oxygen-Level this correctly], which I may not...). This means you have had your skeleton removed. If you survived this you are squeezed until fluid comes out. (Probably not necessary after the skelerectomy). But then you end up in the lower right corner. Of course you can also get there after just getting an oxygen injection, but only directly if you are not comforted when the program tries. If you were comforted you will lose some limbs. And then end up in the lower right corner. No matter what, if you are still OK (could be possible) when reaching here, you will be asked about your pain level. And even if you start by saying 0-8 many, many times, getting as many scalp massages, you will just get the same question, until you say 10, and then your eyes will be removed. But no matter what, if you are asked such a stupid question enough times you will surely at some point say something other than 0-10, and then you will die, as this answer will take you down the last path of the program (and only exit of the cycle according to the glitch mentioned in the title text), and this will end up with the program performing an autopsy on you, thus cutting you up and removing all organs etc. So no, you will not be able to go unscathed infinitely, and even if you kept saying 0-8 you would eventually die from thirst. ;) --[[User:Kynde|Kynde]] ([[User talk:Kynde|talk]]) 09:24, 21 December 2015 (UTC)&lt;br /&gt;
::Yes, normal oxygen saturation is 98-100% in air.  If it drops below 95% you will be in trouble, if it drops below 85% you're likely dead. [[User:Kev|Kev]] ([[User talk:Kev|talk]]) 09:54, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
This might also partially be a reference to machine learning, which Watson apparently uses: badly designed ML systems often build models which produce the expected results for the training data, but do something unexpected or wrong with real data. See [https://en.wikipedia.org/wiki/Overfitting#Machine_learning]. That said ... 'dissect doctor for parts' doesn't seem like a reasonable response to any training input ;) [[Special:Contributions/162.158.39.208|162.158.39.208]] 10:41, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
The noted &amp;quot;unrelated actions&amp;quot; aren't all entirely unrelated. The coughing blood one is interpreted backwards (so &amp;quot;is patient not coughing up blood because the patient is not here to do so?&amp;quot;), the vitamin D one is somewhat logical (vit D is part of the chain that converts calcium to bone, low vit D can cause bone loss, but high vit D is basically harmless), and the green fluid one is slightly sane but too vague (the logic appears to be that green fluid indicates severely infected and/or necrotic tissue, for which cauterizing might be a valid treatment step in extreme situations).  Weirdly specific might be a better header? [[Special:Contributions/141.101.106.197|141.101.106.197]] 11:57, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
So what happens if the skeleton has exactly the right number of bones? --[[Special:Contributions/162.158.153.71|162.158.153.71]] 12:32, 21 December 2015 (UTC)&lt;br /&gt;
: Indeed this case is not covered, thus making the algorithm faulty even on an abstract logical level. --[[User:Svenman|Svenman]] ([[User talk:Svenman|talk]]) 14:33, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
The Request organ donation/Remove organs part reminds me of the Live Organ Transplants segment in ''{{w|Monty Python's The Meaning of Life}}''. --[[User:Valepert|valepert]] ([[User talk:Valepert|talk]]) 12:53, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
100 could be a reference to 4 in binary (binary 100 = decimal 4, i.e. 4+ limbs / fewer than 4 limbs) [[Special:Contributions/141.101.99.39|141.101.99.39]] 12:59, 21 December 2015 (UTC)&lt;br /&gt;
:I believe you're correct. [[User:Mikemk|Mikemk]] ([[User talk:Mikemk|talk]]) 15:17, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I think GLaDOS is a descendant of this Watson. [[User:Mikemk|Mikemk]] ([[User talk:Mikemk|talk]]) 15:17, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
The part about squeezing and looking for the color of the ooze seems to reference Humorism. The colors match the four humors. [[Special:Contributions/162.158.91.188|162.158.91.188]] 15:31, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
I'm surprised he didn't make a Dr Watson joke/reference.--[[User:R0hrshach|R0hrshach]] ([[User talk:R0hrshach|talk]]) 17:33, 21 December 2015 (UTC)&lt;br /&gt;
&lt;br /&gt;
This algorithm certainly does not exit without the death of the patient; however, such a death can result from old age, as long as the patient can make it to the bottom-right infinite loop and continuously reports a number from 0-9 for pain. It IS possible to make it to that loop alive. Extremely low blood oxygen levels have been recorded in healthy Everest climbers, but the article I read gave the results in kilopascals, not in %, so I don't know how that converts. However, repeatedly reporting a pain level of 0-8 would result in continuous scalp massages, which may actually be considered pleasant.&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=1613:_The_Three_Laws_of_Robotics&amp;diff=106558</id>
		<title>1613: The Three Laws of Robotics</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=1613:_The_Three_Laws_of_Robotics&amp;diff=106558"/>
				<updated>2015-12-07T19:33:21Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: fixing indenting and spacing.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| number    = 1613&lt;br /&gt;
| date      = December 7, 2015&lt;br /&gt;
| title     = The Three Laws of Robotics&lt;br /&gt;
| image     = the_three_laws_of_robotics.png&lt;br /&gt;
| titletext = In ordering #5, self-driving cars will happily drive you around, but if you tell them to drive to a car dealership, they just lock the doors and politely ask how long humans take to starve to death.&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
{{incomplete|Very basic first draft, and I'm pretty inexperienced [[User:Halfhat|Halfhat]] ([[User talk:Halfhat|talk]])  09:38, 7 December 2015 (UTC) you should also check my awful spelling [[User:Halfhat|Halfhat]] ([[User talk:Halfhat|talk]]) 09:46, 7 December 2015 (UTC)}}&lt;br /&gt;
This comic explores alternative orderings of sci-fi author Isaac Asimov's famous {{w|Three Laws of Robotics}}, which are designed to prevent robots from harming humans, taking over the world, etc. These laws form the basis of a number of Asimov's works of fiction, including, most famously, &amp;quot;I, Robot&amp;quot;. The comic answers the generally unasked question: &amp;quot;Why are they in that order?&amp;quot;&lt;br /&gt;
&lt;br /&gt;
Here, the alternative orderings of the three laws are illustrated. Two of the possibilities are designated yellow (pretty bad) and three of them are designated red (&amp;quot;Hellscape&amp;quot;).&lt;br /&gt;
&lt;br /&gt;
;Ordering #1: This is the original ordering. If they are not allowed to harm humans, no harm will be done if they fall into the hands of a mass-murderer, and there can be no robot laws. So long as they do not harm humans, they must obey orders. Their own self-preservation is last.&lt;br /&gt;
;Ordering #2: The robots value their existence over their job, and so many would refuse to do their tasks. The silliness of this is portrayed in the accompanying image, where the robot laughs at the idea of doing what it was clearly built to do (explore Mars) because of the risk. This personification is augmented by the robot being switched on on Earth and ordered by the fleshy human known as [[Megan]]. The personification is humorous since it is a very nonhuman robot. &lt;br /&gt;
;Ordering #3: This puts obeying orders above not harming humans, which means anyone could send robots on a killing spree, resulting in a &amp;quot;Killbot Hellscape&amp;quot;. It should also be noted that humor is derived from the superlative nature of &amp;quot;Killbot Hellscape&amp;quot;, as well as its over-the-top accompanying image, where there are multiple mushroom clouds (not necessarily nuclear). It also appears there are no humans, only robots. &lt;br /&gt;
;Ordering #4: This would also result in much the same; the only difference here is that the robots would be willing to kill humans to protect themselves. &lt;br /&gt;
;Ordering #5: The penultimate ordering would result in an unpleasant world, though not a full Hellscape, where the robots would not only disobey to protect themselves, but also kill if necessary. The absurdity of this one is further demonstrated by the very unhuman robot happily doing repetitive mundane tasks but then threatening its user, the terrified relic of the age of men known as [[Cueball]]. &lt;br /&gt;
;Ordering #6: The last also results in a Hellscape, wherein robots not only kill in self-defense but will also go on killing sprees if ordered to, as long as they don't risk themselves. It is interesting to note that this case may not be correct. The writer seems to have missed the fact that an order to go kill a person or a robot might be dangerous, and thus most robots would likely disobey it in the interest of self-preservation. In fact, the robots might not do anything at all, because moving a moving part degrades it, and thus taking any action at all might violate self-preservation.&lt;br /&gt;
To summarise: There are two main distinctions between the 'normal' 3-laws ordering and the variations.  The first is where Self-protection is put ahead of Obedience.  This results in a world where robots may no longer be considered the useful workers for humanity that they are supposed to be.  The second is where Obedience supersedes Harmlessness, and means that robots are ''threats'' to humanity (although only if they are ever given the opportunity to be so).&lt;br /&gt;
&lt;br /&gt;
The former, alone, merely creates frustration, in one scenario.  The latter, alone, allows humans to use robots as their proxies for warfare, as per two scenarios - although the hellscape could be 'easily' avoided if nobody ever orders them to start (or continue) military action.  Both ''together'' upgrade both the frustration and warfare aspects, creating 'unstoppable killing machines' - our only hope is that nobody ''ever'' orders them into killing-mode, or gives them cause to consider themselves under threat, resulting in an uneasy peace on the perpetual edge of tipping over into war.&lt;br /&gt;
&lt;br /&gt;
The third 'law inversion', with Self-protection put ahead of Harmlessness, is necessarily inherent in the 'worst' Killbot Hellscape scenario, while it really only adds a nuance between the first two Hellscape scenarios, where the orders themselves are not explicitly anti-human.&lt;br /&gt;
&lt;br /&gt;
The title text further illustrates ordering #5 by noting that anyone wishing to trade in their self-driving car could be killed, despite this (currently) being a standard, mundane, and (mostly) risk-free activity.&lt;br /&gt;
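&lt;br /&gt;
Seen programmatically, the six orderings are just the 3! = 6 permutations of the same three rules, and the chart's color-coding falls out of two tests: does Obedience outrank Harmlessness, and does Self-preservation outrank Obedience? The following is a minimal Python sketch of that classification; the rule names and the classify function are illustrative assumptions, not anything defined in the comic.&lt;br /&gt;
&lt;br /&gt;
  from itertools import permutations&lt;br /&gt;
  &lt;br /&gt;
  LAWS = ('harmlessness', 'obedience', 'self-preservation')&lt;br /&gt;
  &lt;br /&gt;
  def classify(order):&lt;br /&gt;
      # Hellscape whenever robots can be ordered to harm humans.&lt;br /&gt;
      if order.index('obedience') &amp;lt; order.index('harmlessness'):&lt;br /&gt;
          return 'HELLSCAPE'&lt;br /&gt;
      # Merely unpleasant when robots put themselves before their orders.&lt;br /&gt;
      if order.index('self-preservation') &amp;lt; order.index('obedience'):&lt;br /&gt;
          return 'pretty bad'&lt;br /&gt;
      return 'balanced world'  # the original Asimov ordering&lt;br /&gt;
  &lt;br /&gt;
  for order in permutations(LAWS):&lt;br /&gt;
      print(' then '.join(order) + ': ' + classify(order))&lt;br /&gt;
&lt;br /&gt;
Running this prints one balanced world, two &amp;quot;pretty bad&amp;quot; worlds, and three Hellscapes, matching the comic's two yellow and three red panels.&lt;br /&gt;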
&lt;br /&gt;
==Transcript==&lt;br /&gt;
{{incomplete transcript}}&lt;br /&gt;
&lt;br /&gt;
{{comic discussion}}&lt;br /&gt;
&lt;br /&gt;
[[Category:Artificial Intelligence]]&lt;br /&gt;
[[Category:Comics featuring Cueball]]&lt;br /&gt;
[[Category:Comics featuring Megan]]&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	<entry>
		<id>https://www.explainxkcd.com/wiki/index.php?title=Blue_Eyes&amp;diff=95400</id>
		<title>Blue Eyes</title>
		<link rel="alternate" type="text/html" href="https://www.explainxkcd.com/wiki/index.php?title=Blue_Eyes&amp;diff=95400"/>
				<updated>2015-06-12T21:16:19Z</updated>
		
		<summary type="html">&lt;p&gt;Kashim: /* Another Solution? */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{comic&lt;br /&gt;
| date      = October 11, 2006&lt;br /&gt;
| title     = Blue Eyes&lt;br /&gt;
| image    = Blue Eyes.jpg&lt;br /&gt;
| before    = The Hardest Logic Puzzle in the World&lt;br /&gt;
| lappend   = blue_eyes.html&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;div class=&amp;quot;toclimit-3&amp;quot; style=&amp;quot;float:right; margin-left: 10px;&amp;quot;&amp;gt;__TOC__&amp;lt;/div&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Explanation==&lt;br /&gt;
XKCD's [http://xkcd.com/blue_eyes.html Blue Eyes] puzzle is a logic puzzle posted around the same time as comic [[169: Words that End in GRY]].  [[Randall]] calls it &amp;quot;The Hardest Logic Puzzle in the World&amp;quot; on its page; whether or not it really is the hardest is a matter of speculation.&lt;br /&gt;
&lt;br /&gt;
The page contains two comics.  On the top is [[82: Frame]], and at the bottom is [[37: Hyphen]]. These particular comics may have been chosen intentionally, as 82 involves a mind screw (and formal logic can be pretty mind-screwy to the uninitiated) and 37 involves linguistic ambiguity, which Randall has explicitly gone out of his way to avoid here (interestingly, [[169]] involves the infuriating ambiguity caused by misquoting riddles). That said, Randall could have simply picked those comics out of a hat to plug his own work (which he also does explicitly), and the date of release could be entirely coincidental.&lt;br /&gt;
&lt;br /&gt;
Randall cites &amp;quot;some dude on the streets in Boston named Joel&amp;quot; as his source for the puzzle (although he has rewritten it).&lt;br /&gt;
&lt;br /&gt;
==The Puzzle==&lt;br /&gt;
&lt;br /&gt;
  A group of people with assorted eye colors live on an island. They are all perfect logicians -- if a conclusion can be logically deduced, they will do it instantly. No one knows the color of their eyes. Every night at midnight, a ferry stops at the island. Any islanders who have figured out the color of their own eyes then leave the island, and the rest stay. Everyone can see everyone else at all times and keeps a count of the number of people they see with each eye color (excluding themselves), but they cannot otherwise communicate. Everyone on the island knows all the rules in this paragraph.&lt;br /&gt;
  &lt;br /&gt;
  On this island there are 100 blue-eyed people, 100 brown-eyed people, and the Guru (she happens to have green eyes). So any given blue-eyed person can see 100 people with brown eyes and 99 people with blue eyes (and one with green), but that does not tell him his own eye color; as far as he knows the totals could be 101 brown and 99 blue. Or 100 brown, 99 blue, and he could have red eyes.&lt;br /&gt;
  &lt;br /&gt;
  The Guru is allowed to speak once (let's say at noon), on one day in all their endless years on the island. Standing before the islanders, she says the following:&lt;br /&gt;
  &lt;br /&gt;
  &amp;quot;I can see someone who has blue eyes.&amp;quot;&lt;br /&gt;
  &lt;br /&gt;
  Who leaves the island, and on what night?&lt;br /&gt;
&lt;br /&gt;
==Solution==&lt;br /&gt;
&lt;br /&gt;
Randall's solution is at [http://xkcd.com/solution.html xkcd.com/solution.html].&lt;br /&gt;
&lt;br /&gt;
Here are some observations that help simplify the problem.&lt;br /&gt;
&lt;br /&gt;
No one without blue eyes will ever leave the island, because they are given no information that can allow them to determine which non-blue eye color they have.  The presence of the non-blue-eyed people is not relevant at all.  We can ignore them.  All that matters is when the blue eyed people learn that they actually are blue-eyed.&lt;br /&gt;
&lt;br /&gt;
There are two ways in which blue-eyed people might leave the island.  A lone blue-eyed person might leave on the first night because she can see that no one else has blue eyes, so the guru must have been talking about her.  Or an accompanied blue-eyed person can leave on a later night, after noticing that other blue-eyed people have behaved in a way that indicates that they have noticed that her eyes are blue too.&lt;br /&gt;
&lt;br /&gt;
The problem is symmetrical for all blue-eyed people, so they will either all leave at once or all stay forever.&lt;br /&gt;
&lt;br /&gt;
'''Theorem:'''  If there are N blue-eyed people, they will all leave on the Nth night.&lt;br /&gt;
&lt;br /&gt;
'''Dual Logic.'''&lt;br /&gt;
&lt;br /&gt;
Blue-eyed people leave on the 100th night.&lt;br /&gt;
&lt;br /&gt;
If you have blue eyes, then you can see 99 blue-eyed and 100 brown-eyed people (and one green-eyed, the Guru).&lt;br /&gt;
If those 99 blue-eyed people don't all leave on the 99th night, then you know you have blue eyes, and you will leave on the 100th night knowing so.&lt;br /&gt;
&lt;br /&gt;
'''Intuitive Proof.'''&lt;br /&gt;
&lt;br /&gt;
Imagine a simpler version of the puzzle in which, on day #1 the guru announces that she can see at least 1 blue-eyed person, on day #2 she announces that she can see at least 2 blue eyed people, and so on until the blue-eyed people leave. &lt;br /&gt;
&lt;br /&gt;
So long as the guru's count of blue-eyed people doesn't exceed your own, then her announcement won't prompt you to leave.  But as soon as the guru announces having seen more blue-eyed people than you've seen yourself, then you'll know your eyes must be blue too, so you'll leave that night, as will all the other blue-eyed people.  Hence our theorem obviously holds in this simpler puzzle.&lt;br /&gt;
&lt;br /&gt;
But this &amp;quot;simpler&amp;quot; puzzle is actually perfectly equivalent to the original puzzle.  If there were just one blue-eyed person, she would leave on the first night, so if nobody leaves on the first night, then everybody will know there are at least two blue-eyed people, so there's no need for the guru to announce this on the second day.  Similarly, if there were just two blue-eyed people, they'd then recognize this and leave on the second night, so if nobody leaves on the second night, then there must be a third blue-eyed person inspiring them to stay, so there's no need for the guru to announce this on the third day.  And so on...  The guru's announcements on the later days just tell people things they already could have figured out on their own.&lt;br /&gt;
&lt;br /&gt;
It's obvious that our theorem holds for the &amp;quot;simpler&amp;quot; puzzle, and this &amp;quot;simpler&amp;quot; puzzle is perfectly equivalent to the original puzzle, so our theorem must hold for the original puzzle too.&lt;br /&gt;
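&lt;br /&gt;
For small numbers, this version is easy to check mechanically. The following Python sketch (an illustration of the argument above, not part of Randall's solution, with hypothetical names) simulates the escalating announcements: a blue-eyed islander who can see s blue-eyed people leaves on the first day the announced count exceeds s, which is always day s+1 = N.&lt;br /&gt;
&lt;br /&gt;
  def simpler_puzzle_departure(num_blue):&lt;br /&gt;
      # Day k: the guru announces she can see at least k blue-eyed people.&lt;br /&gt;
      seen = num_blue - 1   # a blue-eyed islander sees all but herself&lt;br /&gt;
      day = 1&lt;br /&gt;
      while day &amp;lt;= seen:    # the announcement doesn't yet exceed my own count&lt;br /&gt;
          day += 1&lt;br /&gt;
      return day            # day == seen + 1 == num_blue&lt;br /&gt;
  &lt;br /&gt;
  for n in (1, 2, 3, 100):&lt;br /&gt;
      print(n, simpler_puzzle_departure(n))   # prints n alongside n itself&lt;br /&gt;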
&lt;br /&gt;
'''Formal Proof.'''&lt;br /&gt;
&lt;br /&gt;
To prove this more formally, we can use mathematical induction.  To do that, we'll need to show that our theorem holds for the base case of N=1, and we'll need to show that, for any given X, ''if'' we assume that the theorem holds for any value of N less than X, then it will also hold for N=X.  If we can show both these things, then we'll know the theorem is true for N=1 (the base case), for N=2 (using the inductive step once), for N=3 (using the inductive step a second time) and so on, for whatever value of N you want.&lt;br /&gt;
&lt;br /&gt;
Base case:  N=1.  If there is just one blue-eyed person, she will see that no one else has blue eyes, know that the guru was talking about her, and leave on the first night.&lt;br /&gt;
&lt;br /&gt;
Inductive step:  Here we assume that the theorem holds for any value of N less than some arbitrary X (integer greater than 1), and we need to show that it would then hold for N=X too.  If there are X blue-eyed people, then each will reason as follows:  &amp;quot;I can see that X-1 other people have blue eyes, so either just those X-1 people have blue eyes, or X people do (them plus me).  If there are just X-1 people with blue eyes, then by our assumption, they'll all leave on night number X-1.  If they don't all leave on night number X-1, then that means that there is an Xth blue-eyed person in addition to the X-1 that I can see, namely me.  So if they all stay past night number X-1, then I'll know I have blue eyes, so I'll leave on night number X.  Of course, they'll also be in exactly the same circumstance as me, so they'll leave on night number X too.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
This suffices to prove our theorem.  The base case tells us the theorem holds for N=1.  That together with the inductive step tells us that it therefore holds for N=2, and that together with the inductive step again tells us that it holds for N=3, and so on...  In particular, it holds for the case the original puzzle asked about, N=100, so we get the conclusion that the 100 blue-eyed people will leave on the 100th night.&lt;br /&gt;
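&lt;br /&gt;
The inductive structure can also be transcribed directly as a recursion. This is only an illustrative sketch with a hypothetical function name: an islander who sees no blue eyes leaves on night 1, and an islander who sees n blue-eyed people leaves one night after the night on which those n people would have left were she not blue-eyed herself.&lt;br /&gt;
&lt;br /&gt;
  def departure_night(visible_blue):&lt;br /&gt;
      # Base case: I see no blue eyes, so the guru meant me; I leave on night 1.&lt;br /&gt;
      if visible_blue == 0:&lt;br /&gt;
          return 1&lt;br /&gt;
      # Inductive step: if the visible_blue people I can see don't all leave&lt;br /&gt;
      # on their night, the extra blue-eyed person must be me, so I leave&lt;br /&gt;
      # one night later.&lt;br /&gt;
      return departure_night(visible_blue - 1) + 1&lt;br /&gt;
  &lt;br /&gt;
  print(departure_night(99))   # a blue-eyed islander sees 99 others: night 100&lt;br /&gt;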
&lt;br /&gt;
==Randall's thought-provoking questions==&lt;br /&gt;
&lt;br /&gt;
After giving his solution, Randall posed three questions for further thought about the puzzle.  (I'll answer them in a different order than he asked.)&lt;br /&gt;
&lt;br /&gt;
'' '''Question 2.''' Each person knows, from the beginning, that there are no less than 99 blue-eyed people on the island. How, then, is considering the 1 and 2-person cases relevant, if they can all rule them out immediately as possibilities?''&lt;br /&gt;
&lt;br /&gt;
Blue-eyed people can't see their own faces, so blue-eyed people can see one less blue-eyed face than non-blue-eyed people can.  Even though I can see that there are at least 99 blue-eyed people, I don't know that they can see that, so I need to imagine people who see only 98, who would base their actions in part on imagining people who can see only 97, who would base their actions in part on imagining people who can see only 96, and so on...  All the levels are relevant.  (It's like [https://www.youtube.com/watch?v=U_eZmEiyTo0 the Princess Bride scene] where Vizzini is trying to think about what Westley would choose in part based upon Westley thinking about what Vizzini would choose in part based upon...  &amp;quot;So clearly I cannot choose the one in front of me!&amp;quot;)  Each layer of thinking about what someone else might be thinking about can decrement by 1 the number of blue-eyed people visible to the lattermost imagined person, so it turns out that even the base case with N=1 blue-eyed person is relevant.  As the days go by, some of the more far-fetched &amp;quot;he might be thinking that I might be thinking that he might be thinking that I might be thinking that...&amp;quot; hypotheses get ruled out.  But it's only after night N-1 that the blue-eyed people rule out all the possibilities in which they have brown eyes, whereas the brown-eyed people only learn on night number N that they don't have blue eyes.&lt;br /&gt;
&lt;br /&gt;
It might help to think of all the different situations people might be in.  (Remember that brown-eyed people are always situated where they can see one more blue-eyed face than blue-eyed people can.)&lt;br /&gt;
&lt;br /&gt;
  '''Situation 0.''' If I see 0 blue-eyed people, I can leave right after the announcement on night 1.&lt;br /&gt;
  '''Situation 1.''' If I see 1 blue-eyed person, then she might be in situation 0 and about to leave on night 1; or else she might be in situation 1 just like me, in which case we'll both leave together on night 2.&lt;br /&gt;
  '''Situation 2.''' If I see 2 blue-eyed people, they might each be in situation 1 watching to see whether anyone in situation 0 leaves the first night (I know nobody will leave that night, but they wouldn't know this), in which case they would leave together on night 2; or else they might be in situation 2 just like me, in which case we'll all leave together on night 3.&lt;br /&gt;
  :&lt;br /&gt;
  :&lt;br /&gt;
  '''Situation N.''' If I see N blue-eyed people, they might be in situation N-1 watching to see whether any people in situation N-2 leave on night N-1 (I know nobody will leave that night, but they wouldn't know this), in which case they would leave together on night N; or else they might be in situation N just like me, in which case we'll all leave together on night N+1.&lt;br /&gt;
  :&lt;br /&gt;
  :&lt;br /&gt;
&lt;br /&gt;
Even though I start out in situation 99, I need to worry that the blue-eyed people might be in situation 98, so I need to wait long enough for people in situation 98 to figure out what's going on, and then see whether they act like they are indeed in situation 98.  But if they're in situation 98, then they're worrying about whether all the blue-eyed people might be in situation 97, so they're going to need to wait long enough for people in situation 97 to figure out what's going on.  Of course, that requires waiting long enough for people in situation 96 to figure out what's going on, and so on, down all the way to situation 0.  All the levels are relevant, and it takes a separate day to eliminate each level, which is why the whole process takes N days.&lt;br /&gt;
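&lt;br /&gt;
To make that asymmetry concrete, here is a tiny sketch (illustrative only, with hypothetical names) of when an islander learns something, as a function of how many blue-eyed faces they can see:&lt;br /&gt;
&lt;br /&gt;
  def night_of_discovery(sees_blue, am_blue):&lt;br /&gt;
      # A blue-eyed islander sees N-1 blue faces; when those N-1 fail to&lt;br /&gt;
      # leave on night N-1, she deduces her own color and leaves on night N.&lt;br /&gt;
      if am_blue:&lt;br /&gt;
          return sees_blue + 1   # = N: deduces 'my eyes are blue' and leaves&lt;br /&gt;
      # A non-blue islander sees all N blue faces; when they all leave on&lt;br /&gt;
      # night N, she learns only that her own eyes are not blue.&lt;br /&gt;
      return sees_blue           # = N: learns 'not blue', but never her color&lt;br /&gt;
  &lt;br /&gt;
  print(night_of_discovery(99, True))    # blue-eyed: 100&lt;br /&gt;
  print(night_of_discovery(100, False))  # brown-eyed: 100&lt;br /&gt;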
&lt;br /&gt;
'' '''Question 3.''' Why do they have to wait 99 nights if, on the first 98 or so of these nights, they're simply verifying something that they already know?''&lt;br /&gt;
&lt;br /&gt;
Consider an analogy.  I've heard that miners used to take canaries down into mines because canaries pass out more quickly in poor air than miners do.  Suppose you know the canary will do fine for 98 or so seconds, and then pass out if the air is bad.  As you watch the canary for those 98 seconds, there's a sense in which you're just verifying something you already know (it'll do fine), but it seems more accurate to say that your best detector for the quality of the air takes 98 seconds to give you a reading, and you're waiting 98 seconds to see what that reading is.&lt;br /&gt;
&lt;br /&gt;
When the blue-eyed people wait 98 or so days to leave, that's because their best available detector of their own eye-color takes 98 or so days to give a reading.  (This detector involves watching what the other blue-eyed people do, and of course they themselves are waiting on a detector that takes 97 or so days to yield its result...)  There's a sense in which they're &amp;quot;simply verifying something that they already know&amp;quot;, but it seems more accurate to say that they're waiting for their best available detector of their own eye-color to deliver its reading. &lt;br /&gt;
&lt;br /&gt;
'' '''Question 1.''' What is the quantified piece of information that the Guru provides that each person did not already have?''&lt;br /&gt;
&lt;br /&gt;
Before the Guru speaks, the hypothetical chain of A imagining B imagining C imagining D... imagining Z seeing some number of blue-eyed people cannot terminate uniquely: the chain can bottom out at an imagined islander Z who sees no blue-eyed people at all, and who therefore cannot tell whether their own eyes are blue. This means it is not {{w|Common knowledge (logic)|common knowledge}} that there are blue eyes. Once the Guru makes her pronouncement, it becomes common knowledge, so every chain of reasoning must terminate at at least 1 blue-eyed person, and the Z above would have to conclude that they themselves had blue eyes. From then on, every midnight that everyone sees nobody leaving on the ferry, the commonly known minimum number of blue-eyed people increments by 1.&lt;br /&gt;
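&lt;br /&gt;
One way to make this concrete is to model the candidate &amp;quot;worlds&amp;quot; (the possible totals of blue-eyed islanders) and knock them out one by one. This Python sketch follows the standard common-knowledge analysis rather than anything on Randall's page, and the names are hypothetical:&lt;br /&gt;
&lt;br /&gt;
  def common_knowledge_night(num_blue):&lt;br /&gt;
      # Worlds still compatible with common knowledge: possible totals of&lt;br /&gt;
      # blue-eyed islanders. The Guru's statement eliminates world 0, and&lt;br /&gt;
      # each uneventful midnight eliminates the smallest surviving world.&lt;br /&gt;
      worlds = set(range(1, num_blue + 1))   # after the Guru: at least 1&lt;br /&gt;
      night = 1&lt;br /&gt;
      while min(worlds) != num_blue:         # blue-eyed can't yet be sure&lt;br /&gt;
          worlds.discard(min(worlds))        # nobody left: that world is out&lt;br /&gt;
          night += 1&lt;br /&gt;
      return night                           # only the true world remains&lt;br /&gt;
  &lt;br /&gt;
  print(common_knowledge_night(100))   # 100&lt;br /&gt;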
&lt;br /&gt;
Stated another way, there is only one stable set of beliefs that would allow so many blue-eyed people to remain on the island indefinitely: each blue-eyed person must believe not only that they themselves have brown eyes, but also that every other blue-eyed person incorrectly believes the same of themselves.  Logic reduces this to &amp;quot;all blue-eyes believe that all blue-eyes have brown eyes&amp;quot;.  The Guru eliminates that particular possibility.&lt;br /&gt;
&lt;br /&gt;
== Another Solution? ==&lt;br /&gt;
&lt;br /&gt;
Each person, after coming to the perfectly logical conclusion that grouping works, pushes the other people they see into groups of matching eye color and out of the wrong groups. This eventually converges on everyone being pushed into the right group, where each person can see that everyone around them shares a single eye color and deduce that it is their own. Everybody but the Guru can leave on the first night, because nobody else has the same eye color as the Guru. &lt;br /&gt;
&lt;br /&gt;
However, this could be seen as &amp;quot;communication&amp;quot; and would therefore violate the rule that says &amp;quot;they cannot otherwise communicate&amp;quot;. In fact, one must assume that &amp;quot;leaving the island&amp;quot; happens by way of some kind of instantaneous teleportation, and not by conventional means like a boat. Otherwise, the act of lining up for the boat would itself be a form of communication, and &amp;quot;no one is lining up for the boat yet&amp;quot; would become an additional piece of shared information that could potentially be used to shorten the timeline. This is why the note &amp;quot;It doesn't involve people doing something simple like creating a sign language or doing genetics&amp;quot; is included. Even an imperfect logician can tell, if someone points to their eyes and then points to the dirt, that their eyes are brown - or blue if they point to the sky, or green if they point to the grass.&lt;br /&gt;
&lt;br /&gt;
A way to improve this problem might be to say that each person on the island is in a tiny cabin, and each day they are given a book containing pictures of the faces of the other people who are still on the island. Only once they have figured out their own eye color can they exit their cabin. This would make it clearer that the people cannot communicate with each other in any way.&lt;br /&gt;
&lt;br /&gt;
[[Category:Meta]]&lt;br /&gt;
[[Category:Extra Comics]]&lt;/div&gt;</summary>
		<author><name>Kashim</name></author>	</entry>

	</feed>