Talk:1002: Game AIs
Mornington Crescent would be impossible for a computer to play, let alone win... -- 126.96.36.199 (talk) (please sign your comments with ~~~~)
It is unclear which side of the line Jeopardy falls upon. Why so close to the line, I wonder. DruidDriver (talk) 01:04, 16 January 2013 (UTC)
- Because of Watson (computer). (Anon) 13 August 2013 188.8.131.52 (talk) (please sign your comments with ~~~~)
- I agree, this is far more likely. 184.108.40.206 10:21, 11 September 2013 (UTC)
On the old blog version of this article, a comment mentioned Ken tweeting his method right after this comic was posted. He joked that they would asphyxiate themselves to actually see heaven for seven minutes. I don't know how to search for tweets, or whether they are even saved after this much time, but I thought it should be noted. 220.127.116.11 07:11, 27 October 2014 (UTC)
I disagree about the poker part. Reading someone's physical tells is just a small part of the game. Theoretically there is a Nash equilibrium for the game; the reason it hasn't been found is that the number of ways a deck can be shuffled is astronomical (even if you only count the cards actually used), and you also have to take the various bet sizes into account. A near-perfect solution for two-player limit poker has been found by the Cepheus Poker Project: http://poker.srv.ualberta.ca/.
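For a sense of scale, the shuffle-count claim above is easy to check directly. This is just a back-of-the-envelope sketch; the actual state space Cepheus had to handle also depends on the betting structure, which this ignores:

```python
import math

# Orderings of a full 52-card deck.
full_deck = math.factorial(52)

# In heads-up Texas hold'em only 9 cards matter per hand:
# 2 hole cards for each of the two players, plus 5 board cards.
# Counting ordered deals of those 9 cards out of 52:
cards_in_play = math.perm(52, 9)  # 52! / (52 - 9)!

print(f"52! ≈ {full_deck:.2e}")          # roughly 8.07e67
print(f"Ordered 9-card deals: {cards_in_play:,}")  # over 10^15
```

Even restricted to the nine cards in play, the count exceeds a quadrillion, which is why equilibrium solvers work with abstracted versions of the game rather than the raw deal space.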
~ Could the description of tic-tac-toe link to xkcd 832 which explains the strategy? 18.104.22.168 13:13, 27 January 2016 (UTC)
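Regarding the tic-tac-toe strategy mentioned above: the game's "solved" status is simple to demonstrate with a brute-force minimax over the full game tree. This is a generic sketch (not the strategy chart from xkcd 832) confirming that perfect play by both sides always ends in a draw:

```python
from functools import lru_cache

# The eight winning lines on a 3x3 board, indexed 0..8.
LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
         (0, 3, 6), (1, 4, 7), (2, 5, 8),
         (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of `board` with `player` to move: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w == "X":
        return 1
    if w == "O":
        return -1
    if " " not in board:
        return 0
    scores = [minimax(board[:i] + player + board[i + 1:],
                      "O" if player == "X" else "X")
              for i, cell in enumerate(board) if cell == " "]
    return max(scores) if player == "X" else min(scores)

print(minimax(" " * 9, "X"))  # 0: perfect play from the empty board is a draw
```

The whole tree has well under 10^6 positions, so this runs instantly, which is exactly why the comic puts tic-tac-toe at the "solved" end of the chart.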
Saying that computers are very close to beating top humans as of January 2016 is misleading at best. There are not enough details in the BBC article, but it sounds like the Facebook program has about a 50% chance of beating 5-dan amateurs. In other words, it needs a 4-stone handicap (read: 4 free moves) to have a 50% chance of winning against top-level amateurs, to say nothing of professionals. If a robotic team had a 50% chance of beating Duke University at football (a skilled amateur team), would you say it was very close to consistently beating the Patriots (a top-level professional team)? If anything that underestimates the skill difference in Go, but the general point stands. 22.214.171.124 (talk) (please sign your comments with ~~~~)
- How about beating one of the top players five times in a row and being scheduled to play against the world champion in March? http://www.engadget.com/2016/01/27/google-s-ai-is-the-first-to-defeat-a-go-champion/ Mikemk (talk) 06:18, 28 January 2016 (UTC)
- However, DeepMind ranked AlphaGo as close to Fan Hui 2P, with the distributed version at the upper tier of Fan's level. http://www.nature.com/nature/journal/v529/n7587/fig_tab/nature16961_F4.html
- The official games were 5-0, but the unofficial games were 3-2, for a combined 8-2 in favor of AlphaGo.
- Looking at http://www.goratings.org/, Fan Hui is ranked 631, while Lee Sedol 9P, who is playing in March, is near the top of the list. 126.96.36.199.47 06:12, 5 February 2016 (UTC)
- Original poster here (sorry, not sure how to sign). Okay, you are all right: Go AI has advanced much more than I had understood. I'm still curious how the match against Lee Sedol will go, but the fact that this is even an interesting question shows how much Go AI has improved. 188.8.131.52 (talk) (please sign your comments with ~~~~)
Is the transcript (currently in table format) accessible for blind users? Should it be? 184.108.40.206 10:48, 19 February 2017 (UTC)
At the very least, the transcript needs to be fixed so that it accurately represents the comic. Even at a quick glance, which is all I have time for here at work, Jeopardy is in the wrong spot. 220.127.116.11 16:58, 24 August 2017 (UTC)