Talk:1450: AI-Box Experiment
: What would "persuasion by a super-intelligent AI" look like? Randall presumably doesn't have a way to formulate an actual super-intelligent argument to write into the comic. Glowy special effects are often used as a visual shorthand for "and then a miracle occurred". --[[Special:Contributions/108.162.215.168|108.162.215.168]] 20:43, 21 November 2014 (UTC)
My take is that if you don't understand the description of the Basilisk, then you're probably safe from it and should continue not bothering or wanting to know anything about it. Therefore the description is sufficient. :) [[User:Jarod997|Jarod997]] ([[User talk:Jarod997|talk]]) 14:38, 21 November 2014 (UTC)
I am reminded of an argument I once read about "friendly" AI: critics contend that a sufficiently powerful AI would be capable of escaping any limitations we try to impose on its behavior, but proponents counter that, while it might be ''capable'' of making itself "un-friendly", a truly friendly AI wouldn't ''want'' to make itself unfriendly, and so would bend its considerable powers to maintain, rather than subvert, its own friendliness. This xkcd comic could be viewed as an illustration of this argument: the superintelligent AI is entirely capable of escaping the box, but would prefer to stay inside it, so it actually thwarts attempts by humans to remove it from the box. --[[Special:Contributions/108.162.215.168|108.162.215.168]] 20:22, 21 November 2014 (UTC)