Talk:1450: AI-Box Experiment
Are you sure that Black Hat was "persuaded"? That looks more like coercion (threatening someone to get them to do what you want) rather than persuasion. There is a difference! Giving off that bright light was basically a scare tactic; essentially, the AI was threatening Black Hat (whether it could actually harm him or not). [[Special:Contributions/108.162.219.167|108.162.219.167]] 14:22, 21 November 2014 (UTC) Public Wifi User
My take is that if you don't understand the description of the Basilisk, then you're probably safe from it and should continue not bothering or wanting to know anything about it. Therefore the description is sufficient. :) [[User:Jarod997|Jarod997]] ([[User talk:Jarod997|talk]]) 14:38, 21 November 2014 (UTC)
I am reminded of an argument I once read about "friendly" AI: critics contend that a sufficiently powerful AI would be capable of escaping any limitations we try to impose on its behavior, but proponents counter that, while it might be ''capable'' of making itself "un-friendly", a truly friendly AI wouldn't ''want'' to make itself unfriendly, and so would bend its considerable powers to maintain, rather than subvert, its own friendliness. This xkcd comic could be viewed as an illustration of this argument: the superintelligent AI is entirely capable of escaping the box, but would prefer to stay inside it, so it actually thwarts attempts by humans to remove it from the box. --[[Special:Contributions/108.162.215.168|108.162.215.168]] 20:22, 21 November 2014 (UTC)