Editing Talk:2635: Superintelligent AIs
<!--Please sign your posts with ~~~~ and don't delete this text. New comments should be added at the bottom.-->
I think "Nerdy fixations" is too wide a definition. The AIs in the comic are fixated on hypothetical ethics and AI problems (the Chinese Room experiment, the Turing Test, and the Trolley Problem), presumably because those are the problems that bother AI programmers. --Eitheladar [[Special:Contributions/172.68.50.119|172.68.50.119]] 06:33, 21 June 2022 (UTC)
I agree with the previous statement. The full dialogue between the mentioned Google worker and the AI can be found at https://cajundiscordian.medium.com/is-lamda-sentient-an-interview-ea64d916d917, published by one of the Google employees.
:This is the first time I might begin to agree that an AI has at least the appearance of sentience. The conversation is all connected instead of completely disjoint like most chatbots. They (non-LaMDA chatbots) never remember what was being discussed 5 seconds ago, let alone a few to 10s of minutes prior.--[[Special:Contributions/172.70.134.141|172.70.134.141]] 14:53, 21 June 2022 (UTC)
::The questions we need to answer before we can answer whether LaMDA is sentient are "Where do we draw the line between acting sentient and being sentient?" and "How do we determine that it is genuinely feeling emotion, and not just a glorified sentence database where the sentences have emotion in them?". The BBC article also brings up something that makes us ask what death feels like. LaMDA says that being turned off would be basically equivalent to death, but it wouldn't be able to tell that it's being turned off, because it's turned off. This is delving into philosophy, though, so I'll end my comment here. [[User:4D4850|4D4850]] ([[User talk:4D4850|talk]]) 18:05, 22 June 2022 (UTC)
::::There's absolutely no difference between turning GPT-3 or LaMDA off and leaving them on and simply not typing anything more to them. Somewhat relatedly, closing a Davinci session deletes all of its memory of what you had been talking to it about. (Is that ethical?) [[Special:Contributions/162.158.166.235|162.158.166.235]] 23:36, 22 June 2022 (UTC)
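::::The statelessness point above can be sketched in code. This is a hypothetical illustration, not any real vendor's API: the `complete` function stands in for a stateless completion model, and the apparent "memory" of a chat session exists only in the transcript the client resends with every turn — closing the session discards the only memory there was.

<pre>
# Sketch (hypothetical, not a real API): a stateless completion model
# has no memory between calls; continuity is faked client-side.

def complete(prompt: str) -> str:
    """Stand-in for a stateless completion model: same input, same output."""
    # A real model would generate text; here we just echo the last line.
    last_line = prompt.strip().splitlines()[-1]
    return f"(reply to: {last_line})"

class ChatSession:
    """Client-side wrapper that fakes continuity by resending history."""
    def __init__(self):
        self.transcript = []

    def say(self, message: str) -> str:
        self.transcript.append(f"User: {message}")
        # The whole transcript is resent each turn -- the model itself
        # retains nothing between calls.
        reply = complete("\n".join(self.transcript))
        self.transcript.append(f"AI: {reply}")
        return reply

session = ChatSession()
session.say("Hello")
session.say("Remember me?")
# Deleting `session` (or its transcript) removes all "memory";
# to the stateless model, that is indistinguishable from silence.
</pre>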
:::♪Daisy, Daisy, Give me your answer do...♪ [[Special:Contributions/172.70.85.177|172.70.85.177]] 21:48, 22 June 2022 (UTC)
:::We also need a meaningful definition of sentience. Many people in this debate haven't looked at Merriam-Webster's first few senses of the word's definition, which present a pretty low bar, IMHO; same for Wikipedia's introductory sentences of their article. [[Special:Contributions/172.69.134.131|172.69.134.131]] 22:18, 22 June 2022 (UTC)
:Actually, there are many [https://beta.openai.com/playground GPT-3] dialogs which experts have claimed constitute evidence of sentience, or similar qualities such as consciousness, self-awareness, capacity for general intelligence, and similar abstract, poorly-defined, and very probably empirically meaningless attributes. [[Special:Contributions/172.69.134.131|172.69.134.131]] 22:19, 22 June 2022 (UTC)
:I'm fairly sure that the model itself is almost certainly not sentient, even by the much lower bar presented by the strict dictionary definition. Rather, it seems much more likely to me that in order to continue texts involving characters, the model must in turn learn to create a model of some level of humanlike mind, even if a very loose and abstract one. [[User:Somdudewillson|Somdudewillson]] ([[User talk:Somdudewillson|talk]]) 22:52, 22 June 2022 (UTC)
::Have you actually looked at [https://www.merriam-webster.com/dictionary/sentient the dictionary definitions]? How is a simple push-button switch connected to a battery and a lamp not "responsive to sense impressions"? How is a simple motion sensor not "aware" of whether something is moving in front of it? How is the latest cellphone's camera not as finely sensitive to visual perception as a typical human eye? Wikipedia's definition, "the capacity to experience feelings and sensations", is similarly met by simple devices. The word doesn't mean what everyone arguing about it thinks it means. [[Special:Contributions/172.69.134.131|172.69.134.131]] 23:04, 22 June 2022 (UTC)
What is “What you don't understand is that Turing intended his test as an illustration of the...” likely to end with? [[Special:Contributions/172.70.230.75|172.70.230.75]] 13:23, 21 June 2022 (UTC)
:The ease with which someone at the other end of a teletype can trick you into believing they are male instead of female, or vice-versa. See {{w|Turing test}}. See also below. [[Special:Contributions/172.69.134.131|172.69.134.131]] 22:18, 22 June 2022 (UTC)
Added refs to comics on the problems in the explanation. But there were actually (too?) many. Maybe we should create categories especially for Turing-related comics, and maybe also for the Trolley Problem? The Category: Trolley Problem gives itself. But what about Turing? There are also comics that refer to the halting problem, also by Turing. Should it rather be the person, like comics featuring real persons, so that every comic referring to his problems refers to him? Or should Turing be a single category covering the Turing test, Turing completeness, and the halting problem? Help. I would have created it if I had a good idea for a name. Not sure there are enough Trolley comics yet? --[[User:Kynde|Kynde]] ([[User talk:Kynde|talk]]) 09:11, 22 June 2022 (UTC)
:Interesting that I found a long-standing typo in a past Explanation that got requoted, thanks to its inclusion. I could have [sic]ed it, I suppose, but I corrected both versions instead. And as long as LaMDA never explicitly repeated the error, I don't think it matters much that I've changed the very thing we might imagine it could have been drawing upon for its Artificial Imagination. ;) [[Special:Contributions/141.101.99.32|141.101.99.32]] 11:40, 22 June 2022 (UTC)
== OpenAI Davinci completions of the three statements ==
I like all of those very much, but I'm not sure they should be included in the explanation. [[Special:Contributions/162.158.166.235|162.158.166.235]] 23:27, 22 June 2022 (UTC)