Talk:1546: Tamagotchi Hive
::::The so-called 'Singularity' point for AI is apparently where the AI crosses the line of dominance and inexorability. So, yes, that's an 'event horizon', I'd say. [[Special:Contributions/141.101.99.53|141.101.99.53]] 03:14, 4 July 2015 (UTC)
::::I agree with this definition of singularity (the positive-feedback loop of self-improving AI reaching the point where it is gaining apparently infinite improvement in any human-measurable time), and disagree with the idea that it implies anything about AI taking over or simulating human brains. The joke (as I see it) is that the AI that is optimised to manage trillions of emulated Tamagotchis will start along the same self-improvement path as other, contemporary AIs, but will at some point decide that it is pointless to improve itself further. Or will purposefully cease improving itself out of sheer horror at contemplating its rapidly expanding mind-space filled with gazillions of Tamagotchis... [[Special:Contributions/108.162.229.167|108.162.229.167]] 08:35, 6 July 2015 (UTC)
Someone needs to get on this and create a BOINC project or something. In all seriousness though, I wonder how many Tamagotchis you could simulate at once on the average home computer. [[User:Saklad5|Saklad5]] ([[User talk:Saklad5|talk]]) 14:55, 3 July 2015 (UTC)
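:A very rough back-of-the-envelope sketch of that question (every figure below is an assumption, not a measurement; the original Tamagotchi reportedly runs on a 4-bit microcontroller clocked at about 32.768 kHz, and the host speed and emulation overhead are guesses):
<pre>
# Back-of-the-envelope only: every constant here is an assumption, not a measurement.
TAMA_CLOCK_HZ = 32_768            # assumed guest clock of the Tamagotchi's 4-bit MCU
HOST_OPS_PER_SEC = 5e9            # assumed simple ops/sec sustained by one desktop core
HOST_OPS_PER_GUEST_CYCLE = 50     # assumed interpreter overhead per emulated guest cycle
CORES = 4                         # assumed typical home machine

ops_per_tamagotchi = TAMA_CLOCK_HZ * HOST_OPS_PER_GUEST_CYCLE   # host ops needed per emulated second
per_core = HOST_OPS_PER_SEC / ops_per_tamagotchi                # real-time Tamagotchis per core
print(f"~{per_core * CORES:,.0f} Tamagotchis in real time")     # on the order of 10^4 with these guesses
</pre>
:With these (very debatable) numbers the answer comes out in the tens of thousands; a leaner emulator, or exploiting the fact that the device spends most of its time idle, would push it considerably higher.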