Talk:2429: Exposure Models
Note that "alignment" makes it sound as if the AIs would end up being evil. They wouldn't be evil; they would just be fulfilling their purpose, ignoring anything that isn't in their program. So it's kinda dangerous if we don't train the machine to be careful and not kill someone just because we don't know how it could do it ... -- [[User:Hkmaly|Hkmaly]] ([[User talk:Hkmaly|talk]]) 02:31, 26 February 2021 (UTC)