1968: Robot Future

==Explanation==
Most science fiction stories that involve sentient {{w|Artificial intelligence}} (AI) revolve around the idea that the destruction and/or imprisonment of the human race will soon follow (e.g. Skynet from ''{{w|Terminator (franchise)|Terminator}}'', ''{{w|I, Robot}}'' and ''{{w|The Matrix (franchise)|The Matrix}}'').  
  
However, in this timeline [[Randall]] implies that he is actually more concerned about the (possibly near) future period in which humans control super-smart AI before it becomes fully sentient (and able to rebel), especially once the AI becomes so advanced that it can control swarms of killer robots for the humans who still control it. History is full of examples of people who obtain power and subsequently abuse it to the detriment of the rest of humanity.
  
An example of unintended consequences arising from an AI carrying out the directives it was designed for can be found in the film ''{{w|Ex Machina (film)|Ex Machina}}''.
  
In fact, Randall goes on to imply a greater trust in sentient AI than in other humans, which is atypical of most cautionary stories about AI. In [[1626: Judgment Day]] he alluded to the idea that, once sentient, AI will use its powers to safeguard humanity and prevent violence or war. AI has been a [[:Category:Artificial Intelligence|recurring theme]] on xkcd in general, and he has also opposed the Terminator vision in [[1668: Singularity]] and [[1450: AI-Box Experiment]].
Basically, he states that we will already be in trouble from our own actions long before we develop truly sentient AI that could take control.
The title text adds that we already live in a world with flying killing robots, a reference to the increasingly common combat tactic of {{w|Unmanned_combat_aerial_vehicle|drone warfare}}. (Combat drones are not yet autonomous, but in most other respects match speculative descriptions of future killer robots.) Drone warfare is already controversial because of ethical concerns, leading to the comic's implication that a theoretical future robot apocalypse is no less alarming than our current reality.
He then states that once the machines take over, he is not so much worried about the takeover itself, but about which humans the machines then give power to.
Randall is not alone in his worry. The main theme of the comic is explored in the video [https://www.youtube.com/watch?v=9CO6M2HsoIA Slaughterbots].
In 2015 an {{w|Open Letter on Artificial Intelligence}} was signed by several people, including {{w|Elon Musk}} and {{w|Stephen Hawking}}. The letter warned about the risk of creating something that cannot be controlled, and thus relates to the worry at the end of this comic's timeline. Both Elon Musk and Stephen Hawking have been featured in xkcd (Elon has a [[:Category:Comics featuring Elon Musk|category]], and Stephen appeared in [[799: Stephen Hawking]]).
Stephen Hawking kept [https://www.cnbc.com/2017/11/06/stephen-hawking-ai-could-be-worst-event-in-civilization.html warning about this danger] until shortly before his death on 2018-03-14, two days before the release of this comic.
It could be a coincidence, and it is not a [[:Category:Tribute|Tribute]], but it is still interesting that the first xkcd comic released after Stephen Hawking's death relates directly to his fears, although Randall demonstrates that he worries about earlier potential problems with AI than those Stephen Hawking feared could transpire if an AI becomes self-aware.
  
 
==Transcript==
 
:The part I'm worried about
:The part lots of people seem to worry about
  
 
{{comic discussion}}
