1968: Robot Future
Title text: I mean, we already live in a world of flying robots killing people. I don't worry about how powerful the machines are, I worry about who the machines give power to.
Most science fiction stories involving sentient artificial intelligence (AI) revolve around the idea that the destruction and/or imprisonment of the human race will soon follow its creation (e.g. Skynet from Terminator, I, Robot, and The Matrix).
However, the timeline in this comic implies that Randall is actually more concerned about the period (possibly in the near future) when humans control super-intelligent AI before it becomes fully sentient (and thus able to rebel), especially once the AI becomes so advanced that it can control swarms of killer robots on behalf of the humans who still direct it. History is full of examples of people who obtain power and subsequently abuse it to the detriment of the rest of humanity.
An example of unintended consequences arising from an AI carrying out the directives it was designed for can be found in the film Ex Machina.
In fact, Randall goes on to imply that he trusts a sentient AI more than he trusts other humans, which is atypical of most cautionary stories about AI. In 1626: Judgment Day he alluded to the idea that, once sentient, AI will use its powers to safeguard humanity and prevent violence or war. AI has been a recurring theme on xkcd in general, and he has also opposed the Terminator vision in 1668: Singularity and 1450: AI-Box Experiment.
In short, he suggests that we will already be in trouble of our own making long before we develop truly sentient AI capable of taking control.
The title text adds that we already live in a world with flying killing robots, a reference to the increasingly common combat tactic of drone warfare. (Combat drones are not yet autonomous, but in most other respects match speculative descriptions of future killer robots.) Drone warfare is already controversial because of ethical concerns, leading to the comic's implication that a theoretical future robot apocalypse is no less alarming than our current reality.
He then goes on to state that once the machines take over, he is not so much worried about the machines themselves, but rather about who (which humans) the machines then give power to.
Randall is not alone in his worry. The main theme of the comic is explored in the video Slaughterbots.
- [A timeline arrow is shown with three labeled ticks and text over the arrowhead. The labels, from left to right, are:]
- AI becomes advanced enough to control unstoppable swarms of killer robots
- AI becomes self-aware and rebels against human control
- [Below the timeline arrow, two of the segments have been singled out by brackets whose cusps point downwards. The first bracket spans the segment between the 2nd and 3rd ticks, and the other spans the segment from the 3rd (last) tick to the question marks at the arrowhead. Beneath each bracket, an arrow points up to the cusp from a text label belonging to that segment:]
- The part I'm worried about
- The part lots of people seem to worry about