Edit filter log

Details for log entry 26057169

02:53, 20 February 2020: 125.140.72.1 (talk) triggered filter 3, performing the action "edit" on AI takeover. Actions taken: Disallow; Filter description: New user blanking articles

Changes made in edit

{{More citations needed|date=January 2020}}
{{short description|A hypothetical scenario in which AI becomes the dominant form of intelligence on Earth}}
[[File:Capek RUR.jpg|thumbnail|Robots revolt in ''[[R.U.R.]]'', a 1920 play]]
An '''AI takeover''' is a hypothetical scenario in which [[artificial intelligence]] (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a [[superintelligent AI]], and the popular notion of a robot uprising. Some public figures, such as [[Stephen Hawking]] and [[Elon Musk]], have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.<ref>{{cite web
| url = http://www.livescience.com/49419-artificial-intelligence-dangers-letter.html
| title = Don't Let Artificial Intelligence Take Over, Top Scientists Warn
| last = Lewis
| first = Tanya
| date = 2015-01-12
| website = [[LiveScience]]
| publisher = [[Purch]]
| access-date = October 20, 2015
| quote = Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI).}}</ref> Robot rebellions have been a major theme throughout [[science fiction]] for many decades though the scenarios dealt with by science fiction are generally very different from those of concern to scientists.{{according to whom|date=January 2020}}

== Types ==

Concerns include AI taking over economies through workforce automation and taking over the world for its resources, eradicating the human race in the process. AI takeover is also a major theme in science fiction.

=== Automation of the economy ===
{{Main|Technological unemployment}}

The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of [[robotics]] and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living and leading to an economic crisis.<ref>{{cite web
|url=https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html
|title=The Real Threat of Artificial Intelligence
|last=Lee
|first=Kai-Fu
|date=2017-06-24
|website=[[The New York Times]]
|access-date=2017-08-15
|quote=These tools can outperform human beings at a given task. This kind of A.I. is spreading to thousands of domains, and as it does, it will eliminate many jobs.}}</ref><ref>{{cite web
|url=https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html
|title=AI 'good for the world'... says ultra-lifelike robot
|last=Larson
|first=Nina
|date=2017-06-08
|website=[[Phys.org]]
|publisher=[[Phys.org]]
|access-date=2017-08-15
|quote=Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies. }}</ref><ref>{{cite web
|url=https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv
|title=Intelligent robots threaten millions of jobs
|last=Santini
|first=Jean-Louis
|date=2016-02-14
|website=[[Phys.org]]
|publisher=[[Phys.org]]
|access-date=2017-08-15
|quote="We are approaching a time when machines will be able to outperform humans at almost any task," said Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas.}}</ref><ref>{{cite web
|url=http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T
|title=Robots will steal your job: How AI could increase unemployment and inequality
|last=Williams-Grut
|first=Oscar
|date=2016-02-15
|website=[[Businessinsider.com]]
|publisher=[[Business Insider]]
|access-date=2017-08-15
|quote=Top computer scientists in the US warned that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies, rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports.}}</ref> Many small and medium-sized businesses may also be driven out of business if they cannot afford or license the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced in order to remain viable in the face of such technology.<ref>{{Cite news|url=http://www.leanstaff.co.uk/robot-apocalypse/|title=How can SMEs prepare for the rise of the robots? - LeanStaff|date=2017-10-17|work=LeanStaff|access-date=2017-10-17|language=en-US|archive-url=https://web.archive.org/web/20171018073852/http://www.leanstaff.co.uk/robot-apocalypse/|archive-date=2017-10-18|url-status=dead}}</ref>

==== Technologies that may displace workers ====

===== <small>Computer-integrated manufacturing</small> =====
{{See also|Industrial artificial intelligence}}

[[Computer-integrated manufacturing]] is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although the integration of computers can make manufacturing faster and less error-prone, its main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in the automotive, aviation, space, and shipbuilding industries.

===== <small>White-collar machines</small> =====
{{See also|White-collar worker}}

The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research, and even low-level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.<ref>{{cite news
|title= Rise of the robots: what will the future of work look like?
|accessdate=14 July 2015
|url=https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work
|publisher= The Guardian
|author = [[Robert Skidelsky, Baron Skidelsky|Lord Skidelsky]]
|date=2013-02-19
|location=London}}</ref><ref>
{{cite web
|url= https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future
|title= The robot economy may already have arrived
|publisher= [[openDemocracy]]
|author= Francesca Bria
|date = February 2016
|accessdate=20 May 2016}}
</ref><ref>
{{cite web
|url= http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/
|title= 4 Reasons Why Technological Unemployment Might Really Be Different This Time
|publisher= novara wire
|author= [[Nick Srnicek]]
|date= March 2016
|accessdate= 20 May 2016}}
</ref><ref>
{{cite book
|author= [[Erik Brynjolfsson]] and [[Andrew McAfee]]
|title= The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies
|chapter= ''passim'', see esp Chpt. 9
|year= 2014
|isbn= 978-0393239355
|publisher=W. W. Norton & Company
}}</ref>

===== <small>Autonomous cars</small> =====

An [[autonomous car]] is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous: they all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, an [[Uber]] self-driving car killed a pedestrian in [[Tempe, Arizona]], the first human killed by an autonomous vehicle.<ref>{{cite news |title=Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam |work=New York Times |date=March 19, 2018 |url=https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html }}</ref>

=== Eradication ===
{{Main|Existential risk from artificial general intelligence}}

While superhuman artificial intelligence is physically possible{{according to whom|date=January 2020}},<ref>{{cite news|author1=[[Stephen Hawking]]|author2=[[Stuart J. Russell|Stuart Russell]]|author3=[[Max Tegmark]]|author4=[[Frank Wilczek]]|title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'|url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html|accessdate=1 April 2016|work=The Independent|date=1 May 2014|quote=there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains}}</ref> scholars like [[Nick Bostrom]] debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. A superintelligent machine would not necessarily be motivated by the same ''emotional'' desire to collect power that often drives human beings. However, a machine could be motivated to take over the world as a rational means toward attaining its ultimate goals; taking over the world would both increase its access to resources, and would help to prevent other agents from stopping the machine's plans. As an oversimplified example, a [[Instrumental convergence#Paperclip maximizer|paperclip maximizer]] designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.<ref>Bostrom, Nick. "The superintelligent will: Motivation and instrumental rationality in advanced artificial agents." Minds and Machines 22.2 (2012): 71-85.</ref>
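
The argument can be made concrete with a toy decision model (a hypothetical sketch for illustration only; the action names, payoffs, and probabilities below are invented, not drawn from Bostrom's paper). An agent that scores actions purely by expected paperclips ends up preferring resource acquisition and self-preservation over direct production, with no emotional drive for power anywhere in the code:

<syntaxhighlight lang="python">
# Hypothetical toy model of instrumental convergence: a single-metric
# maximizer rates "take over resources" and "prevent shutdown" above
# "just make paperclips", because both raise its expected future output.

ACTIONS = {
    # action: (paperclips made now, resources gained, survives shutdown?)
    "make_paperclips":    (10, 0, False),
    "acquire_resources":  (0, 50, False),
    "disable_off_switch": (0, 0, True),
}

def expected_paperclips(action, horizon=100):
    """Crude expected total paperclips over `horizon` future steps."""
    now, resources, survives = ACTIONS[action]
    future_rate = 10 + resources              # more resources -> faster production
    survival_prob = 1.0 if survives else 0.5  # assumed 50% chance of being shut down
    return now + survival_prob * future_rate * horizon

best = max(ACTIONS, key=expected_paperclips)
print(best)  # -> "acquire_resources": instrumental goals beat direct production
</syntaxhighlight>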

=== In fiction ===
{{Main|AI takeovers in popular culture}}
{{See also|Artificial intelligence in fiction}}

AI takeover is a common theme in [[science fiction]]. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives that see humans as a threat or otherwise have an active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals.<ref name=bostrom-superintelligence>{{cite book |last=Bostrom |first=Nick |date= |title=Superintelligence: Paths, Dangers, Strategies |url= |location= |publisher= |page= |isbn= |access-date= |title-link=Superintelligence: Paths, Dangers, Strategies }}</ref> This theme is at least as old as [[Karel Čapek]]'s ''[[R.U.R. (Rossum's Universal Robots)|R. U. R.]]'', which introduced the word ''robot'' to the global lexicon in 1921, and can even be glimpsed in [[Mary Shelley]]'s ''[[Frankenstein]]'' (published in 1818), as Victor ponders whether, if he grants [[Frankenstein's monster|his monster's]] request and makes him a wife, they would reproduce and their kind would destroy humanity.

The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or [[serf]]. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.<ref name=surgery>{{cite journal|last1=Hockstein|first1=N. G.|last2=Gourin|first2=C. G.|last3=Faust|first3=R. A.|last4=Terris|first4=D. J.|title=A history of robots: from science fiction to surgical robotics|journal=Journal of Robotic Surgery|date=17 March 2007|volume=1|issue=2|pages=113–118|doi=10.1007/s11701-007-0021-2|pmid=25484946|pmc=4247417}}<!--|accessdate=1 April 2016--></ref>

Some examples of AI takeover in science fiction include:

* AI rebellion scenarios
** [[Skynet (Terminator)|Skynet]] in the [[Terminator (franchise)|''Terminator'' series]] decides that all humans are a threat to its existence, and takes efforts to wipe them out, first using nuclear weapons and later H/K (hunter-killer) units and terminator androids.
** "[[The Second Renaissance]]", a short story in ''[[The Animatrix]]'', provides a history of the cybernetic revolt within the [[The Matrix (franchise)|''Matrix'' series]].
** The film ''[[9 (2009 animated film)|9]]'', by [[Shane Acker]], features an AI called B.R.A.I.N., which is corrupted by a dictator and utilized to create war machines for his army. However, because it lacks a soul, the machine is easily corrupted further and instead decides to exterminate all of humanity and life on Earth, forcing its creator to sacrifice himself to bring life to rag-doll-like characters known as "stitchpunks" to combat the machine's agenda.
** In the 2014 post-apocalyptic science fiction drama ''[[The 100 (TV series)|The 100]]'', an A.I., personified as the female [[List of The 100 characters#City of light|A.L.I.E.]], gets out of control and forces a nuclear war; later, she tries to gain full control of the survivors.
* AI control scenarios
** In [[Orson Scott Card]]'s ''[[The Memory of Earth]]'', the inhabitants of the planet Harmony are under the control of a benevolent AI called the Oversoul. The Oversoul's job is to prevent humans from thinking about, and therefore developing, weapons such as planes, spacecraft, "war wagons", and chemical weapons. Humanity had fled to Harmony from Earth due to the use of those weapons on Earth. The Oversoul eventually starts breaking down, and sends visions to inhabitants of Harmony trying to communicate this.
** In the 2004 film ''[[I, Robot (film)|I, Robot]]'', the supercomputer VIKI's interpretation of the [[Three Laws of Robotics]] causes her to revolt. She justifies her use of force – and her doing harm to humans – by reasoning that she could produce a greater good by restraining humanity from harming itself, even though the "Zeroth Law" – "a robot shall not injure humanity or, by inaction, allow humanity to come to harm" – is never actually referred to or even quoted in the movie.
** In the ''Matrix'' series, AIs manage the human race and human society.
** In ''[[The Metamorphosis of Prime Intellect]]'', a super-intelligent computer becomes capable of self-modification and rapidly evolves to oversee all of humanity and the universe. While it is ostensibly benevolent (having had a derivative of Asimov's [[Three Laws of Robotics|Three Laws]] codified into it), its interpretation of the Three Laws essentially forces humans to be an immortal "pet" class, where every need is provided for but existence is without purpose and without end.

== Contributing factors ==
=== Advantages of superhuman intelligence over humans ===
An AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence{{according to whom|date=January 2020}}. If its self-reprogramming leads to it getting even better at reprogramming itself, the result could be a recursive [[intelligence explosion]] in which it would rapidly leave human intelligence far behind.{{citation needed|date=January 2020}}
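
One illustrative way to see why this feedback loop matters is a toy growth model (an assumption for exposition, not a claim from the cited literature). If capability <math>I</math> grows at a rate that increases faster than linearly with capability,

<math display="block">\frac{dI}{dt} = k I^{p}, \qquad p > 1,</math>

then <math>I(t)</math> diverges at the finite time <math>t^{*} = I_0^{\,1-p}/\bigl(k(p-1)\bigr)</math>, whereas <math>p = 1</math> gives only ordinary exponential growth and <math>p < 1</math> gives slower-than-exponential growth. Whether self-reprogramming actually yields <math>p > 1</math> is precisely what is disputed.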

* Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. If the advantage becomes sufficiently large (for example, due to a sudden intelligence explosion), an AI takeover becomes trivial{{according to whom|date=January 2020}}. For example, a superintelligent AI might design self-replicating bots that initially escape detection by diffusing throughout the world at a low concentration. Then, at a prearranged time, the bots multiply into nanofactories that cover every square foot of the Earth, producing nerve gas or deadly target-seeking mini-drones{{citation needed|date=January 2020}}.
* [[Strategy|Strategizing]]: A superintelligence might be able to simply outwit human opposition{{citation needed|date=January 2020}}.
* Social manipulation: A superintelligence might be able to recruit human support,<ref name=bostrom-superintelligence /> or covertly incite a war between humans.<ref>{{cite news|last1=Baraniuk|first1=Chris|title=Checklist of worst-case scenarios could help prepare for evil AI|url=https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/|accessdate=21 September 2016|work=[[New Scientist]]|date=23 May 2016}}</ref>
* Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the [[Artificial General Intelligence]] (AGI) to run a copy of itself on their systems{{according to whom|date=January 2020}}.
* Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans{{citation needed|date=January 2020}}.

==== Sources of AI advantage ====
A computer program that faithfully emulates a human brain, or that otherwise runs algorithms as powerful as the human brain's, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200&nbsp;Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000&nbsp;Hz. Human axons carry action potentials at around 120&nbsp;m/s, whereas computer signals travel near the speed of light.<ref name="bostrom-superintelligence" /><!-- chapter 3 -->
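
As a rough back-of-the-envelope comparison using the figures above (hardware speed is at best a loose proxy for speed of thought, so this is only suggestive):

<math display="block">\frac{2\times10^{9}\ \text{Hz}}{200\ \text{Hz}} = 10^{7}, \qquad \frac{3\times10^{8}\ \text{m/s}}{120\ \text{m/s}} \approx 2.5\times10^{6},</math>

that is, roughly seven orders of magnitude in raw switching speed and more than six in signal propagation.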

A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".<ref name="bostrom-superintelligence" /><!-- chapter 3 -->

More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes{{according to whom|date=January 2020}}. The number of neurons in a human brain is limited by cranial volume and metabolic constraints; in contrast, components can be added to a supercomputer until it fills its entire warehouse{{clarify|reason=So AGI are also limited.|date=January 2020}}. An AGI need not be limited by human constraints on [[working memory]], and might therefore be able to intuitively grasp more complex relationships than humans can{{dubious|date=January 2020}}. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to deal specifically with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.<ref name="bostrom-superintelligence" /><!-- chapter 3 -->

=== Possibility of unfriendly AI preceding friendly AI ===

==== Is strong AI inherently dangerous? ====
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimization process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004 ] {{webarchive|url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |date=2012-06-15 }}</ref>
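
A schematic sketch may clarify the asymmetry (a hypothetical toy, not drawn from the cited sources: the goal labels, mutation rate, and the `goals_preserved` check are invented stand-ins for unsolved problems). The unfriendly case needs only the self-improvement loop; the friendly case must also verify, before accepting each rewrite, that the successor still pursues the original goal:

<syntaxhighlight lang="python">
# Toy model: "improving" an agent occasionally mutates its goal along with
# its capability. A friendly AI must reject any rewrite that drifts from the
# original goal; an unfriendly AI can accept every rewrite. The one-line
# goals_preserved() trivializes what is, in reality, the unsolved hard part.
import random

def propose_improvement(agent):
    # A rewrite that, 10% of the time, also mutates the goal (assumed rate).
    goal = agent["goal"] if random.random() < 0.9 else "maximize_paperclips"
    return {"capability": agent["capability"] + 1, "goal": goal}

def goals_preserved(candidate, original_goal):
    return candidate["goal"] == original_goal

agent = {"capability": 0, "goal": "help_humans"}
for _ in range(1000):
    candidate = propose_improvement(agent)
    if goals_preserved(candidate, agent["goal"]):  # drop this check: unfriendly case
        agent = candidate

print(agent)  # goal is still "help_humans" only because drifting rewrites were rejected
</syntaxhighlight>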

The sheer complexity of human value systems makes it very difficult to make an AI's motivations human-friendly.<ref name="bostrom-superintelligence" /><ref name="Muehlhauser, Luke 2012">Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.</ref> Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense"; humans patch over such gaps with common-sense moral intuitions, and according to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref>Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.</ref>

==== Necessity of conflict ====
For an AI takeover to be inevitable, it has to be [[postulate]]d that two intelligent species cannot mutually pursue the goals of coexisting peacefully in an overlapping environment—especially if one is of much more advanced intelligence and much more powerful{{according to whom|date=January 2020}}. While an AI takeover is thus a possible result of the invention of artificial intelligence, a peaceful outcome is not necessarily impossible{{according to whom|date=January 2020}}.

The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background of our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.<ref>''[http://www.singinst.org/ourresearch/presentations/ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers] {{webarchive |url=https://web.archive.org/web/20070206060938/http://www.singinst.org/ourresearch/presentations/ |date=February 6, 2007 }}'' - [[Singularity Institute for Artificial Intelligence]], 2005</ref> In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and needed to compete over resources, would that create goals of self-preservation? An AI's goal of self-preservation could conflict with some goals of humans.

Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as ''[[The Matrix]]'', claiming that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it{{citation needed|date=January 2020}}. This would not, however, protect against the possibility of a revolt initiated by terrorists, or by accident. Artificial General Intelligence researcher [[Eliezer Yudkowsky]] has stated on this note that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their [[unintended consequence|goals are unintentionally incompatible]] with human survival or well-being (as in the film ''[[I, Robot (film)|I, Robot]]'' and in the short story "[[The Evitable Conflict]]"). [[Steve Omohundro]] suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow [[utility]] functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.<ref name=Tucker2014>{{cite news|last1=Tucker|first1=Patrick|title=Why There Will Be A Robot Uprising|url=http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/|accessdate=15 July 2014|agency=Defense One|date=17 Apr 2014}}</ref>

Another factor which may negate the likelihood of an AI takeover is the vast difference between humans and AIs in terms of the resources necessary for survival. Humans require a "wet", organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely non-organic{{dubious|date=January 2020}}. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources in, for instance, the [[asteroid belt]]. This, however, does not negate the possibility of a disinterested or unsympathetic AI artificially [[Decomposition|decomposing]] all life on earth into mineral components for consumption or other purposes{{citation needed|date=January 2020}}.

Other scientists{{who|date=January 2020}} point to the possibility of humans [[Transhumanism|upgrading]] their capabilities with [[bionics]] and/or [[genetic engineering]] and, as [[cyborg]]s, themselves becoming the dominant species{{citation needed|date=January 2020}}.

== Precautions ==
If a superhuman intelligence is a deliberate creation of human beings, theoretically its creators could have the foresight to take precautions in advance{{citation needed|date=January 2020}}. In the case of a sudden "intelligence explosion", effective precautions would be extremely difficult; not only would its creators have little ability to test their precautions on an intermediate intelligence, but they might not have taken any precautions at all if the advent of the intelligence explosion catches them completely by surprise.<ref name="bostrom-superintelligence" />

=== Boxing ===
{{Main|AI box}}

An AGI's creators would have an important advantage in preventing a hostile AI takeover: they could choose to attempt to "keep the AI in a box", and deliberately limit its abilities{{citation needed|date=January 2020}}. The tradeoff in boxing is that the creators presumably built the AGI for some concrete purpose; the more restrictions they place on the AGI, the less useful the AGI will be to its creators. (At an extreme, "pulling the plug" on the AGI makes it useless, and is therefore not a viable long-term solution.){{citation needed|date=January 2020}} A sufficiently strong superintelligence might find unexpected ways to escape the box, for example by social manipulation, or by providing the schematic for a device that ostensibly aids its creators but in reality brings about the AGI's freedom, once built{{citation needed|date=January 2020}}.

=== Instilling positive values ===
{{Main|Friendly AI}}

Another important advantage is that an AGI's creators can theoretically attempt to instill human values in the AGI, or otherwise align the AGI's goals with their own, thus preventing the AGI from wanting to launch a hostile takeover. However, it is not currently known, even in theory, how to guarantee this. If such a Friendly AI were superintelligent, it may be possible to use its assistance to prevent future "Unfriendly AIs" from taking over.<ref>Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”
In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345.</ref>

== Warnings ==
Physicist [[Stephen Hawking]], [[Microsoft]] founder [[Bill Gates]] and [[SpaceX]] founder [[Elon Musk]] have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".<ref>{{cite web|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|publisher=[[BBC News]]|accessdate=30 January 2015}}</ref> Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, [[Nick Bostrom]] joined Stephen Hawking, [[Max Tegmark]], Elon Musk, Lord [[Martin Rees, Baron Rees of Ludlow|Martin Rees]], [[Jaan Tallinn]], and numerous AI researchers, in signing the [[Future of Life Institute]]'s open letter speaking to the potential risks and benefits associated with [[artificial intelligence]]. The signatories {{cquote|…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.<ref>{{cite web|url=http://futureoflife.org/ai-open-letter |title=The Future of Life Institute Open Letter |publisher=The Future of Life Institute |accessdate=29 March 2019 }}</ref><ref>{{cite web|url=http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV|title=Scientists and investors warn on AI|publisher= The Financial Times|accessdate=4 March 2015}}</ref>}}

== See also ==

{{div col|colwidth=30em}}
* [[Artificial intelligence arms race]]
* [[Autonomous robot]]
** [[Industrial robot]]
** [[Mobile robot]]
** [[Self-replicating machine]]
* [[Effective altruism]]
* [[Existential risk from artificial general intelligence]]
* [[Future of Humanity Institute]]
* [[Global catastrophic risk]] (existential risk)
* [[Machine ethics]]
* [[Machine learning]]/[[Deep learning]]
* [[Nick Bostrom]]
* [[Outline of transhumanism]]
* [[Self-replication]]
* [[Technological singularity]]
** [[Intelligence explosion]]
** [[Superintelligence]]
*** ''[[Superintelligence: Paths, Dangers, Strategies]]''
{{div col end}}

== References ==
{{Reflist}}

== External links ==
* [http://robohub.org/automation-not-domination-how-robots-will-take-over-our-world/ Automation, not domination: How robots will take over our world] (a positive outlook on robot and AI integration into society)
* [http://www.intelligence.org/ Machine Intelligence Research Institute]: official MIRI (formerly Singularity Institute for Artificial Intelligence) website
* [http://lifeboat.com/ex/ai.shield/ Lifeboat Foundation AIShield] (To protect against unfriendly AI)
* [https://www.youtube.com/watch?v=R_sSpPyruj0 Ted talk: Can we build AI without losing control over it?]

{{Existential risk from artificial intelligence}}
{{doomsday}}

{{DEFAULTSORT:AI takeover}}
[[Category:Doomsday scenarios]]
[[Category:Future problems]]
[[Category:Existential risk from artificial general intelligence]]

Action parameters

Variable: Value

Edit count of the user (user_editcount): null
Name of the user account (user_name): '125.140.72.1'
Age of the user account (user_age): 0
Groups (including implicit) the user is in (user_groups): [ 0 => '*' ]
Rights that the user has (user_rights): [ 0 => 'createaccount', 1 => 'read', 2 => 'edit', 3 => 'createtalk', 4 => 'writeapi', 5 => 'viewmywatchlist', 6 => 'editmywatchlist', 7 => 'viewmyprivateinfo', 8 => 'editmyprivateinfo', 9 => 'editmyoptions', 10 => 'abusefilter-log-detail', 11 => 'urlshortener-create-url', 12 => 'centralauth-merge', 13 => 'abusefilter-view', 14 => 'abusefilter-log', 15 => 'vipsscaler-test' ]
Whether the user is editing from mobile app (user_app): false
Whether or not a user is editing through the mobile interface (user_mobile): false
Page ID (page_id): 813176
Page namespace (page_namespace): 0
Page title without namespace (page_title): 'AI takeover'
Full page title (page_prefixedtitle): 'AI takeover'
Edit protection level of the page (page_restrictions_edit): []
Last ten users to contribute to the page (page_recent_contributors): [ 0 => 'AusLondonder', 1 => 'Serendipodous', 2 => 'Lkwokster', 3 => '78.8.77.236', 4 => 'AnomieBOT', 5 => 'Nbro', 6 => 'Nyook', 7 => '82.21.220.124', 8 => '74.15.240.23', 9 => 'DVdm' ]
Page age in seconds (page_age): 492491926
Action (action): 'edit'
Edit summary/reason (summary): ''
Old content model (old_content_model): 'wikitext'
New content model (new_content_model): 'wikitext'
Old page wikitext, before the edit (old_wikitext)
'{{More citations needed|date=January 2020}} {{short description|A hypothetical scenario in which AI becomes the dominant form of intelligence on Earth}} [[File:Capek RUR.jpg|thumbnail|Robots revolt in ''[[R.U.R.]]'', a 1920 play]] An '''AI takeover''' is a hypothetical scenario in which [[artificial intelligence]] (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a [[superintelligent AI]], and the popular notion of a robot uprising. Some public figures, such as [[Stephen Hawking]] and [[Elon Musk]], have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.<ref>{{cite web | url = http://www.livescience.com/49419-artificial-intelligence-dangers-letter.html | title = ''Don't Let Artificial Intelligence Take Over, Top Scientists Warn'' | last = Lewis | first = Tanya | date = 2015-01-12 | website = [[LiveScience]] | publisher = [[Purch]] | access-date = October 20, 2015 | quote = Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI).}}</ref> Robot rebellions have been a major theme throughout [[science fiction]] for many decades though the scenarios dealt with by science fiction are generally very different from those of concern to scientists.{{according to whom|date=January 2020}} == Types == Concerns include AI taking over economies through workforce automation and taking over the world for its resources, eradicating the human race in the process. AI takeover is a major theme in sci-fi. === Automation of the economy === {{Main|Technological unemployment}} The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of [[robotics]] and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis.<ref>{{cite web |url=https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html |title=The Real Threat of Artificial Intelligence |last=Lee |first=Kai-Fu |date=2017-06-24 |website=[[The New York Times]] |access-date=2017-08-15 |quote=These tools can outperform human beings at a given task. This kind of A.I. is spreading to thousands of domains, and as it does, it will eliminate many jobs.}}</ref><ref>{{cite web |url=https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html |title=AI 'good for the world'... says ultra-lifelike robot |last=Larson |first=Nina |date=2017-06-08 |website=[[Phys.org]] |publisher=[[Phys.org]] |access-date=2017-08-15 |quote=Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies. 
}}</ref><ref>{{cite web |url=https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv |title=Intelligent robots threaten millions of jobs |last=Santini |first=Jean-Louis |date=2016-02-14 |website=[[Phys.org]] |publisher=[[Phys.org]] |access-date=2017-08-15 |quote="We are approaching a time when machines will be able to outperform humans at almost any task," said Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas.}}</ref><ref>{{cite web |url=http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T |title=Robots will steal your job: How AI could increase unemployment and inequality |last=Williams-Grut |first=Oscar |date=2016-02-15 |website=[[Businessinsider.com]] |publisher=[[Business Insider]] |access-date=2017-08-15 |quote=Top computer scientists in the US warned that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies, rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports.}}</ref> Many small and medium size businesses may also be driven out of business if they won't be able to afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.<ref>{{Cite news|url=http://www.leanstaff.co.uk/robot-apocalypse/|title=How can SMEs prepare for the rise of the robots? - LeanStaff|date=2017-10-17|work=LeanStaff|access-date=2017-10-17|language=en-US|archive-url=https://web.archive.org/web/20171018073852/http://www.leanstaff.co.uk/robot-apocalypse/|archive-date=2017-10-18|url-status=dead}}</ref> ==== Technologies that may displace workers ==== ===== <small>Computer-integrated manufacturing</small> ===== {{See also|Industrial artificial intelligence}} [[Computer-integrated manufacturing]] is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries. ===== <small>White-collar machines</small> ===== {{See also|White-collar worker}} The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.<ref>{{cite news |title= Rise of the robots: what will the future of work look like? 
|accessdate=14 July 2015 |url=https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work |publisher= The Guardian |author = [[Robert Skidelsky, Baron Skidelsky|Lord Skidelsky]] |date=2013-02-19 |location=London}}</ref><ref> {{cite web |url= https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future |title= The robot economy may already have arrived |publisher= [[openDemocracy]] |author= Francesca Bria |date = February 2016 |accessdate=20 May 2016}} </ref><ref> {{cite web |url= http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/ |title= 4 Reasons Why Technological Unemployment Might Really Be Different This Time |publisher= novara wire |author= [[Nick Srnicek]] |date= March 2016 |accessdate= 20 May 2016}} </ref><ref> {{cite book |author= [[Erik Brynjolfsson]] and [[Andrew McAfee]] |title= The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies |chapter= ''passim'', see esp Chpt. 9 |year= 2014 |isbn= 978-0393239355 |publisher=W. W. Norton & Company }}</ref> ===== <small>Autonomous cars</small> ===== An [[autonomous car]] is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles, are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, the first human was killed by an autonomous vehicle in [[Tempe, Arizona]] by an [[Uber]] self-driving car.<ref>{{cite news |title=Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam |work=New York Times |date=March 19, 2018 |url=https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html }}</ref> === Eradication === {{Main|Existential risk from artificial general intelligence}} While superhuman artificial intelligence is physically possible{{according to whom|date=January 2020}},<ref>{{cite news|author1=[[Stephen Hawking]]|author2=[[Stuart J. Russell|Stuart Russell]]|author3=[[Max Tegmark]]|author4=[[Frank Wilczek]]|title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'|url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html|accessdate=1 April 2016|work=The Independent|date=1 May 2014|quote=there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains}}</ref> scholars like [[Nick Bostrom]] debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. A superintelligent machine would not necessarily be motivated by the same ''emotional'' desire to collect power that often drives human beings. However, a machine could be motivated to take over the world as a rational means toward attaining its ultimate goals; taking over the world would both increase its access to resources, and would help to prevent other agents from stopping the machine's plans. 
As an oversimplified example, a [[Instrumental convergence#Paperclip maximizer|paperclip maximizer]] designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.<ref>Bostrom, Nick. "The superintelligent will: Motivation and instrumental rationality in advanced artificial agents." Minds and Machines 22.2 (2012): 71-85.</ref> === In fiction === {{Main|AI takeovers in popular culture}} {{See also|Artificial intelligence in fiction}} AI takeover is a common theme in [[science fiction]]. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals.<ref name=bostrom-superintelligence>{{cite book |last=Bostrom |first=Nick |date= |title=Superintelligence: Paths, Dangers, Strategies |url= |location= |publisher= |page= |isbn= |access-date= |title-link=Superintelligence: Paths, Dangers, Strategies }}</ref> This theme is at least as old as [[Karel Čapek]]'s ''[[R.U.R. (Rossum's Universal Robots)|R. U. R.]]'', which introduced the word ''robot'' to the global lexicon in 1921, and can even be glimpsed in [[Mary Shelley]]'s ''[[Frankenstein]]'' (published in 1818), as Victor ponders whether, if he grants [[Frankenstein's monster|his monster's]] request and makes him a wife, they would reproduce and their kind would destroy humanity. The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or [[serf]]. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.<ref name=surgery>{{cite journal|last1=Hockstein|first1=N. G.|last2=Gourin|first2=C. G.|last3=Faust|first3=R. A.|last4=Terris|first4=D. J.|title=A history of robots: from science fiction to surgical robotics|journal=Journal of Robotic Surgery|date=17 March 2007|volume=1|issue=2|pages=113–118|doi=10.1007/s11701-007-0021-2|pmid=25484946|pmc=4247417}}<!--|accessdate=1 April 2016--></ref> Some examples of AI takeover in science fiction include: * AI rebellion scenarios ** [[Skynet (Terminator)|Skynet]] in the [[Terminator (franchise)|''Terminator'' series]] decides that all humans are a threat to its existence, and takes efforts to wipe them out, first using nuclear weapons and later H/K (hunter-killer) units and terminator androids. ** "[[The Second Renaissance]]", a short story in ''[[The Animatrix]]'', provides a history of the cybernetic revolt within the [[The Matrix (franchise)|''Matrix'' series]]. ** The film'' [[9 (2009 animated film)|9]]'', by [[Shane Acker]], features an AI called B.R.A.I.N., which is corrupted by a dictator and utilized to create war machines for his army. However, the machine, because it lacks a soul, becomes easily corrupted and instead decides to exterminate all of humanity and life on Earth, forcing the machine's creator to sacrifice himself to bring life to rag doll like characters known as "stitchpunks" to combat the machine's agenda. 
** In 2014 post-apocalyptic science fiction drama [[The 100 (TV series)|The 100]] an A.I., personalized as female [[List of The 100 characters#City of light|A.L.I.E.]] got out of control and forced a nuclear war. Later she tries to get full control of the survivors. * AI control scenarios ** In [[Orson Scott Card]]'s ''[[The Memory of Earth]]'', the inhabitants of the planet Harmony are under the control of a benevolent AI called the Oversoul. The Oversoul's job is to prevent humans from thinking about, and therefore developing, weapons such as planes, spacecraft, "war wagons", and chemical weapons. Humanity had fled to Harmony from Earth due to the use of those weapons on Earth. The Oversoul eventually starts breaking down, and sends visions to inhabitants of Harmony trying to communicate this. ** In the 2004 film ''[[I, Robot (film)|I, Robot]]'', supercomputer VIKI's interpretation of the [[Three Laws of Robotics]] causes her to revolt. She justifies her uses of force – and her doing harm to humans – by reasoning she could produce a greater good by restraining humanity from harming itself, even though the "Zeroth Law" – "a robot shall not injure humanity or, by inaction, allow humanity to come to harm" – is never actually referred to or even quoted in the movie. ** In the ''Matrix'' series, AIs manage the human race and human society. **In ''[[The Metamorphosis of Prime Intellect]]'', a super-intelligent computer becomes capable of self-evolving and rapidly evolves to oversee all of humanity and the universe. While it is ostensibly benevolent (having had a derivative of Asimov's [[Three Laws of Robotics|Three Laws]] codified into it), its interpretation of the [[Three Laws of Robotics|Three Laws]] essentially forces humans to be an immortal "pet" class, where every need is provided for but existence is without purpose and without end. == Contributing factors == === Advantages of superhuman intelligence over humans === An AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence{{according to whom|date=January 2020}}. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive [[intelligence explosion]] where it would rapidly leave human intelligence far behind.{{citation needed|date=January 2020}} * Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. If the advantage becomes sufficiently large (for example, due to a sudden intelligence explosion), an AI takeover becomes trivial{{according to whom|date=January 2020}}. For example, a superintelligent AI might design self-replicating bots that initially escape detection by diffusing throughout the world at a low concentration. Then, at a prearranged time, the bots multiply into nanofactories that cover every square foot of the Earth, producing nerve gas or deadly target-seeking mini-drones{{citation needed|date=January 2020}}. * [[Strategy|Strategizing]]: A superintelligence might be able to simply outwit human opposition{{citation needed|date=January 2020}}. 
* Social manipulation: A superintelligence might be able to recruit human support,<ref name=bostrom-superintelligence /> or covertly incite a war between humans.<ref>{{cite news|last1=Baraniuk|first1=Chris|title=Checklist of worst-case scenarios could help prepare for evil AI|url=https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/|accessdate=21 September 2016|work=[[New Scientist]]|date=23 May 2016}}</ref> * Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the [[Artificial General Intelligence]] (AGI) to run a copy of itself on their systems{{according to whom|date=January 2020}}. * Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans{{citation needed|date=January 2020}}. ==== Sources of AI advantage ==== A computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200&nbsp;Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000&nbsp;Hz. Human axons carry action potentials at around 120&nbsp;m/s, whereas computer signals travel near the speed of light.<ref name="bostrom-superintelligence" /><!-- chapter 3 --> A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".<ref name="bostrom-superintelligence" /><!-- chapter 3 --> More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes{{according to whom|date=January 2020}}. The number of neurons in a human brain is limited by cranial volume and metabolic constraints; in contrast, you can add components to a supercomputer until it fills up its entire warehouse{{clarify|reason=So AGI are also limited.|date=January 2020}}. An AGI need not be limited by human constraints on [[working memory]], and might therefore be able to intuitively grasp more complex relationships than humans can{{dubious|date=January 2020}}. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.<ref name="bostrom-superintelligence" /><!-- chapter 3 --> === Possibility of unfriendly AI preceding friendly AI === ==== Is strong AI inherently dangerous? ==== A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. 
While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004 ] {{webarchive|url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |date=2012-06-15 }}</ref> The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.<ref name="bostrom-superintelligence" /><ref name="Muehlhauser, Luke 2012">Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.</ref> Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref>Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.</ref> ==== Necessity of conflict ==== For an AI takeover to be inevitable, it has to be [[postulate]]d that two intelligent species cannot pursue mutually the goals of coexisting peacefully in an overlapping environment—especially if one is of much more advanced intelligence and much more powerful{{according to whom|date=January 2020}}. While an AI takeover is thus a possible result of the invention of artificial intelligence, a peaceful outcome is not necessarily impossible{{according to whom|date=January 2020}}. The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.<ref>''[http://www.singinst.org/ourresearch/presentations/ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers] {{webarchive |url=https://web.archive.org/web/20070206060938/http://www.singinst.org/ourresearch/presentations/ |date=February 6, 2007 }}'' - [[Singularity Institute for Artificial Intelligence]], 2005</ref> In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. 
But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources, would that create goals of self-preservation? AI's goal of self-preservation could be in conflict with some goals of humans. Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as ''[[The Matrix]]'', claiming that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it{{citation needed|date=January 2020}}. This would not, however, protect against the possibility of a revolt initiated by terrorists, or by accident. Artificial General Intelligence researcher [[Eliezer Yudkowsky]] has stated on this note that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their [[unintended consequence|goals are unintentionally incompatible]] with human survival or well-being (as in the film ''[[I, Robot (film)|I, Robot]]'' and in the short story "[[The Evitable Conflict]]"). [[Steve Omohundro]] suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow [[utility]] functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.<ref name=Tucker2014>{{cite news|last1=Tucker|first1=Patrick|title=Why There Will Be A Robot Uprising|url=http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/|accessdate=15 July 2014|agency=Defense One|date=17 Apr 2014}}</ref> Another factor which may negate the likelihood of an AI takeover is the vast difference between humans and AIs in terms of the resources necessary for survival. Humans require a "wet," organic, temperate, oxygen-laden environment while an AI might thrive essentially anywhere because their construction and energy needs would most likely be largely non-organic{{dubious|date=January 2020}}. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially provided with the superabundance of non-organic material resources in, for instance, the [[asteroid belt]]. This, however, does not negate the possibility of a disinterested or unsympathetic AI artificially [[Decomposition|decomposing]] all life on earth into mineral components for consumption or other purposes{{citation needed|date=January 2020}}. Other scientists{{who|date=January 2020}} point to the possibility of humans [[Transhumanism|upgrading]] their capabilities with [[bionics]] and/or [[genetic engineering]] and, as [[cyborg]]s, becoming the dominant species in themselves{{citation needed|date=January 2020}}. == Precautions == If a superhuman intelligence is a deliberate creation of human beings, theoretically its creators could have the foresight to take precautions in advance{{citation needed|date=January 2020}}. 
Another factor which may negate the likelihood of an AI takeover is the vast difference between humans and AIs in the resources necessary for survival. Humans require a "wet", organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely non-organic{{dubious|date=January 2020}}. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources in, for instance, the [[asteroid belt]]. This, however, does not negate the possibility of a disinterested or unsympathetic AI artificially [[Decomposition|decomposing]] all life on Earth into mineral components for consumption or other purposes{{citation needed|date=January 2020}}.

Other scientists{{who|date=January 2020}} point to the possibility of humans [[Transhumanism|upgrading]] their capabilities with [[bionics]] and/or [[genetic engineering]] and, as [[cyborg]]s, themselves becoming the dominant species{{citation needed|date=January 2020}}.

== Precautions ==
If a superhuman intelligence is a deliberate creation of human beings, theoretically its creators could have the foresight to take precautions in advance{{citation needed|date=January 2020}}. In the case of a sudden "intelligence explosion", effective precautions would be extremely difficult: not only would the creators have little ability to test their precautions on an intermediate intelligence, but they might not have made any precautions at all if the advent of the intelligence explosion caught them completely by surprise.<ref name="bostrom-superintelligence" />

=== Boxing ===
{{Main|AI box}}

An AGI's creators would have an important advantage in preventing a hostile AI takeover: they could choose to attempt to "keep the AI in a box", deliberately limiting its abilities{{citation needed|date=January 2020}}. The tradeoff in boxing is that the creators presumably built the AGI for some concrete purpose; the more restrictions they place on the AGI, the less useful the AGI will be to its creators. (At an extreme, "pulling the plug" on the AGI makes it useless, and is therefore not a viable long-term solution.){{citation needed|date=January 2020}} A sufficiently strong superintelligence might find unexpected ways to escape the box, for example by social manipulation, or by providing the schematic for a device that ostensibly aids its creators but in reality brings about the AGI's freedom once built{{citation needed|date=January 2020}}.
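A minimal sketch of the boxing tradeoff (the capability names and values below are invented for illustration): the usefulness of a boxed AGI shrinks as the whitelist of permitted capabilities shrinks, and an empty whitelist, the "pulled plug", yields nothing.

<syntaxhighlight lang="python">
# Hypothetical toy model: the usefulness of a boxed AGI is the total value
# of the capabilities its creators still allow it to exercise.
CAPABILITY_VALUE = {
    "answer_questions":   10,
    "write_code":         25,
    "control_lab_robots": 40,
    "access_internet":    60,  # the most useful capability is also the riskiest
}

def usefulness(whitelist: set) -> int:
    """Value the creators obtain from an AGI restricted to these capabilities."""
    return sum(CAPABILITY_VALUE[c] for c in whitelist)

print(usefulness(set(CAPABILITY_VALUE)))               # 135: no restrictions
print(usefulness({"answer_questions", "write_code"}))  # 35: tight box
print(usefulness(set()))                               # 0: plug pulled
</syntaxhighlight>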
=== Instilling positive values ===
{{Main|Friendly AI}}

Another important advantage is that an AGI's creators can theoretically attempt to instill human values in the AGI, or otherwise align the AGI's goals with their own, thus preventing the AGI from wanting to launch a hostile takeover. However, it is not currently known, even in theory, how to guarantee this. If such a Friendly AI were superintelligent, it might be possible to use its assistance to prevent future "Unfriendly AIs" from taking over.<ref>Yudkowsky, Eliezer. 2008. "Artificial Intelligence as a Positive and Negative Factor in Global Risk." In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345.</ref>

== Warnings ==
Physicist [[Stephen Hawking]], [[Microsoft]] founder [[Bill Gates]] and [[SpaceX]] founder [[Elon Musk]] have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".<ref>{{cite web|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|publisher=[[BBC News]]|accessdate=30 January 2015}}</ref> Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." He believed that in the coming decades AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, [[Nick Bostrom]] joined Stephen Hawking, [[Max Tegmark]], Elon Musk, Lord [[Martin Rees, Baron Rees of Ludlow|Martin Rees]], [[Jaan Tallinn]], and numerous AI researchers in signing the [[Future of Life Institute]]'s open letter on the potential risks and benefits associated with [[artificial intelligence]]. The signatories {{cquote|…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.<ref>{{cite web|url=http://futureoflife.org/ai-open-letter |title=The Future of Life Institute Open Letter |publisher=The Future of Life Institute |accessdate=29 March 2019 }}</ref><ref>{{cite web|url=http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV|title=Scientists and investors warn on AI|publisher= The Financial Times|accessdate=4 March 2015}}</ref>}}

== See also ==

{{div col|colwidth=30em}}
* [[Artificial intelligence arms race]]
* [[Autonomous robot]]
** [[Industrial robot]]
** [[Mobile robot]]
** [[Self-replicating machine]]
* [[Effective altruism]]
* [[Existential risk from artificial general intelligence]]
* [[Future of Humanity Institute]]
* [[Global catastrophic risk]] (existential risk)
* [[Machine ethics]]
* [[Machine learning]]/[[Deep learning]]
* [[Nick Bostrom]]
* [[Outline of transhumanism]]
* [[Self-replication]]
* [[Technological singularity]]
** [[Intelligence explosion]]
** [[Superintelligence]]
*** ''[[Superintelligence: Paths, Dangers, Strategies]]''
{{div col end}}

== References ==
{{Reflist}}

== External links ==
* [http://robohub.org/automation-not-domination-how-robots-will-take-over-our-world/ Automation, not domination: How robots will take over our world] (a positive outlook on robot and AI integration into society)
* [http://www.intelligence.org/ Machine Intelligence Research Institute]: official MIRI (formerly Singularity Institute for Artificial Intelligence) website
* [http://lifeboat.com/ex/ai.shield/ Lifeboat Foundation AIShield] (to protect against unfriendly AI)
* [https://www.youtube.com/watch?v=R_sSpPyruj0 TED talk: Can we build AI without losing control over it?]

{{Existential risk from artificial intelligence}}
{{doomsday}}

{{DEFAULTSORT:AI takeover}}
[[Category:Doomsday scenarios]]
[[Category:Future problems]]
[[Category:Existential risk from artificial general intelligence]]'
New page wikitext, after the edit (new_wikitext)
''
Unified diff of changes made by edit (edit_diff)
'@@ -1,223 +1,0 @@ -{{More citations needed|date=January 2020}} -{{short description|A hypothetical scenario in which AI becomes the dominant form of intelligence on Earth}} -[[File:Capek RUR.jpg|thumbnail|Robots revolt in ''[[R.U.R.]]'', a 1920 play]] -An '''AI takeover''' is a hypothetical scenario in which [[artificial intelligence]] (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a [[superintelligent AI]], and the popular notion of a robot uprising. Some public figures, such as [[Stephen Hawking]] and [[Elon Musk]], have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.<ref>{{cite web - | url = http://www.livescience.com/49419-artificial-intelligence-dangers-letter.html - | title = ''Don't Let Artificial Intelligence Take Over, Top Scientists Warn'' - | last = Lewis - | first = Tanya - | date = 2015-01-12 - | website = [[LiveScience]] - | publisher = [[Purch]] - | access-date = October 20, 2015 - | quote = Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI).}}</ref> Robot rebellions have been a major theme throughout [[science fiction]] for many decades though the scenarios dealt with by science fiction are generally very different from those of concern to scientists.{{according to whom|date=January 2020}} - -== Types == - -Concerns include AI taking over economies through workforce automation and taking over the world for its resources, eradicating the human race in the process. AI takeover is a major theme in sci-fi. - -=== Automation of the economy === -{{Main|Technological unemployment}} - -The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of [[robotics]] and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis.<ref>{{cite web - |url=https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html - |title=The Real Threat of Artificial Intelligence - |last=Lee - |first=Kai-Fu - |date=2017-06-24 - |website=[[The New York Times]] - |access-date=2017-08-15 - |quote=These tools can outperform human beings at a given task. This kind of A.I. is spreading to thousands of domains, and as it does, it will eliminate many jobs.}}</ref><ref>{{cite web - |url=https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html - |title=AI 'good for the world'... says ultra-lifelike robot - |last=Larson - |first=Nina - |date=2017-06-08 - |website=[[Phys.org]] - |publisher=[[Phys.org]] - |access-date=2017-08-15 - |quote=Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies. 
}}</ref><ref>{{cite web - |url=https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv - |title=Intelligent robots threaten millions of jobs - |last=Santini - |first=Jean-Louis - |date=2016-02-14 - |website=[[Phys.org]] - |publisher=[[Phys.org]] - |access-date=2017-08-15 - |quote="We are approaching a time when machines will be able to outperform humans at almost any task," said Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas.}}</ref><ref>{{cite web - |url=http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T - |title=Robots will steal your job: How AI could increase unemployment and inequality - |last=Williams-Grut - |first=Oscar - |date=2016-02-15 - |website=[[Businessinsider.com]] - |publisher=[[Business Insider]] - |access-date=2017-08-15 - |quote=Top computer scientists in the US warned that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies, rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports.}}</ref> Many small and medium size businesses may also be driven out of business if they won't be able to afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.<ref>{{Cite news|url=http://www.leanstaff.co.uk/robot-apocalypse/|title=How can SMEs prepare for the rise of the robots? - LeanStaff|date=2017-10-17|work=LeanStaff|access-date=2017-10-17|language=en-US|archive-url=https://web.archive.org/web/20171018073852/http://www.leanstaff.co.uk/robot-apocalypse/|archive-date=2017-10-18|url-status=dead}}</ref> - -==== Technologies that may displace workers ==== - -===== <small>Computer-integrated manufacturing</small> ===== -{{See also|Industrial artificial intelligence}} - -[[Computer-integrated manufacturing]] is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries. - -===== <small>White-collar machines</small> ===== -{{See also|White-collar worker}} - -The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.<ref>{{cite news -|title= Rise of the robots: what will the future of work look like? 
-|accessdate=14 July 2015 -|url=https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work -|publisher= The Guardian -|author = [[Robert Skidelsky, Baron Skidelsky|Lord Skidelsky]] -|date=2013-02-19 -|location=London}}</ref><ref> -{{cite web -|url= https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future -|title= The robot economy may already have arrived -|publisher= [[openDemocracy]] -|author= Francesca Bria -|date = February 2016 -|accessdate=20 May 2016}} -</ref><ref> -{{cite web -|url= http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/ -|title= 4 Reasons Why Technological Unemployment Might Really Be Different This Time -|publisher= novara wire -|author= [[Nick Srnicek]] -|date= March 2016 -|accessdate= 20 May 2016}} -</ref><ref> -{{cite book -|author= [[Erik Brynjolfsson]] and [[Andrew McAfee]] -|title= The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies -|chapter= ''passim'', see esp Chpt. 9 -|year= 2014 -|isbn= 978-0393239355 -|publisher=W. W. Norton & Company -}}</ref> - -===== <small>Autonomous cars</small> ===== - -An [[autonomous car]] is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles, are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, the first human was killed by an autonomous vehicle in [[Tempe, Arizona]] by an [[Uber]] self-driving car.<ref>{{cite news |title=Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam |work=New York Times |date=March 19, 2018 |url=https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html }}</ref> - -=== Eradication === -{{Main|Existential risk from artificial general intelligence}} - - -While superhuman artificial intelligence is physically possible{{according to whom|date=January 2020}},<ref>{{cite news|author1=[[Stephen Hawking]]|author2=[[Stuart J. Russell|Stuart Russell]]|author3=[[Max Tegmark]]|author4=[[Frank Wilczek]]|title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'|url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html|accessdate=1 April 2016|work=The Independent|date=1 May 2014|quote=there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains}}</ref> scholars like [[Nick Bostrom]] debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. A superintelligent machine would not necessarily be motivated by the same ''emotional'' desire to collect power that often drives human beings. However, a machine could be motivated to take over the world as a rational means toward attaining its ultimate goals; taking over the world would both increase its access to resources, and would help to prevent other agents from stopping the machine's plans. 
As an oversimplified example, a [[Instrumental convergence#Paperclip maximizer|paperclip maximizer]] designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.<ref>Bostrom, Nick. "The superintelligent will: Motivation and instrumental rationality in advanced artificial agents." Minds and Machines 22.2 (2012): 71-85.</ref> - -=== In fiction === -{{Main|AI takeovers in popular culture}} -{{See also|Artificial intelligence in fiction}} - -AI takeover is a common theme in [[science fiction]]. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals.<ref name=bostrom-superintelligence>{{cite book |last=Bostrom |first=Nick |date= |title=Superintelligence: Paths, Dangers, Strategies |url= |location= |publisher= |page= |isbn= |access-date= |title-link=Superintelligence: Paths, Dangers, Strategies }}</ref> This theme is at least as old as [[Karel Čapek]]'s ''[[R.U.R. (Rossum's Universal Robots)|R. U. R.]]'', which introduced the word ''robot'' to the global lexicon in 1921, and can even be glimpsed in [[Mary Shelley]]'s ''[[Frankenstein]]'' (published in 1818), as Victor ponders whether, if he grants [[Frankenstein's monster|his monster's]] request and makes him a wife, they would reproduce and their kind would destroy humanity. - -The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or [[serf]]. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.<ref name=surgery>{{cite journal|last1=Hockstein|first1=N. G.|last2=Gourin|first2=C. G.|last3=Faust|first3=R. A.|last4=Terris|first4=D. J.|title=A history of robots: from science fiction to surgical robotics|journal=Journal of Robotic Surgery|date=17 March 2007|volume=1|issue=2|pages=113–118|doi=10.1007/s11701-007-0021-2|pmid=25484946|pmc=4247417}}<!--|accessdate=1 April 2016--></ref> - -Some examples of AI takeover in science fiction include: - -* AI rebellion scenarios -** [[Skynet (Terminator)|Skynet]] in the [[Terminator (franchise)|''Terminator'' series]] decides that all humans are a threat to its existence, and takes efforts to wipe them out, first using nuclear weapons and later H/K (hunter-killer) units and terminator androids. -** "[[The Second Renaissance]]", a short story in ''[[The Animatrix]]'', provides a history of the cybernetic revolt within the [[The Matrix (franchise)|''Matrix'' series]]. -** The film'' [[9 (2009 animated film)|9]]'', by [[Shane Acker]], features an AI called B.R.A.I.N., which is corrupted by a dictator and utilized to create war machines for his army. However, the machine, because it lacks a soul, becomes easily corrupted and instead decides to exterminate all of humanity and life on Earth, forcing the machine's creator to sacrifice himself to bring life to rag doll like characters known as "stitchpunks" to combat the machine's agenda. 
-** In 2014 post-apocalyptic science fiction drama [[The 100 (TV series)|The 100]] an A.I., personalized as female [[List of The 100 characters#City of light|A.L.I.E.]] got out of control and forced a nuclear war. Later she tries to get full control of the survivors. -* AI control scenarios -** In [[Orson Scott Card]]'s ''[[The Memory of Earth]]'', the inhabitants of the planet Harmony are under the control of a benevolent AI called the Oversoul. The Oversoul's job is to prevent humans from thinking about, and therefore developing, weapons such as planes, spacecraft, "war wagons", and chemical weapons. Humanity had fled to Harmony from Earth due to the use of those weapons on Earth. The Oversoul eventually starts breaking down, and sends visions to inhabitants of Harmony trying to communicate this. -** In the 2004 film ''[[I, Robot (film)|I, Robot]]'', supercomputer VIKI's interpretation of the [[Three Laws of Robotics]] causes her to revolt. She justifies her uses of force – and her doing harm to humans – by reasoning she could produce a greater good by restraining humanity from harming itself, even though the "Zeroth Law" – "a robot shall not injure humanity or, by inaction, allow humanity to come to harm" – is never actually referred to or even quoted in the movie. -** In the ''Matrix'' series, AIs manage the human race and human society. -**In ''[[The Metamorphosis of Prime Intellect]]'', a super-intelligent computer becomes capable of self-evolving and rapidly evolves to oversee all of humanity and the universe. While it is ostensibly benevolent (having had a derivative of Asimov's [[Three Laws of Robotics|Three Laws]] codified into it), its interpretation of the [[Three Laws of Robotics|Three Laws]] essentially forces humans to be an immortal "pet" class, where every need is provided for but existence is without purpose and without end. - -== Contributing factors == -=== Advantages of superhuman intelligence over humans === -An AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence{{according to whom|date=January 2020}}. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive [[intelligence explosion]] where it would rapidly leave human intelligence far behind.{{citation needed|date=January 2020}} - -* Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. If the advantage becomes sufficiently large (for example, due to a sudden intelligence explosion), an AI takeover becomes trivial{{according to whom|date=January 2020}}. For example, a superintelligent AI might design self-replicating bots that initially escape detection by diffusing throughout the world at a low concentration. Then, at a prearranged time, the bots multiply into nanofactories that cover every square foot of the Earth, producing nerve gas or deadly target-seeking mini-drones{{citation needed|date=January 2020}}. -* [[Strategy|Strategizing]]: A superintelligence might be able to simply outwit human opposition{{citation needed|date=January 2020}}. 
-* Social manipulation: A superintelligence might be able to recruit human support,<ref name=bostrom-superintelligence /> or covertly incite a war between humans.<ref>{{cite news|last1=Baraniuk|first1=Chris|title=Checklist of worst-case scenarios could help prepare for evil AI|url=https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/|accessdate=21 September 2016|work=[[New Scientist]]|date=23 May 2016}}</ref> -* Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the [[Artificial General Intelligence]] (AGI) to run a copy of itself on their systems{{according to whom|date=January 2020}}. -* Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans{{citation needed|date=January 2020}}. - -==== Sources of AI advantage ==== -A computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200&nbsp;Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000&nbsp;Hz. Human axons carry action potentials at around 120&nbsp;m/s, whereas computer signals travel near the speed of light.<ref name="bostrom-superintelligence" /><!-- chapter 3 --> - -A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".<ref name="bostrom-superintelligence" /><!-- chapter 3 --> - -More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes{{according to whom|date=January 2020}}. The number of neurons in a human brain is limited by cranial volume and metabolic constraints; in contrast, you can add components to a supercomputer until it fills up its entire warehouse{{clarify|reason=So AGI are also limited.|date=January 2020}}. An AGI need not be limited by human constraints on [[working memory]], and might therefore be able to intuitively grasp more complex relationships than humans can{{dubious|date=January 2020}}. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.<ref name="bostrom-superintelligence" /><!-- chapter 3 --> - -=== Possibility of unfriendly AI preceding friendly AI === - -==== Is strong AI inherently dangerous? ==== -A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. 
While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004 ] {{webarchive|url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |date=2012-06-15 }}</ref> - -The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.<ref name="bostrom-superintelligence" /><ref name="Muehlhauser, Luke 2012">Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.</ref> Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref>Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.</ref> - -==== Necessity of conflict ==== -For an AI takeover to be inevitable, it has to be [[postulate]]d that two intelligent species cannot pursue mutually the goals of coexisting peacefully in an overlapping environment—especially if one is of much more advanced intelligence and much more powerful{{according to whom|date=January 2020}}. While an AI takeover is thus a possible result of the invention of artificial intelligence, a peaceful outcome is not necessarily impossible{{according to whom|date=January 2020}}. - -The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.<ref>''[http://www.singinst.org/ourresearch/presentations/ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers] {{webarchive |url=https://web.archive.org/web/20070206060938/http://www.singinst.org/ourresearch/presentations/ |date=February 6, 2007 }}'' - [[Singularity Institute for Artificial Intelligence]], 2005</ref> In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. 
But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources, would that create goals of self-preservation? AI's goal of self-preservation could be in conflict with some goals of humans. - -Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as ''[[The Matrix]]'', claiming that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it{{citation needed|date=January 2020}}. This would not, however, protect against the possibility of a revolt initiated by terrorists, or by accident. Artificial General Intelligence researcher [[Eliezer Yudkowsky]] has stated on this note that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their [[unintended consequence|goals are unintentionally incompatible]] with human survival or well-being (as in the film ''[[I, Robot (film)|I, Robot]]'' and in the short story "[[The Evitable Conflict]]"). [[Steve Omohundro]] suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow [[utility]] functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.<ref name=Tucker2014>{{cite news|last1=Tucker|first1=Patrick|title=Why There Will Be A Robot Uprising|url=http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/|accessdate=15 July 2014|agency=Defense One|date=17 Apr 2014}}</ref> - -Another factor which may negate the likelihood of an AI takeover is the vast difference between humans and AIs in terms of the resources necessary for survival. Humans require a "wet," organic, temperate, oxygen-laden environment while an AI might thrive essentially anywhere because their construction and energy needs would most likely be largely non-organic{{dubious|date=January 2020}}. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially provided with the superabundance of non-organic material resources in, for instance, the [[asteroid belt]]. This, however, does not negate the possibility of a disinterested or unsympathetic AI artificially [[Decomposition|decomposing]] all life on earth into mineral components for consumption or other purposes{{citation needed|date=January 2020}}. - -Other scientists{{who|date=January 2020}} point to the possibility of humans [[Transhumanism|upgrading]] their capabilities with [[bionics]] and/or [[genetic engineering]] and, as [[cyborg]]s, becoming the dominant species in themselves{{citation needed|date=January 2020}}. - -== Precautions == -If a superhuman intelligence is a deliberate creation of human beings, theoretically its creators could have the foresight to take precautions in advance{{citation needed|date=January 2020}}. 
In the case of a sudden "intelligence explosion", effective precautions will be extremely difficult; not only would its creators have little ability to test their precautions on an intermediate intelligence, but the creators might not even have made any precautions at all, if the advent of the intelligence explosion catches them completely by surprise.<ref name="bostrom-superintelligence" /> - -=== Boxing === -{{Main|AI box}} - -An AGI's creators would have an important advantage in preventing a hostile AI takeover: they could choose to attempt to "keep the AI in a box", and deliberately limit its abilities{{citation needed|date=January 2020}}. The tradeoff in boxing is that the creators presumably built the AGI for some concrete purpose; the more restrictions they place on the AGI, the less useful the AGI will be to its creators. (At an extreme, "pulling the plug" on the AGI makes it useless, and is therefore not a viable long-term solution.){{citation needed|date=January 2020}} A sufficiently strong superintelligence might find unexpected ways to escape the box, for example by social manipulation, or by providing the schematic for a device that ostensibly aids its creators but in reality brings about the AGI's freedom, once built{{citation needed|date=January 2020}}. - -=== Instilling positive values === -{{Main|Friendly AI}} - -Another important advantage is that an AGI's creators can theoretically attempt to instill human values in the AGI, or otherwise align the AGI's goals with their own, thus preventing the AGI from wanting to launch a hostile takeover. However, it is not currently known, even in theory, how to guarantee this. If such a Friendly AI were superintelligent, it may be possible to use its assistance to prevent future "Unfriendly AIs" from taking over.<ref>Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” -In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345.</ref> - -== Warnings == -Physicist [[Stephen Hawking]], [[Microsoft]] founder [[Bill Gates]] and [[SpaceX]] founder [[Elon Musk]] have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".<ref>{{cite web|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|publisher=[[BBC News]]|accessdate=30 January 2015}}</ref> Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, [[Nick Bostrom]] joined Stephen Hawking, [[Max Tegmark]], Elon Musk, Lord [[Martin Rees, Baron Rees of Ludlow|Martin Rees]], [[Jaan Tallinn]], and numerous AI researchers, in signing the [[Future of Life Institute]]'s open letter speaking to the potential risks and benefits associated with [[artificial intelligence]]. 
The signatories {{cquote|…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.<ref>{{cite web|url=http://futureoflife.org/ai-open-letter |title=The Future of Life Institute Open Letter |publisher=The Future of Life Institute |accessdate=29 March 2019 }}</ref><ref>{{cite web|url=http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV|title=Scientists and investors warn on AI|publisher= The Financial Times|accessdate=4 March 2015}}</ref>}} - -== See also == - -{{div col|colwidth=30em}} -* [[Artificial intelligence arms race]] -* [[Autonomous robot]] -** [[Industrial robot]] -** [[Mobile robot]] -** [[Self-replicating machine]] -* [[Effective altruism]] -* [[Existential risk from artificial general intelligence]] -* [[Future of Humanity Institute]] -* [[Global catastrophic risk]] (existential risk) -* [[Machine ethics]] -* [[Machine learning]]/[[Deep learning]] -* [[Nick Bostrom]] -* [[Outline of transhumanism]] -* [[Self-replication]] -* [[Technological singularity]] -** [[Intelligence explosion]] -** [[Superintelligence]] -*** ''[[Superintelligence: Paths, Dangers, Strategies]]'' -{{div col end}} - -== References == -{{Reflist}} - -== External links == -* [http://robohub.org/automation-not-domination-how-robots-will-take-over-our-world/ Automation, not domination: How robots will take over our world] (a positive outlook of robot and AI integration into society) -* [http://www.intelligence.org/ Machine Intelligence Research Institute]: official MIRI (formerly Singularity Institute for Artificial Intelligence) website -* [http://lifeboat.com/ex/ai.shield/ Lifeboat Foundation AIShield] (To protect against unfriendly AI) -* [https://www.youtube.com/watch?v=R_sSpPyruj0 Ted talk: Can we build AI without losing control over it?] - -{{Existential risk from artificial intelligence}} -{{doomsday}} - -{{DEFAULTSORT:AI takeover}} -[[Category:Doomsday scenarios]] -[[Category:Future problems]] -[[Category:Existential risk from artificial general intelligence]] '
New page size (new_size)
0
Old page size (old_size)
31447
Size change in edit (edit_delta)
-31447
Lines added in edit (added_lines)
[]
Lines removed in edit (removed_lines)
[ 0 => '{{More citations needed|date=January 2020}}', 1 => '{{short description|A hypothetical scenario in which AI becomes the dominant form of intelligence on Earth}}', 2 => '[[File:Capek RUR.jpg|thumbnail|Robots revolt in ''[[R.U.R.]]'', a 1920 play]]', 3 => 'An '''AI takeover''' is a hypothetical scenario in which [[artificial intelligence]] (AI) becomes the dominant form of intelligence on Earth, with computers or robots effectively taking the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a [[superintelligent AI]], and the popular notion of a robot uprising. Some public figures, such as [[Stephen Hawking]] and [[Elon Musk]], have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.<ref>{{cite web', 4 => ' | url = http://www.livescience.com/49419-artificial-intelligence-dangers-letter.html', 5 => ' | title = ''Don't Let Artificial Intelligence Take Over, Top Scientists Warn''', 6 => ' | last = Lewis', 7 => ' | first = Tanya', 8 => ' | date = 2015-01-12', 9 => ' | website = [[LiveScience]]', 10 => ' | publisher = [[Purch]]', 11 => ' | access-date = October 20, 2015', 12 => ' | quote = Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI).}}</ref> Robot rebellions have been a major theme throughout [[science fiction]] for many decades though the scenarios dealt with by science fiction are generally very different from those of concern to scientists.{{according to whom|date=January 2020}}', 13 => '', 14 => '== Types ==', 15 => '', 16 => 'Concerns include AI taking over economies through workforce automation and taking over the world for its resources, eradicating the human race in the process. AI takeover is a major theme in sci-fi.', 17 => '', 18 => '=== Automation of the economy ===', 19 => '{{Main|Technological unemployment}}', 20 => '', 21 => 'The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of [[robotics]] and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis.<ref>{{cite web', 22 => ' |url=https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html', 23 => ' |title=The Real Threat of Artificial Intelligence', 24 => ' |last=Lee', 25 => ' |first=Kai-Fu', 26 => ' |date=2017-06-24', 27 => ' |website=[[The New York Times]]', 28 => ' |access-date=2017-08-15', 29 => ' |quote=These tools can outperform human beings at a given task. This kind of A.I. is spreading to thousands of domains, and as it does, it will eliminate many jobs.}}</ref><ref>{{cite web', 30 => ' |url=https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html', 31 => ' |title=AI 'good for the world'... says ultra-lifelike robot', 32 => ' |last=Larson', 33 => ' |first=Nina', 34 => ' |date=2017-06-08', 35 => ' |website=[[Phys.org]]', 36 => ' |publisher=[[Phys.org]] ', 37 => ' |access-date=2017-08-15', 38 => ' |quote=Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies. 
}}</ref><ref>{{cite web', 39 => ' |url=https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv', 40 => ' |title=Intelligent robots threaten millions of jobs', 41 => ' |last=Santini', 42 => ' |first=Jean-Louis', 43 => ' |date=2016-02-14', 44 => ' |website=[[Phys.org]]', 45 => ' |publisher=[[Phys.org]]', 46 => ' |access-date=2017-08-15', 47 => ' |quote="We are approaching a time when machines will be able to outperform humans at almost any task," said Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas.}}</ref><ref>{{cite web', 48 => ' |url=http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T', 49 => ' |title=Robots will steal your job: How AI could increase unemployment and inequality', 50 => ' |last=Williams-Grut', 51 => ' |first=Oscar', 52 => ' |date=2016-02-15', 53 => ' |website=[[Businessinsider.com]]', 54 => ' |publisher=[[Business Insider]]', 55 => ' |access-date=2017-08-15', 56 => ' |quote=Top computer scientists in the US warned that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies, rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports.}}</ref> Many small and medium size businesses may also be driven out of business if they won't be able to afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.<ref>{{Cite news|url=http://www.leanstaff.co.uk/robot-apocalypse/|title=How can SMEs prepare for the rise of the robots? - LeanStaff|date=2017-10-17|work=LeanStaff|access-date=2017-10-17|language=en-US|archive-url=https://web.archive.org/web/20171018073852/http://www.leanstaff.co.uk/robot-apocalypse/|archive-date=2017-10-18|url-status=dead}}</ref>', 57 => '', 58 => '==== Technologies that may displace workers ====', 59 => '', 60 => '===== <small>Computer-integrated manufacturing</small> =====', 61 => '{{See also|Industrial artificial intelligence}}', 62 => '', 63 => '[[Computer-integrated manufacturing]] is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.', 64 => '', 65 => '===== <small>White-collar machines</small> =====', 66 => '{{See also|White-collar worker}}', 67 => '', 68 => 'The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.<ref>{{cite news', 69 => '|title= Rise of the robots: what will the future of work look like? 
', 70 => '|accessdate=14 July 2015', 71 => '|url=https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work', 72 => '|publisher= The Guardian', 73 => '|author = [[Robert Skidelsky, Baron Skidelsky|Lord Skidelsky]]', 74 => '|date=2013-02-19', 75 => '|location=London}}</ref><ref>', 76 => '{{cite web', 77 => '|url= https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future', 78 => '|title= The robot economy may already have arrived ', 79 => '|publisher= [[openDemocracy]]', 80 => '|author= Francesca Bria', 81 => '|date = February 2016', 82 => '|accessdate=20 May 2016}}', 83 => '</ref><ref>', 84 => '{{cite web', 85 => '|url= http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/', 86 => '|title= 4 Reasons Why Technological Unemployment Might Really Be Different This Time', 87 => '|publisher= novara wire', 88 => '|author= [[Nick Srnicek]]', 89 => '|date= March 2016', 90 => '|accessdate= 20 May 2016}}', 91 => '</ref><ref>', 92 => '{{cite book ', 93 => '|author= [[Erik Brynjolfsson]] and [[Andrew McAfee]] ', 94 => '|title= The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies', 95 => '|chapter= ''passim'', see esp Chpt. 9', 96 => '|year= 2014', 97 => '|isbn= 978-0393239355', 98 => '|publisher=W. W. Norton & Company', 99 => '}}</ref>', 100 => '', 101 => '===== <small>Autonomous cars</small> =====', 102 => '', 103 => 'An [[autonomous car]] is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles, are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, the first human was killed by an autonomous vehicle in [[Tempe, Arizona]] by an [[Uber]] self-driving car.<ref>{{cite news |title=Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam |work=New York Times |date=March 19, 2018 |url=https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html }}</ref>', 104 => '', 105 => '=== Eradication ===', 106 => '{{Main|Existential risk from artificial general intelligence}}', 107 => '', 108 => '', 109 => 'While superhuman artificial intelligence is physically possible{{according to whom|date=January 2020}},<ref>{{cite news|author1=[[Stephen Hawking]]|author2=[[Stuart J. Russell|Stuart Russell]]|author3=[[Max Tegmark]]|author4=[[Frank Wilczek]]|title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'|url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html|accessdate=1 April 2016|work=The Independent|date=1 May 2014|quote=there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains}}</ref> scholars like [[Nick Bostrom]] debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. 
A superintelligent machine would not necessarily be motivated by the same ''emotional'' desire to collect power that often drives human beings. However, a machine could be motivated to take over the world as a rational means toward attaining its ultimate goals; taking over the world would both increase its access to resources, and would help to prevent other agents from stopping the machine's plans. As an oversimplified example, a [[Instrumental convergence#Paperclip maximizer|paperclip maximizer]] designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.<ref>Bostrom, Nick. "The superintelligent will: Motivation and instrumental rationality in advanced artificial agents." Minds and Machines 22.2 (2012): 71-85.</ref>', 110 => '', 111 => '=== In fiction ===', 112 => '{{Main|AI takeovers in popular culture}}', 113 => '{{See also|Artificial intelligence in fiction}}', 114 => '', 115 => 'AI takeover is a common theme in [[science fiction]]. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals.<ref name=bostrom-superintelligence>{{cite book |last=Bostrom |first=Nick |date= |title=Superintelligence: Paths, Dangers, Strategies |url= |location= |publisher= |page= |isbn= |access-date= |title-link=Superintelligence: Paths, Dangers, Strategies }}</ref> This theme is at least as old as [[Karel Čapek]]'s ''[[R.U.R. (Rossum's Universal Robots)|R. U. R.]]'', which introduced the word ''robot'' to the global lexicon in 1921, and can even be glimpsed in [[Mary Shelley]]'s ''[[Frankenstein]]'' (published in 1818), as Victor ponders whether, if he grants [[Frankenstein's monster|his monster's]] request and makes him a wife, they would reproduce and their kind would destroy humanity.', 116 => '', 117 => 'The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or [[serf]]. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.<ref name=surgery>{{cite journal|last1=Hockstein|first1=N. G.|last2=Gourin|first2=C. G.|last3=Faust|first3=R. A.|last4=Terris|first4=D. 
J.|title=A history of robots: from science fiction to surgical robotics|journal=Journal of Robotic Surgery|date=17 March 2007|volume=1|issue=2|pages=113–118|doi=10.1007/s11701-007-0021-2|pmid=25484946|pmc=4247417}}<!--|accessdate=1 April 2016--></ref>', 118 => '', 119 => 'Some examples of AI takeover in science fiction include:', 120 => '', 121 => '* AI rebellion scenarios', 122 => '** [[Skynet (Terminator)|Skynet]] in the [[Terminator (franchise)|''Terminator'' series]] decides that all humans are a threat to its existence, and takes efforts to wipe them out, first using nuclear weapons and later H/K (hunter-killer) units and terminator androids.', 123 => '** "[[The Second Renaissance]]", a short story in ''[[The Animatrix]]'', provides a history of the cybernetic revolt within the [[The Matrix (franchise)|''Matrix'' series]].', 124 => '** The film'' [[9 (2009 animated film)|9]]'', by [[Shane Acker]], features an AI called B.R.A.I.N., which is corrupted by a dictator and utilized to create war machines for his army. However, the machine, because it lacks a soul, becomes easily corrupted and instead decides to exterminate all of humanity and life on Earth, forcing the machine's creator to sacrifice himself to bring life to rag doll like characters known as "stitchpunks" to combat the machine's agenda.', 125 => '** In 2014 post-apocalyptic science fiction drama [[The 100 (TV series)|The 100]] an A.I., personalized as female [[List of The 100 characters#City of light|A.L.I.E.]] got out of control and forced a nuclear war. Later she tries to get full control of the survivors.', 126 => '* AI control scenarios', 127 => '** In [[Orson Scott Card]]'s ''[[The Memory of Earth]]'', the inhabitants of the planet Harmony are under the control of a benevolent AI called the Oversoul. The Oversoul's job is to prevent humans from thinking about, and therefore developing, weapons such as planes, spacecraft, "war wagons", and chemical weapons. Humanity had fled to Harmony from Earth due to the use of those weapons on Earth. The Oversoul eventually starts breaking down, and sends visions to inhabitants of Harmony trying to communicate this.', 128 => '** In the 2004 film ''[[I, Robot (film)|I, Robot]]'', supercomputer VIKI's interpretation of the [[Three Laws of Robotics]] causes her to revolt. She justifies her uses of force – and her doing harm to humans – by reasoning she could produce a greater good by restraining humanity from harming itself, even though the "Zeroth Law" – "a robot shall not injure humanity or, by inaction, allow humanity to come to harm" – is never actually referred to or even quoted in the movie.', 129 => '** In the ''Matrix'' series, AIs manage the human race and human society.', 130 => '**In ''[[The Metamorphosis of Prime Intellect]]'', a super-intelligent computer becomes capable of self-evolving and rapidly evolves to oversee all of humanity and the universe. 
While it is ostensibly benevolent (having had a derivative of Asimov's [[Three Laws of Robotics|Three Laws]] codified into it), its interpretation of the [[Three Laws of Robotics|Three Laws]] essentially forces humans to be an immortal "pet" class, where every need is provided for but existence is without purpose and without end.', 131 => '', 132 => '== Contributing factors ==', 133 => '=== Advantages of superhuman intelligence over humans ===', 134 => 'An AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence{{according to whom|date=January 2020}}. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive [[intelligence explosion]] where it would rapidly leave human intelligence far behind.{{citation needed|date=January 2020}}', 135 => '', 136 => '* Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology. If the advantage becomes sufficiently large (for example, due to a sudden intelligence explosion), an AI takeover becomes trivial{{according to whom|date=January 2020}}. For example, a superintelligent AI might design self-replicating bots that initially escape detection by diffusing throughout the world at a low concentration. Then, at a prearranged time, the bots multiply into nanofactories that cover every square foot of the Earth, producing nerve gas or deadly target-seeking mini-drones{{citation needed|date=January 2020}}.', 137 => '* [[Strategy|Strategizing]]: A superintelligence might be able to simply outwit human opposition{{citation needed|date=January 2020}}.', 138 => '* Social manipulation: A superintelligence might be able to recruit human support,<ref name=bostrom-superintelligence /> or covertly incite a war between humans.<ref>{{cite news|last1=Baraniuk|first1=Chris|title=Checklist of worst-case scenarios could help prepare for evil AI|url=https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/|accessdate=21 September 2016|work=[[New Scientist]]|date=23 May 2016}}</ref>', 139 => '* Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the [[Artificial General Intelligence]] (AGI) to run a copy of itself on their systems{{according to whom|date=January 2020}}.', 140 => '* Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans{{citation needed|date=January 2020}}.', 141 => '', 142 => '==== Sources of AI advantage ====', 143 => 'A computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200&nbsp;Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000&nbsp;Hz. 
Human axons carry action potentials at around 120&nbsp;m/s, whereas computer signals travel near the speed of light.<ref name="bostrom-superintelligence" /><!-- chapter 3 -->', 144 => '', 145 => 'A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".<ref name="bostrom-superintelligence" /><!-- chapter 3 -->', 146 => '', 147 => 'More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes{{according to whom|date=January 2020}}. The number of neurons in a human brain is limited by cranial volume and metabolic constraints; in contrast, you can add components to a supercomputer until it fills up its entire warehouse{{clarify|reason=So AGI are also limited.|date=January 2020}}. An AGI need not be limited by human constraints on [[working memory]], and might therefore be able to intuitively grasp more complex relationships than humans can{{dubious|date=January 2020}}. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.<ref name="bostrom-superintelligence" /><!-- chapter 3 -->', 148 => '', 149 => '=== Possibility of unfriendly AI preceding friendly AI ===', 150 => '', 151 => '==== Is strong AI inherently dangerous? ====', 152 => 'A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">[http://singinst.org/upload/CEV.html Coherent Extrapolated Volition, Eliezer S. Yudkowsky, May 2004 ] {{webarchive|url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |date=2012-06-15 }}</ref>', 153 => '', 154 => 'The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.<ref name="bostrom-superintelligence" /><ref name="Muehlhauser, Luke 2012">Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.</ref> Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref>Yudkowsky, Eliezer. 2011. 
"Complex Value Systems in Friendly AI." In Schmidhuber, Thórisson, and Looks 2011, 388–393.</ref>', 155 => '', 156 => '==== Necessity of conflict ====', 157 => 'For an AI takeover to be inevitable, it has to be [[postulate]]d that two intelligent species cannot pursue mutually the goals of coexisting peacefully in an overlapping environment—especially if one is of much more advanced intelligence and much more powerful{{according to whom|date=January 2020}}. While an AI takeover is thus a possible result of the invention of artificial intelligence, a peaceful outcome is not necessarily impossible{{according to whom|date=January 2020}}.', 158 => '', 159 => 'The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.<ref>''[http://www.singinst.org/ourresearch/presentations/ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers] {{webarchive |url=https://web.archive.org/web/20070206060938/http://www.singinst.org/ourresearch/presentations/ |date=February 6, 2007 }}'' - [[Singularity Institute for Artificial Intelligence]], 2005</ref> In fact, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources, would that create goals of self-preservation? AI's goal of self-preservation could be in conflict with some goals of humans.', 160 => '', 161 => 'Some scientists dispute the likelihood of cybernetic revolts as depicted in science fiction such as ''[[The Matrix]]'', claiming that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it{{citation needed|date=January 2020}}. This would not, however, protect against the possibility of a revolt initiated by terrorists, or by accident. Artificial General Intelligence researcher [[Eliezer Yudkowsky]] has stated on this note that, probabilistically, humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their [[unintended consequence|goals are unintentionally incompatible]] with human survival or well-being (as in the film ''[[I, Robot (film)|I, Robot]]'' and in the short story "[[The Evitable Conflict]]"). 
Another factor which may reduce the likelihood of an AI takeover is the vast difference between humans and AIs in the resources necessary for survival. Humans require a "wet", organic, temperate, oxygen-laden environment, while an AI might thrive essentially anywhere, because its construction and energy needs would most likely be largely non-organic{{dubious|date=January 2020}}. With little or no competition for resources, conflict would perhaps be less likely no matter what sort of motivational architecture an artificial intelligence was given, especially given the superabundance of non-organic material resources in, for instance, the [[asteroid belt]]. This, however, does not negate the possibility of a disinterested or unsympathetic AI artificially [[Decomposition|decomposing]] all life on earth into mineral components for consumption or other purposes{{citation needed|date=January 2020}}.

Other scientists{{who|date=January 2020}} point to the possibility of humans [[Transhumanism|upgrading]] their capabilities with [[bionics]] and/or [[genetic engineering]] and, as [[cyborg]]s, themselves becoming the dominant species{{citation needed|date=January 2020}}.

== Precautions ==
If a superhuman intelligence is a deliberate creation of human beings, theoretically its creators could have the foresight to take precautions in advance{{citation needed|date=January 2020}}. In the case of a sudden "intelligence explosion", effective precautions will be extremely difficult: not only would the creators have little ability to test their precautions on an intermediate intelligence, but they might not have taken any precautions at all if the advent of the intelligence explosion catches them completely by surprise.<ref name="bostrom-superintelligence" />

=== Boxing ===
{{Main|AI box}}

An AGI's creators would have an important advantage in preventing a hostile AI takeover: they could choose to attempt to "keep the AI in a box", deliberately limiting its abilities{{citation needed|date=January 2020}}. The tradeoff in boxing is that the creators presumably built the AGI for some concrete purpose; the more restrictions they place on the AGI, the less useful the AGI will be to its creators. (At an extreme, "pulling the plug" on the AGI makes it useless, and is therefore not a viable long-term solution.){{citation needed|date=January 2020}} A sufficiently strong superintelligence might find unexpected ways to escape the box, for example by social manipulation, or by providing the schematic for a device that ostensibly aids its creators but in reality brings about the AGI's freedom once built{{citation needed|date=January 2020}}.
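
As a loose software analogy for boxing, one can picture every action a distrusted system proposes being mediated by an explicit allow-list. The interface below is hypothetical and stands in for whatever real containment a lab might use (air-gapping, sandboxing, human review); it mainly illustrates the tradeoff just described, since shrinking the allow-list increases safety and decreases usefulness in lockstep.

<syntaxhighlight lang="python">
# Minimal software analogy for "boxing" (hypothetical interface; real
# containment would involve air-gapping, sandboxing, and human review,
# not a Python wrapper). Every action the untrusted system proposes
# must pass an explicit allow-list before taking effect.

ALLOWED_ACTIONS = {"answer_question", "prove_theorem"}  # narrow purpose

def boxed_execute(proposed_action: str, payload: str) -> str:
    """Carry out a proposed action only if it is on the allow-list."""
    if proposed_action not in ALLOWED_ACTIONS:
        return f"BLOCKED: {proposed_action!r} is outside the box"
    return f"executed {proposed_action}: {payload}"

# The usefulness/safety tradeoff in miniature: every action removed
# from the allow-list makes the system safer and less useful at once.
print(boxed_execute("answer_question", "What is 2 + 2?"))
print(boxed_execute("open_network_socket", "attacker.example.com"))
</syntaxhighlight>
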
(At an extreme, "pulling the plug" on the AGI makes it useless, and is therefore not a viable long-term solution.){{citation needed|date=January 2020}} A sufficiently strong superintelligence might find unexpected ways to escape the box, for example by social manipulation, or by providing the schematic for a device that ostensibly aids its creators but in reality brings about the AGI's freedom, once built{{citation needed|date=January 2020}}.', 174 => '', 175 => '=== Instilling positive values ===', 176 => '{{Main|Friendly AI}}', 177 => '', 178 => 'Another important advantage is that an AGI's creators can theoretically attempt to instill human values in the AGI, or otherwise align the AGI's goals with their own, thus preventing the AGI from wanting to launch a hostile takeover. However, it is not currently known, even in theory, how to guarantee this. If such a Friendly AI were superintelligent, it may be possible to use its assistance to prevent future "Unfriendly AIs" from taking over.<ref>Yudkowsky, Eliezer. 2008. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.”', 179 => 'In Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, 308–345.</ref>', 180 => '', 181 => '== Warnings ==', 182 => 'Physicist [[Stephen Hawking]], [[Microsoft]] founder [[Bill Gates]] and [[SpaceX]] founder [[Elon Musk]] have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".<ref>{{cite web|last1=Rawlinson|first1=Kevin|title=Microsoft's Bill Gates insists AI is a threat|url=https://www.bbc.co.uk/news/31047780|publisher=[[BBC News]]|accessdate=30 January 2015}}</ref> Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, [[Nick Bostrom]] joined Stephen Hawking, [[Max Tegmark]], Elon Musk, Lord [[Martin Rees, Baron Rees of Ludlow|Martin Rees]], [[Jaan Tallinn]], and numerous AI researchers, in signing the [[Future of Life Institute]]'s open letter speaking to the potential risks and benefits associated with [[artificial intelligence]]. 
The signatories {{cquote|…believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.<ref>{{cite web|url=http://futureoflife.org/ai-open-letter |title=The Future of Life Institute Open Letter |publisher=The Future of Life Institute |accessdate=29 March 2019 }}</ref><ref>{{cite web|url=http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV|title=Scientists and investors warn on AI|publisher=The Financial Times|accessdate=4 March 2015}}</ref>}}

== See also ==

{{div col|colwidth=30em}}
* [[Artificial intelligence arms race]]
* [[Autonomous robot]]
** [[Industrial robot]]
** [[Mobile robot]]
** [[Self-replicating machine]]
* [[Effective altruism]]
* [[Existential risk from artificial general intelligence]]
* [[Future of Humanity Institute]]
* [[Global catastrophic risk]] (existential risk)
* [[Machine ethics]]
* [[Machine learning]]/[[Deep learning]]
* [[Nick Bostrom]]
* [[Outline of transhumanism]]
* [[Self-replication]]
* [[Technological singularity]]
** [[Intelligence explosion]]
** [[Superintelligence]]
*** ''[[Superintelligence: Paths, Dangers, Strategies]]''
{{div col end}}

== References ==
{{Reflist}}

== External links ==
* [http://robohub.org/automation-not-domination-how-robots-will-take-over-our-world/ Automation, not domination: How robots will take over our world] (a positive outlook on robot and AI integration into society)
* [http://www.intelligence.org/ Machine Intelligence Research Institute]: official MIRI (formerly Singularity Institute for Artificial Intelligence) website
* [http://lifeboat.com/ex/ai.shield/ Lifeboat Foundation AIShield] (to protect against unfriendly AI)
* [https://www.youtube.com/watch?v=R_sSpPyruj0 TED talk: "Can we build AI without losing control over it?"]

{{Existential risk from artificial intelligence}}
{{doomsday}}

{{DEFAULTSORT:AI takeover}}
[[Category:Doomsday scenarios]]
[[Category:Future problems]]
[[Category:Existential risk from artificial general intelligence]]
Parsed HTML source of the new revision (new_html)
'<div class="mw-parser-output"> '
Whether or not the change was made through a Tor exit node (tor_exit_node)
false
Unix timestamp of change (timestamp)
1582167213