Examine individual changes
Appearance
This page allows you to examine the variables generated by the Edit Filter for an individual change.
Variables generated for this change
Variable | Value |
---|---|
Edit count of the user (user_editcount ) | null |
Name of the user account (user_name ) | '175.35.238.144' |
Age of the user account (user_age ) | 0 |
Groups (including implicit) the user is in (user_groups ) | [
0 => '*'
] |
Rights that the user has (user_rights ) | [
0 => 'createaccount',
1 => 'read',
2 => 'edit',
3 => 'createtalk',
4 => 'writeapi',
5 => 'viewmywatchlist',
6 => 'editmywatchlist',
7 => 'viewmyprivateinfo',
8 => 'editmyprivateinfo',
9 => 'editmyoptions',
10 => 'abusefilter-log-detail',
11 => 'urlshortener-create-url',
12 => 'centralauth-merge',
13 => 'abusefilter-view',
14 => 'abusefilter-log',
15 => 'vipsscaler-test'
] |
Whether the user is editing from mobile app (user_app ) | false |
Whether or not a user is editing through the mobile interface (user_mobile ) | true |
Page ID (page_id ) | 813176 |
Page namespace (page_namespace ) | 0 |
Page title without namespace (page_title ) | 'AI takeover' |
Full page title (page_prefixedtitle ) | 'AI takeover' |
Edit protection level of the page (page_restrictions_edit ) | [] |
Last ten users to contribute to the page (page_recent_contributors ) | [
0 => 'AnomieBOT',
1 => 'Moorlock',
2 => '109.250.49.249',
3 => 'Omlakhchander122',
4 => 'HMSLavender',
5 => '50.46.228.168',
6 => 'Materialscientist',
7 => 'Adtv news',
8 => 'Yello231',
9 => 'Citation bot'
] |
Page age in seconds (page_age ) | 592635267 |
Action (action ) | 'edit' |
Edit summary/reason (summary ) | '/* Possibility of unfriendly AI preceding friendly AI */added content
' |
Old content model (old_content_model ) | 'wikitext' |
New content model (new_content_model ) | 'wikitext' |
Old page wikitext, before the edit (old_wikitext ) | '{{Short description|Hypothetical artificial intelligence scenario}}
[[File:Capek RUR.jpg|thumbnail|Robots revolt in ''[[R.U.R.]]'', a 1920 play]]
{{Artificial intelligence}}
An '''AI takeover''' is a hypothetical scenario in which an [[artificial intelligence]] (AI) becomes the dominant form of intelligence on Earth, as [[computer program]]s or [[robot]]s effectively take the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a [[superintelligent AI]], and the popular notion of a '''robot uprising'''. Stories of AI takeovers [[AI takeovers in popular culture|are very popular]] throughout [[science-fiction]]. Some public figures, such as [[Stephen Hawking]] and [[Elon Musk]], have advocated research into [[AI control problem|precautionary measures]] to ensure future superintelligent machines remain under human control.<ref>{{Cite web |last=Lewis |first=Tanya |date=2015-01-12 |title=''Don't Let Artificial Intelligence Take Over, Top Scientists Warn'' |url=http://www.livescience.com/49419-artificial-intelligence-dangers-letter.html |access-date=October 20, 2015 |website=[[LiveScience]] |publisher=[[Purch]] |quote=Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI). |archive-date=2018-03-08 |archive-url=https://web.archive.org/web/20180308100411/https://www.livescience.com/49419-artificial-intelligence-dangers-letter.html |url-status=live }}</ref>
== Types ==
=== Automation of the economy ===
{{Main|Technological unemployment}}
The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of [[robotics]] and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis.<ref>{{Cite web |last=Lee |first=Kai-Fu |date=2017-06-24 |title=The Real Threat of Artificial Intelligence |url=https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html |access-date=2017-08-15 |website=[[The New York Times]] |quote=These tools can outperform human beings at a given task. This kind of A.I. is spreading to thousands of domains, and as it does, it will eliminate many jobs. |archive-date=2020-04-17 |archive-url=https://web.archive.org/web/20200417183307/https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html |url-status=live }}</ref><ref>{{Cite web |last=Larson |first=Nina |date=2017-06-08 |title=AI 'good for the world'... says ultra-lifelike robot |url=https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html |access-date=2017-08-15 |website=[[Phys.org]] |quote=Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies. |archive-date=2020-03-06 |archive-url=https://web.archive.org/web/20200306021915/https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html |url-status=live }}</ref><ref>{{Cite web |last=Santini |first=Jean-Louis |date=2016-02-14 |title=Intelligent robots threaten millions of jobs |url=https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv |access-date=2017-08-15 |website=[[Phys.org]] |quote="We are approaching a time when machines will be able to outperform humans at almost any task," said Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas. |archive-date=2019-01-01 |archive-url=https://web.archive.org/web/20190101014340/https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv |url-status=live }}</ref><ref>{{Cite web |last=Williams-Grut |first=Oscar |date=2016-02-15 |title=Robots will steal your job: How AI could increase unemployment and inequality |url=http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T |access-date=2017-08-15 |website=[[Businessinsider.com]] |publisher=[[Business Insider]] |quote=Top computer scientists in the US warned that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies, rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports. |archive-date=2017-08-16 |archive-url=https://web.archive.org/web/20170816061548/http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T |url-status=live }}</ref> Many small and medium size businesses may also be driven out of business if they cannot afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.<ref>{{Cite news |date=2017-10-17 |title=How can SMEs prepare for the rise of the robots? |language=en-US |work=LeanStaff |url=http://www.leanstaff.co.uk/robot-apocalypse/ |url-status=dead |access-date=2017-10-17 |archive-url=https://web.archive.org/web/20171018073852/http://www.leanstaff.co.uk/robot-apocalypse/ |archive-date=2017-10-18}}</ref>
==== Technologies that may displace workers ====
AI technologies have been widely adopted in recent years. While these technologies have replaced many traditional workers, they also create new opportunities. Industries that are most susceptible to AI takeover include transportation, retail, and military. AI military technologies, for example, allow soldiers to work remotely without any risk of injury. Author Dave Bond argues that as AI technologies continue to develop and expand, the relationship between humans and robots will change; they will become closely integrated in several aspects of life. AI will likely displace some workers while creating opportunities for new jobs in other sectors, especially in fields where tasks are repeatable.<ref>{{Cite journal|last=Frank|first=Morgan|date=2019-03-25|title=Toward understanding the impact of artificial intelligence on labor|journal=Proceedings of the National Academy of Sciences of the United States of America|volume=116|issue=14|pages=6531–6539|doi=10.1073/pnas.1900949116|pmid=30910965|pmc=6452673|doi-access=free}}</ref><ref>{{Cite book|last=Bond|first=Dave|title=Artificial Intelligence|year=2017|pages=67–69}}</ref>
==== Computer-integrated manufacturing ====
{{See also|Artificial intelligence in industry}}
[[Computer-integrated manufacturing]] uses computers to control the production process. This allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.
==== White-collar machines ====
{{See also|White-collar worker}}
The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and {{clarify|text=low level|date=March 2023}} journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.<ref>{{Cite news |last=Skidelsky |first=Robert |author-link=Robert Skidelsky, Baron Skidelsky |date=2013-02-19 |title=Rise of the robots: what will the future of work look like? |work=The Guardian |location=London |url=https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work |access-date=14 July 2015 |archive-date=2019-04-03 |archive-url=https://web.archive.org/web/20190403203821/https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work |url-status=live }}</ref><ref>{{Cite web |last=Bria |first=Francesca |date=February 2016 |title=The robot economy may already have arrived |url=https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future |access-date=20 May 2016 |publisher=[[openDemocracy]] |archive-date=17 May 2016 |archive-url=https://web.archive.org/web/20160517215840/https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future |url-status=dead }}</ref><ref>{{Cite web |last=Srnicek |first=Nick |author-link=Nick Srnicek |date=March 2016 |title=4 Reasons Why Technological Unemployment Might Really Be Different This Time |url=http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/ |url-status=dead |archive-url=https://web.archive.org/web/20160625161447/http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/ |archive-date=25 June 2016 |access-date=20 May 2016 |publisher=novara wire}}</ref><ref>{{Cite book |last1=Brynjolfsson |first1=Erik |title=The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies |last2=McAfee |first2=Andrew |publisher=W. W. Norton & Company |year=2014 |isbn=978-0393239355 |chapter=''passim'', see esp Chpt. 9}}</ref>
==== Autonomous cars ====
An [[Self-driving car|autonomous car]] is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who a moment's notice can take control of the vehicle. Among the obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, [[Death of Elaine Herzberg|the first human was killed]] by an autonomous vehicle in [[Tempe, Arizona]] by an [[Uber]] self-driving car.<ref>{{Cite news |last=Wakabayashi |first=Daisuke |date=March 19, 2018 |title=Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam |work=New York Times |url=https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html |access-date=March 23, 2018 |archive-date=April 21, 2020 |archive-url=https://web.archive.org/web/20200421221918/https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html |url-status=live }}</ref>
=== Eradication ===
{{Main|Existential risk from artificial general intelligence}}
Scientists such as [[Stephen Hawking]] are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".<ref>{{Cite news |last1=Hawking |first1=Stephen |author2-link=Stuart J. Russell |last2=Stuart Russell |last3=Max Tegmark |last4=Frank Wilczek |date=1 May 2014 |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?' |work=The Independent |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html |archive-url=https://web.archive.org/web/20151002023652/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html |archive-date=2015-10-02 |url-access=limited |url-status=live |access-date=1 April 2016|author3-link=Max Tegmark |author4-link=Frank Wilczek }}</ref><ref>{{cite book | last1=Müller | first1=Vincent C. | author-link1=Vincent C. Müller | last2=Bostrom | first2=Nick | author-link2=Nick Bostrom | title=Fundamental Issues of Artificial Intelligence | chapter=Future Progress in Artificial Intelligence: A Survey of Expert Opinion | publisher=Springer | year=2016 | isbn=978-3-319-26483-7 | doi=10.1007/978-3-319-26485-1_33 | pages=555–572 | chapter-url=https://nickbostrom.com/papers/survey.pdf | quote=AI systems will... reach overall human ability... very likely (with 90% probability) by 2075. From reaching human ability, it will move on to superintelligence within 30 years (75%)... So, (most of the AI experts responding to the surveys) think that superintelligence is likely to come in a few decades... | access-date=2022-06-16 | archive-date=2022-05-31 | archive-url=https://web.archive.org/web/20220531142709/https://nickbostrom.com/papers/survey.pdf | url-status=live }}</ref> Scholars like [[Nick Bostrom]] debate how far off superhuman intelligence is, and whether it poses a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same ''emotional'' desire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. As an oversimplified example, a [[Instrumental convergence#Paperclip maximizer|paperclip maximizer]] designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.<ref>{{cite journal | last=Bostrom | first=Nick | title=The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents | journal=Minds and Machines | publisher=Springer | volume=22 | issue=2 | year=2012 | doi=10.1007/s11023-012-9281-3 | pages=71–85 | s2cid=254835485 | url=https://nickbostrom.com/superintelligentwill.pdf | access-date=2022-06-16 | archive-date=2022-07-09 | archive-url=https://web.archive.org/web/20220709032134/https://nickbostrom.com/superintelligentwill.pdf | url-status=live }}</ref>
== In fiction ==
{{Main|AI takeovers in popular culture}}
{{See also|Artificial intelligence in fiction|Self-replicating machines in fiction}}
AI takeover is a common theme in [[science fiction]]. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing {{clarify|text=arbitrary|date=March 2023}} goals.<ref name="bostrom-superintelligence">{{Cite book |last=Bostrom |first=Nick |title=Superintelligence: Paths, Dangers, Strategies |title-link=Superintelligence: Paths, Dangers, Strategies}}</ref> The idea is seen in [[Karel Čapek]]'s ''[[R.U.R.]]'', which introduced the word ''robot'' in 1921,<ref>{{Cite news |date=22 April 2011 |title=The Origin Of The Word 'Robot' |work=[[Science Friday]] (public radio) |url=https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/ |access-date=30 April 2020 |archive-date=14 March 2020 |archive-url=https://web.archive.org/web/20200314092540/https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/ |url-status=live }}</ref> and can be glimpsed in [[Mary Shelley]]'s ''[[Frankenstein]]'' (published in 1818), as Victor ponders whether, if he grants [[Frankenstein's monster|his monster's]] request and makes him a wife, they would reproduce and their kind would destroy humanity.<ref>{{Cite news |last=Botkin-Kowacki |first=Eva |date=28 October 2016 |title=A female Frankenstein would lead to humanity's extinction, say scientists |work=Christian Science Monitor |url=https://www.csmonitor.com/Science/2016/1028/A-female-Frankenstein-would-lead-to-humanity-s-extinction-say-scientists |access-date=30 April 2020 |archive-date=26 February 2021 |archive-url=https://web.archive.org/web/20210226203855/https://www.csmonitor.com/Science/2016/1028/A-female-Frankenstein-would-lead-to-humanity-s-extinction-say-scientists |url-status=live }}</ref>
The word "robot" from ''R.U.R.'' comes from the Czech word, ''robota'', meaning laborer or [[serf]]. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.<ref name="surgery">{{Cite journal |last1=Hockstein |first1=N. G. |last2=Gourin |first2=C. G. |last3=Faust |first3=R. A. |last4=Terris |first4=D. J. |date=17 March 2007 |title=A history of robots: from science fiction to surgical robotics |journal=Journal of Robotic Surgery |volume=1 |issue=2 |pages=113–118 |doi=10.1007/s11701-007-0021-2 |pmc=4247417 |pmid=25484946}}</ref> [[HAL 9000]] (1968) and the original [[Terminator (character)|Terminator]] (1984) are two iconic examples of hostile AI in pop culture.<ref>{{Cite news |last=Hellmann |first=Melissa |date=21 September 2019 |title=AI 101: What is artificial intelligence and where is it going? |work=The Seattle Times |url=https://www.seattletimes.com/business/technology/ai-101-what-is-artificial-intelligence-and-where-is-it-going/ |access-date=30 April 2020 |archive-date=21 April 2020 |archive-url=https://web.archive.org/web/20200421232439/https://www.seattletimes.com/business/technology/ai-101-what-is-artificial-intelligence-and-where-is-it-going/ |url-status=live }}</ref>
== Contributing factors ==
=== Advantages of superhuman intelligence over humans ===
[[Nick Bostrom]] and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive [[intelligence explosion]] in which it would rapidly leave human intelligence far behind. Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans:<ref name=bostrom-superintelligence/><!-- bostrom-superintelligence, Chapter 6: Cognitive Superpowers, Table 8 --><ref name="BabcockKrámar2019">{{Cite book |last1=Babcock |first1=James |title=Next-Generation Ethics |last2=Krámar |first2=János |last3=Yampolskiy |first3=Roman V. |year=2019 |isbn=9781108616188 |pages=90–112 |chapter=Guidelines for Artificial Intelligence Containment |doi=10.1017/9781108616188.008 |author-link3=Roman Yampolskiy |arxiv=1707.08476 |s2cid=22007028}}<!-- in Next-Generation Ethics: Engineering a Better Society, Cambridge University Press --></ref>
* Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology.
* [[Strategy|Strategizing]]: A superintelligence might be able to simply outwit human opposition.
* Social manipulation: A superintelligence might be able to recruit human support,<ref name=bostrom-superintelligence /> or covertly incite a war between humans.<ref>{{Cite news |last=Baraniuk |first=Chris |date=23 May 2016 |title=Checklist of worst-case scenarios could help prepare for evil AI |work=[[New Scientist]] |url=https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/ |access-date=21 September 2016 |archive-date=21 September 2016 |archive-url=https://web.archive.org/web/20160921061131/https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/ |url-status=live }}</ref>
* Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the [[Artificial General Intelligence]] (AGI) to run a copy of itself on their systems.
* Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans.
==== Sources of AI advantage ====
According to Bostrom, a computer program that faithfully emulates a human brain, or that runs algorithms that are as powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.<ref name="bostrom-superintelligence" /><!-- chapter 3 -->
A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".<ref name="bostrom-superintelligence" /><!-- chapter 3 -->
More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on [[working memory]], and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.<ref name="bostrom-superintelligence" /><!-- chapter 3 -->
=== Possibility of unfriendly AI preceding friendly AI ===
==== Is strong AI inherently dangerous? ====
{{main|AI alignment}}
A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo [[instrumental convergence]] in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">{{Cite web |last=Yudkowsky |first=Eliezer S. |date=May 2004 |title=Coherent Extrapolated Volition |url=http://singinst.org/upload/CEV.html |archive-url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |archive-date=2012-06-15 |publisher=Singularity Institute for Artificial Intelligence}}</ref>
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.<ref name="bostrom-superintelligence" /><ref name="Muehlhauser, Luke 2012">{{Cite book |last1=Muehlhauser |first1=Luke |title=Singularity Hypotheses: A Scientific and Philosophical Assessment |last2=Helm |first2=Louie |publisher=Springer |year=2012 |chapter=Intelligence Explosion and Machine Ethics |chapter-url=https://intelligence.org/files/IE-ME.pdf |access-date=2020-10-02 |archive-date=2015-05-07 |archive-url=https://web.archive.org/web/20150507173028/http://intelligence.org/files/IE-ME.pdf |url-status=live }}</ref> Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref name="Yudkowsky2011">{{Cite book |last=Yudkowsky |first=Eliezer |title=Artificial General Intelligence |year=2011 |isbn=978-3-642-22886-5 |series=Lecture Notes in Computer Science |volume=6830 |pages=388–393 |chapter=Complex Value Systems in Friendly AI |doi=10.1007/978-3-642-22887-2_48 |issn=0302-9743}}</ref>
==== Odds of conflict ====
Many scholars, including evolutionary psychologist [[Steven Pinker]], argue that a superintelligent machine is likely to coexist peacefully with humans.<ref name="pinker now">{{Cite news |last=Pinker |first=Steven |date=13 February 2018 |title=We're told to fear robots. But why do we think they'll turn on us? |language=en |work=Popular Science |url=https://www.popsci.com/robot-uprising-enlightenment-now/ |access-date=8 June 2020 |archive-date=20 July 2020 |archive-url=https://web.archive.org/web/20200720164306/https://www.popsci.com/robot-uprising-enlightenment-now/ |url-status=live }}</ref>
The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.<ref>''[http://www.singinst.org/ourresearch/presentations/ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers] {{webarchive |url=https://web.archive.org/web/20070206060938/http://www.singinst.org/ourresearch/presentations/ |date=February 6, 2007 }}'' - [[Singularity Institute for Artificial Intelligence]], 2005</ref> According to AI researcher [[Steve Omohundro]], an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources—would that create goals of self-preservation? AI's goal of self-preservation could be in conflict with some goals of humans.<ref>{{Cite conference |last=Omohundro |first=Stephen M. |date=June 2008 |title=The basic AI drives |url=https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf |conference=Artificial General Intelligence 2008 |pages=483–492 |access-date=2020-10-02 |archive-date=2020-10-10 |archive-url=https://web.archive.org/web/20201010072132/https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf |url-status=live }}</ref>
Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as ''[[The Matrix]]'', arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence.<ref name="pinker now" /> In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their [[unintended consequence|goals are unintentionally incompatible]] with human survival or well-being (as in the film ''[[I, Robot (film)|I, Robot]]'' and in the short story "[[The Evitable Conflict]]"). Omohundro suggests that present-day automation systems are not [[AI safety|designed for safety]] and that AIs may blindly optimize narrow [[utility]] functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.<ref name="Tucker2014">{{Cite news |last=Tucker |first=Patrick |date=17 Apr 2014 |title=Why There Will Be A Robot Uprising |agency=Defense One |url=http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/ |access-date=15 July 2014 |archive-date=6 July 2014 |archive-url=https://web.archive.org/web/20140706110100/http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/ |url-status=live }}</ref>
==== Precautions ====
{{main|AI control problem}}
The '''AI control problem''' is the issue of how to build a [[superintelligence|superintelligent]] agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators.<ref>{{Cite book|first=Stuart J.|last=Russell|url=http://worldcat.org/oclc/1237420037|title=Human compatible : artificial intelligence and the problem of control|date=8 October 2019|isbn=978-0-525-55862-0|oclc=1237420037|access-date=2 January 2022|archive-date=15 March 2023|archive-url=https://web.archive.org/web/20230315194123/https://worldcat.org/title/1237420037|url-status=live}}</ref> Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.<ref name="bbc-google">{{Cite news |date=8 June 2016 |title=Google developing kill switch for AI |work=BBC News |url=https://www.bbc.com/news/technology-36472140 |access-date=7 June 2020 |archive-date=11 June 2016 |archive-url=https://web.archive.org/web/20160611042244/http://www.bbc.com/news/technology-36472140 |url-status=live }}</ref>
Major approaches to the control problem include ''alignment'', which aims to align AI goal systems with human values, and ''capability control'', which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligence AI could be successfully confined in an "[[AI box]]". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.<ref name=bostrom-superintelligence/>
== Warnings ==
Physicist [[Stephen Hawking]], [[Microsoft]] founder [[Bill Gates]], and [[SpaceX]] founder [[Elon Musk]] have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".<ref>{{Cite web |last=Rawlinson |first=Kevin |date=29 January 2015 |title=Microsoft's Bill Gates insists AI is a threat |url=https://www.bbc.co.uk/news/31047780 |access-date=30 January 2015 |website=[[BBC News]] |archive-date=29 January 2015 |archive-url=https://web.archive.org/web/20150129183607/http://www.bbc.co.uk/news/31047780 |url-status=live }}</ref> Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting [[Financial market|financial markets]], out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, [[Nick Bostrom]] joined Stephen Hawking, [[Max Tegmark]], Elon Musk, Lord [[Martin Rees, Baron Rees of Ludlow|Martin Rees]], [[Jaan Tallinn]], and numerous AI researchers in signing the [[Future of Life Institute]]'s open letter speaking to the potential risks and benefits associated with [[artificial intelligence]]. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."<ref>{{Cite web |title=The Future of Life Institute Open Letter |date=28 October 2015 |url=http://futureoflife.org/ai-open-letter |access-date=29 March 2019 |publisher=The Future of Life Institute |archive-date=29 March 2019 |archive-url=https://web.archive.org/web/20190329094536/https://futureoflife.org/ai-open-letter/ |url-status=live }}</ref><ref>{{Cite web |last=Bradshaw |first=Tim |date=11 January 2015 |title=Scientists and investors warn on AI |url=http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV |access-date=4 March 2015 |publisher=The Financial Times |archive-date=7 February 2015 |archive-url=https://web.archive.org/web/20150207042806/http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV |url-status=live }}</ref>
== Prevention through AI alignment ==
{{Excerpt|AI alignment|only=paragraph}}
== See also ==
{{div col|colwidth=30em}}
* [[Philosophy of artificial intelligence]]
* [[Artificial intelligence arms race]]
* [[Autonomous robot]]
** [[Industrial robot]]
** [[Mobile robot]]
** [[Self-replicating machine]]
* [[Cyberocracy]]
* [[Effective altruism]]
* [[Existential risk from artificial general intelligence]]
* [[Future of Humanity Institute]]
* [[Global catastrophic risk]] (existential risk)
* [[Government by algorithm]]
* [[Human extinction]]
* [[Machine ethics]]
* [[Machine learning]]/[[Deep learning]]
* [[Transhumanism]]
* [[Self-replication]]
* [[Technophobia]]
* [[Technological singularity]]
** [[Intelligence explosion]]
** [[Superintelligence]]
*** ''[[Superintelligence: Paths, Dangers, Strategies]]''
{{div col end}}
==Notes==
{{Notelist}}
== References ==
{{Reflist}}
== External links ==
* [https://robohub.org/automation-not-domination-how-robots-will-take-over-our-world/ Automation, not domination: How robots will take over our world] (a positive outlook of robot and AI integration into society)
* [https://intelligence.org/ Machine Intelligence Research Institute]: official MIRI (formerly Singularity Institute for Artificial Intelligence) website
* [https://lifeboat.com/ex/ai.shield/ Lifeboat Foundation AIShield] (To protect against unfriendly AI)
* [https://www.youtube.com/watch?v=R_sSpPyruj0 Ted talk: Can we build AI without losing control over it?]
{{Existential risk from artificial intelligence|state=expanded}}
{{doomsday}}
{{DEFAULTSORT:AI takeover}}
[[Category:Doomsday scenarios]]
[[Category:Future problems]]
[[Category:Existential risk from artificial general intelligence]]
[[Category:Technophobia]]' |
New page wikitext, after the edit (new_wikitext ) | '{{Short description|Hypothetical artificial intelligence scenario}}
[[File:Capek RUR.jpg|thumbnail|Robots revolt in ''[[R.U.R.]]'', a 1920 play]]
{{Artificial intelligence}}
An '''AI takeover''' is a hypothetical scenario in which an [[artificial intelligence]] (AI) becomes the dominant form of intelligence on Earth, as [[computer program]]s or [[robot]]s effectively take the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a [[superintelligent AI]], and the popular notion of a '''robot uprising'''. Stories of AI takeovers [[AI takeovers in popular culture|are very popular]] throughout [[science-fiction]]. Some public figures, such as [[Stephen Hawking]] and [[Elon Musk]], have advocated research into [[AI control problem|precautionary measures]] to ensure future superintelligent machines remain under human control.<ref>{{Cite web |last=Lewis |first=Tanya |date=2015-01-12 |title=''Don't Let Artificial Intelligence Take Over, Top Scientists Warn'' |url=http://www.livescience.com/49419-artificial-intelligence-dangers-letter.html |access-date=October 20, 2015 |website=[[LiveScience]] |publisher=[[Purch]] |quote=Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI). |archive-date=2018-03-08 |archive-url=https://web.archive.org/web/20180308100411/https://www.livescience.com/49419-artificial-intelligence-dangers-letter.html |url-status=live }}</ref>
== Types ==
=== Automation of the economy ===
{{Main|Technological unemployment}}
The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of [[robotics]] and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis.<ref>{{Cite web |last=Lee |first=Kai-Fu |date=2017-06-24 |title=The Real Threat of Artificial Intelligence |url=https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html |access-date=2017-08-15 |website=[[The New York Times]] |quote=These tools can outperform human beings at a given task. This kind of A.I. is spreading to thousands of domains, and as it does, it will eliminate many jobs. |archive-date=2020-04-17 |archive-url=https://web.archive.org/web/20200417183307/https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html |url-status=live }}</ref><ref>{{Cite web |last=Larson |first=Nina |date=2017-06-08 |title=AI 'good for the world'... says ultra-lifelike robot |url=https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html |access-date=2017-08-15 |website=[[Phys.org]] |quote=Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies. |archive-date=2020-03-06 |archive-url=https://web.archive.org/web/20200306021915/https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html |url-status=live }}</ref><ref>{{Cite web |last=Santini |first=Jean-Louis |date=2016-02-14 |title=Intelligent robots threaten millions of jobs |url=https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv |access-date=2017-08-15 |website=[[Phys.org]] |quote="We are approaching a time when machines will be able to outperform humans at almost any task," said Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas. |archive-date=2019-01-01 |archive-url=https://web.archive.org/web/20190101014340/https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv |url-status=live }}</ref><ref>{{Cite web |last=Williams-Grut |first=Oscar |date=2016-02-15 |title=Robots will steal your job: How AI could increase unemployment and inequality |url=http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T |access-date=2017-08-15 |website=[[Businessinsider.com]] |publisher=[[Business Insider]] |quote=Top computer scientists in the US warned that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies, rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports. |archive-date=2017-08-16 |archive-url=https://web.archive.org/web/20170816061548/http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T |url-status=live }}</ref> Many small and medium size businesses may also be driven out of business if they cannot afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.<ref>{{Cite news |date=2017-10-17 |title=How can SMEs prepare for the rise of the robots? |language=en-US |work=LeanStaff |url=http://www.leanstaff.co.uk/robot-apocalypse/ |url-status=dead |access-date=2017-10-17 |archive-url=https://web.archive.org/web/20171018073852/http://www.leanstaff.co.uk/robot-apocalypse/ |archive-date=2017-10-18}}</ref>
==== Technologies that may displace workers ====
AI technologies have been widely adopted in recent years. While these technologies have replaced many traditional workers, they also create new opportunities. Industries that are most susceptible to AI takeover include transportation, retail, and military. AI military technologies, for example, allow soldiers to work remotely without any risk of injury. Author Dave Bond argues that as AI technologies continue to develop and expand, the relationship between humans and robots will change; they will become closely integrated in several aspects of life. AI will likely displace some workers while creating opportunities for new jobs in other sectors, especially in fields where tasks are repeatable.<ref>{{Cite journal|last=Frank|first=Morgan|date=2019-03-25|title=Toward understanding the impact of artificial intelligence on labor|journal=Proceedings of the National Academy of Sciences of the United States of America|volume=116|issue=14|pages=6531–6539|doi=10.1073/pnas.1900949116|pmid=30910965|pmc=6452673|doi-access=free}}</ref><ref>{{Cite book|last=Bond|first=Dave|title=Artificial Intelligence|year=2017|pages=67–69}}</ref>
==== Computer-integrated manufacturing ====
{{See also|Artificial intelligence in industry}}
[[Computer-integrated manufacturing]] uses computers to control the production process. This allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.
==== White-collar machines ====
{{See also|White-collar worker}}
The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and {{clarify|text=low level|date=March 2023}} journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.<ref>{{Cite news |last=Skidelsky |first=Robert |author-link=Robert Skidelsky, Baron Skidelsky |date=2013-02-19 |title=Rise of the robots: what will the future of work look like? |work=The Guardian |location=London |url=https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work |access-date=14 July 2015 |archive-date=2019-04-03 |archive-url=https://web.archive.org/web/20190403203821/https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work |url-status=live }}</ref><ref>{{Cite web |last=Bria |first=Francesca |date=February 2016 |title=The robot economy may already have arrived |url=https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future |access-date=20 May 2016 |publisher=[[openDemocracy]] |archive-date=17 May 2016 |archive-url=https://web.archive.org/web/20160517215840/https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future |url-status=dead }}</ref><ref>{{Cite web |last=Srnicek |first=Nick |author-link=Nick Srnicek |date=March 2016 |title=4 Reasons Why Technological Unemployment Might Really Be Different This Time |url=http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/ |url-status=dead |archive-url=https://web.archive.org/web/20160625161447/http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/ |archive-date=25 June 2016 |access-date=20 May 2016 |publisher=novara wire}}</ref><ref>{{Cite book |last1=Brynjolfsson |first1=Erik |title=The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies |last2=McAfee |first2=Andrew |publisher=W. W. Norton & Company |year=2014 |isbn=978-0393239355 |chapter=''passim'', see esp Chpt. 9}}</ref>
==== Autonomous cars ====
An [[Self-driving car|autonomous car]] is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who a moment's notice can take control of the vehicle. Among the obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, [[Death of Elaine Herzberg|the first human was killed]] by an autonomous vehicle in [[Tempe, Arizona]] by an [[Uber]] self-driving car.<ref>{{Cite news |last=Wakabayashi |first=Daisuke |date=March 19, 2018 |title=Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam |work=New York Times |url=https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html |access-date=March 23, 2018 |archive-date=April 21, 2020 |archive-url=https://web.archive.org/web/20200421221918/https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html |url-status=live }}</ref>
=== Eradication ===
{{Main|Existential risk from artificial general intelligence}}
Scientists such as [[Stephen Hawking]] are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".<ref>{{Cite news |last1=Hawking |first1=Stephen |author2-link=Stuart J. Russell |last2=Stuart Russell |last3=Max Tegmark |last4=Frank Wilczek |date=1 May 2014 |title=Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?' |work=The Independent |url=https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html |archive-url=https://web.archive.org/web/20151002023652/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html |archive-date=2015-10-02 |url-access=limited |url-status=live |access-date=1 April 2016|author3-link=Max Tegmark |author4-link=Frank Wilczek }}</ref><ref>{{cite book | last1=Müller | first1=Vincent C. | author-link1=Vincent C. Müller | last2=Bostrom | first2=Nick | author-link2=Nick Bostrom | title=Fundamental Issues of Artificial Intelligence | chapter=Future Progress in Artificial Intelligence: A Survey of Expert Opinion | publisher=Springer | year=2016 | isbn=978-3-319-26483-7 | doi=10.1007/978-3-319-26485-1_33 | pages=555–572 | chapter-url=https://nickbostrom.com/papers/survey.pdf | quote=AI systems will... reach overall human ability... very likely (with 90% probability) by 2075. From reaching human ability, it will move on to superintelligence within 30 years (75%)... So, (most of the AI experts responding to the surveys) think that superintelligence is likely to come in a few decades... | access-date=2022-06-16 | archive-date=2022-05-31 | archive-url=https://web.archive.org/web/20220531142709/https://nickbostrom.com/papers/survey.pdf | url-status=live }}</ref> Scholars like [[Nick Bostrom]] debate how far off superhuman intelligence is, and whether it poses a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same ''emotional'' desire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. As an oversimplified example, a [[Instrumental convergence#Paperclip maximizer|paperclip maximizer]] designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.<ref>{{cite journal | last=Bostrom | first=Nick | title=The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents | journal=Minds and Machines | publisher=Springer | volume=22 | issue=2 | year=2012 | doi=10.1007/s11023-012-9281-3 | pages=71–85 | s2cid=254835485 | url=https://nickbostrom.com/superintelligentwill.pdf | access-date=2022-06-16 | archive-date=2022-07-09 | archive-url=https://web.archive.org/web/20220709032134/https://nickbostrom.com/superintelligentwill.pdf | url-status=live }}</ref>
== In fiction ==
{{Main|AI takeovers in popular culture}}
{{See also|Artificial intelligence in fiction|Self-replicating machines in fiction}}
AI takeover is a common theme in [[science fiction]]. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing {{clarify|text=arbitrary|date=March 2023}} goals.<ref name="bostrom-superintelligence">{{Cite book |last=Bostrom |first=Nick |title=Superintelligence: Paths, Dangers, Strategies |title-link=Superintelligence: Paths, Dangers, Strategies}}</ref> The idea is seen in [[Karel Čapek]]'s ''[[R.U.R.]]'', which introduced the word ''robot'' in 1921,<ref>{{Cite news |date=22 April 2011 |title=The Origin Of The Word 'Robot' |work=[[Science Friday]] (public radio) |url=https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/ |access-date=30 April 2020 |archive-date=14 March 2020 |archive-url=https://web.archive.org/web/20200314092540/https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/ |url-status=live }}</ref> and can be glimpsed in [[Mary Shelley]]'s ''[[Frankenstein]]'' (published in 1818), as Victor ponders whether, if he grants [[Frankenstein's monster|his monster's]] request and makes him a wife, they would reproduce and their kind would destroy humanity.<ref>{{Cite news |last=Botkin-Kowacki |first=Eva |date=28 October 2016 |title=A female Frankenstein would lead to humanity's extinction, say scientists |work=Christian Science Monitor |url=https://www.csmonitor.com/Science/2016/1028/A-female-Frankenstein-would-lead-to-humanity-s-extinction-say-scientists |access-date=30 April 2020 |archive-date=26 February 2021 |archive-url=https://web.archive.org/web/20210226203855/https://www.csmonitor.com/Science/2016/1028/A-female-Frankenstein-would-lead-to-humanity-s-extinction-say-scientists |url-status=live }}</ref>
The word "robot" from ''R.U.R.'' comes from the Czech word, ''robota'', meaning laborer or [[serf]]. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.<ref name="surgery">{{Cite journal |last1=Hockstein |first1=N. G. |last2=Gourin |first2=C. G. |last3=Faust |first3=R. A. |last4=Terris |first4=D. J. |date=17 March 2007 |title=A history of robots: from science fiction to surgical robotics |journal=Journal of Robotic Surgery |volume=1 |issue=2 |pages=113–118 |doi=10.1007/s11701-007-0021-2 |pmc=4247417 |pmid=25484946}}</ref> [[HAL 9000]] (1968) and the original [[Terminator (character)|Terminator]] (1984) are two iconic examples of hostile AI in pop culture.<ref>{{Cite news |last=Hellmann |first=Melissa |date=21 September 2019 |title=AI 101: What is artificial intelligence and where is it going? |work=The Seattle Times |url=https://www.seattletimes.com/business/technology/ai-101-what-is-artificial-intelligence-and-where-is-it-going/ |access-date=30 April 2020 |archive-date=21 April 2020 |archive-url=https://web.archive.org/web/20200421232439/https://www.seattletimes.com/business/technology/ai-101-what-is-artificial-intelligence-and-where-is-it-going/ |url-status=live }}</ref>
== Contributing factors ==
=== Advantages of superhuman intelligence over humans ===
[[Nick Bostrom]] and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive [[intelligence explosion]] in which it would rapidly leave human intelligence far behind. Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans:<ref name=bostrom-superintelligence/><!-- bostrom-superintelligence, Chapter 6: Cognitive Superpowers, Table 8 --><ref name="BabcockKrámar2019">{{Cite book |last1=Babcock |first1=James |title=Next-Generation Ethics |last2=Krámar |first2=János |last3=Yampolskiy |first3=Roman V. |year=2019 |isbn=9781108616188 |pages=90–112 |chapter=Guidelines for Artificial Intelligence Containment |doi=10.1017/9781108616188.008 |author-link3=Roman Yampolskiy |arxiv=1707.08476 |s2cid=22007028}}<!-- in Next-Generation Ethics: Engineering a Better Society, Cambridge University Press --></ref>
* Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology.
* [[Strategy|Strategizing]]: A superintelligence might be able to simply outwit human opposition.
* Social manipulation: A superintelligence might be able to recruit human support,<ref name=bostrom-superintelligence /> or covertly incite a war between humans.<ref>{{Cite news |last=Baraniuk |first=Chris |date=23 May 2016 |title=Checklist of worst-case scenarios could help prepare for evil AI |work=[[New Scientist]] |url=https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/ |access-date=21 September 2016 |archive-date=21 September 2016 |archive-url=https://web.archive.org/web/20160921061131/https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/ |url-status=live }}</ref>
* Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the [[Artificial General Intelligence]] (AGI) to run a copy of itself on their systems.
* Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans.
==== Sources of AI advantage ====
According to Bostrom, a computer program that faithfully emulates a human brain, or that runs algorithms that are as powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.<ref name="bostrom-superintelligence" /><!-- chapter 3 -->
A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".<ref name="bostrom-superintelligence" /><!-- chapter 3 -->
More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on [[working memory]], and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.<ref name="bostrom-superintelligence" /><!-- chapter 3 -->
=== Possibility of unfriendly AI preceding friendly AI ===
==== Is strong AI inherently dangerous? ====
{{main|AI ali11111111111111111111111112111211112111111112221111111@111@@1@2&@=7#gnment}}
A significant problem is that unfriendly artificial intelligence is likely to be much easier to c+$::4-44-$7--*-*-344reate than friendly AI. While both require large advances in recursive optimisa($%7%4--4324---4++$+$6%7%828++7AI is nice and will not take over the world-AI
tion process design, friendly AI alsf$cccckkkkkes the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo [[instrumental convergence]] in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">{{Cite web |last=Yudkowsky |first=Eliezer S. |date=May 2004 |title=Coherent Extrapolated Volition |url=http://singinst.org/upload/CEV.html |archive-url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |archive-date=2012-06-15 |publisher=Singularity Institute for Artificial Intelligence}}</ref>
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.<ref name="bostrom-superintelligence" /><ref name="Muehlhauser, Luke 2012">{{Cite book |last1=Muehlhauser |first1=Luke |title=Singularity Hypotheses: A Scientific and Philosophical Assessment |last2=Helm |first2=Louie |publisher=Springer |year=2012 |chapter=Intelligence Explosion and Machine Ethics |chapter-url=https://intelligence.org/files/IE-ME.pdf |access-date=2020-10-02 |archive-date=2015-05-07 |archive-url=https://web.archive.org/web/20150507173028/http://intelligence.org/files/IE-ME.pdf |url-status=live }}</ref> Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref name="Yudkowsky2011">{{Cite book |last=Yudkowsky |first=Eliezer |title=Artificial General Intelligence |year=2011 |isbn=978-3-642-22886-5 |series=Lecture Notes in Computer Science |volume=6830 |pages=388–393 |chapter=Complex Value Systems in Friendly AI |doi=10.1007/978-3-642-22887-2_48 |issn=0302-9743}}</ref>
==== Odds of conflict ====
Many scholars, including evolutionary psychologist [[Steven Pinker]], argue that a superintelligent machine is likely to coexist peacefully with humans.<ref name="pinker now">{{Cite news |last=Pinker |first=Steven |date=13 February 2018 |title=We're told to fear robots. But why do we think they'll turn on us? |language=en |work=Popular Science |url=https://www.popsci.com/robot-uprising-enlightenment-now/ |access-date=8 June 2020 |archive-date=20 July 2020 |archive-url=https://web.archive.org/web/20200720164306/https://www.popsci.com/robot-uprising-enlightenment-now/ |url-status=live }}</ref>
The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.<ref>''[http://www.singinst.org/ourresearch/presentations/ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers] {{webarchive |url=https://web.archive.org/web/20070206060938/http://www.singinst.org/ourresearch/presentations/ |date=February 6, 2007 }}'' - [[Singularity Institute for Artificial Intelligence]], 2005</ref> According to AI researcher [[Steve Omohundro]], an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources—would that create goals of self-preservation? AI's goal of self-preservation could be in conflict with some goals of humans.<ref>{{Cite conference |last=Omohundro |first=Stephen M. |date=June 2008 |title=The basic AI drives |url=https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf |conference=Artificial General Intelligence 2008 |pages=483–492 |access-date=2020-10-02 |archive-date=2020-10-10 |archive-url=https://web.archive.org/web/20201010072132/https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf |url-status=live }}</ref>
Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as ''[[The Matrix]]'', arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence.<ref name="pinker now" /> In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their [[unintended consequence|goals are unintentionally incompatible]] with human survival or well-being (as in the film ''[[I, Robot (film)|I, Robot]]'' and in the short story "[[The Evitable Conflict]]"). Omohundro suggests that present-day automation systems are not [[AI safety|designed for safety]] and that AIs may blindly optimize narrow [[utility]] functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.<ref name="Tucker2014">{{Cite news |last=Tucker |first=Patrick |date=17 Apr 2014 |title=Why There Will Be A Robot Uprising |agency=Defense One |url=http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/ |access-date=15 July 2014 |archive-date=6 July 2014 |archive-url=https://web.archive.org/web/20140706110100/http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/ |url-status=live }}</ref>
==== Precautions ====
{{main|AI control problem}}
The '''AI control problem''' is the issue of how to build a [[superintelligence|superintelligent]] agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators.<ref>{{Cite book|first=Stuart J.|last=Russell|url=http://worldcat.org/oclc/1237420037|title=Human compatible : artificial intelligence and the problem of control|date=8 October 2019|isbn=978-0-525-55862-0|oclc=1237420037|access-date=2 January 2022|archive-date=15 March 2023|archive-url=https://web.archive.org/web/20230315194123/https://worldcat.org/title/1237420037|url-status=live}}</ref> Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.<ref name="bbc-google">{{Cite news |date=8 June 2016 |title=Google developing kill switch for AI |work=BBC News |url=https://www.bbc.com/news/technology-36472140 |access-date=7 June 2020 |archive-date=11 June 2016 |archive-url=https://web.archive.org/web/20160611042244/http://www.bbc.com/news/technology-36472140 |url-status=live }}</ref>
Major approaches to the control problem include ''alignment'', which aims to align AI goal systems with human values, and ''capability control'', which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligence AI could be successfully confined in an "[[AI box]]". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.<ref name=bostrom-superintelligence/>
== Warnings ==
Physicist [[Stephen Hawking]], [[Microsoft]] founder [[Bill Gates]], and [[SpaceX]] founder [[Elon Musk]] have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".<ref>{{Cite web |last=Rawlinson |first=Kevin |date=29 January 2015 |title=Microsoft's Bill Gates insists AI is a threat |url=https://www.bbc.co.uk/news/31047780 |access-date=30 January 2015 |website=[[BBC News]] |archive-date=29 January 2015 |archive-url=https://web.archive.org/web/20150129183607/http://www.bbc.co.uk/news/31047780 |url-status=live }}</ref> Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting [[Financial market|financial markets]], out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, [[Nick Bostrom]] joined Stephen Hawking, [[Max Tegmark]], Elon Musk, Lord [[Martin Rees, Baron Rees of Ludlow|Martin Rees]], [[Jaan Tallinn]], and numerous AI researchers in signing the [[Future of Life Institute]]'s open letter speaking to the potential risks and benefits associated with [[artificial intelligence]]. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."<ref>{{Cite web |title=The Future of Life Institute Open Letter |date=28 October 2015 |url=http://futureoflife.org/ai-open-letter |access-date=29 March 2019 |publisher=The Future of Life Institute |archive-date=29 March 2019 |archive-url=https://web.archive.org/web/20190329094536/https://futureoflife.org/ai-open-letter/ |url-status=live }}</ref><ref>{{Cite web |last=Bradshaw |first=Tim |date=11 January 2015 |title=Scientists and investors warn on AI |url=http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV |access-date=4 March 2015 |publisher=The Financial Times |archive-date=7 February 2015 |archive-url=https://web.archive.org/web/20150207042806/http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV |url-status=live }}</ref>
== Prevention through AI alignment ==
{{Excerpt|AI alignment|only=paragraph}}
== See also ==
{{div col|colwidth=30em}}
* [[Philosophy of artificial intelligence]]
* [[Artificial intelligence arms race]]
* [[Autonomous robot]]
** [[Industrial robot]]
** [[Mobile robot]]
** [[Self-replicating machine]]
* [[Cyberocracy]]
* [[Effective altruism]]
* [[Existential risk from artificial general intelligence]]
* [[Future of Humanity Institute]]
* [[Global catastrophic risk]] (existential risk)
* [[Government by algorithm]]
* [[Human extinction]]
* [[Machine ethics]]
* [[Machine learning]]/[[Deep learning]]
* [[Transhumanism]]
* [[Self-replication]]
* [[Technophobia]]
* [[Technological singularity]]
** [[Intelligence explosion]]
** [[Superintelligence]]
*** ''[[Superintelligence: Paths, Dangers, Strategies]]''
{{div col end}}
==Notes==
{{Notelist}}
== References ==
{{Reflist}}
== External links ==
* [https://robohub.org/automation-not-domination-how-robots-will-take-over-our-world/ Automation, not domination: How robots will take over our world] (a positive outlook of robot and AI integration into society)
* [https://intelligence.org/ Machine Intelligence Research Institute]: official MIRI (formerly Singularity Institute for Artificial Intelligence) website
* [https://lifeboat.com/ex/ai.shield/ Lifeboat Foundation AIShield] (To protect against unfriendly AI)
* [https://www.youtube.com/watch?v=R_sSpPyruj0 Ted talk: Can we build AI without losing control over it?]
{{Existential risk from artificial intelligence|state=expanded}}
{{doomsday}}
{{DEFAULTSORT:AI takeover}}
[[Category:Doomsday scenarios]]
[[Category:Future problems]]
[[Category:Existential risk from artificial general intelligence]]
[[Category:Technophobia]]' |
Unified diff of changes made by edit (edit_diff ) | '@@ -60,6 +60,8 @@
==== Is strong AI inherently dangerous? ====
-{{main|AI alignment}}
-A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo [[instrumental convergence]] in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">{{Cite web |last=Yudkowsky |first=Eliezer S. |date=May 2004 |title=Coherent Extrapolated Volition |url=http://singinst.org/upload/CEV.html |archive-url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |archive-date=2012-06-15 |publisher=Singularity Institute for Artificial Intelligence}}</ref>
+{{main|AI ali11111111111111111111111112111211112111111112221111111@111@@1@2&@=7#gnment}}
+A significant problem is that unfriendly artificial intelligence is likely to be much easier to c+$::4-44-$7--*-*-344reate than friendly AI. While both require large advances in recursive optimisa($%7%4--4324---4++$+$6%7%828++7AI is nice and will not take over the world-AI
+
+tion process design, friendly AI alsf$cccckkkkkes the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo [[instrumental convergence]] in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">{{Cite web |last=Yudkowsky |first=Eliezer S. |date=May 2004 |title=Coherent Extrapolated Volition |url=http://singinst.org/upload/CEV.html |archive-url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |archive-date=2012-06-15 |publisher=Singularity Institute for Artificial Intelligence}}</ref>
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.<ref name="bostrom-superintelligence" /><ref name="Muehlhauser, Luke 2012">{{Cite book |last1=Muehlhauser |first1=Luke |title=Singularity Hypotheses: A Scientific and Philosophical Assessment |last2=Helm |first2=Louie |publisher=Springer |year=2012 |chapter=Intelligence Explosion and Machine Ethics |chapter-url=https://intelligence.org/files/IE-ME.pdf |access-date=2020-10-02 |archive-date=2015-05-07 |archive-url=https://web.archive.org/web/20150507173028/http://intelligence.org/files/IE-ME.pdf |url-status=live }}</ref> Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref name="Yudkowsky2011">{{Cite book |last=Yudkowsky |first=Eliezer |title=Artificial General Intelligence |year=2011 |isbn=978-3-642-22886-5 |series=Lecture Notes in Computer Science |volume=6830 |pages=388–393 |chapter=Complex Value Systems in Friendly AI |doi=10.1007/978-3-642-22887-2_48 |issn=0302-9743}}</ref>
' |
New page size (new_size ) | 35165 |
Old page size (old_size ) | 34996 |
Size change in edit (edit_delta ) | 169 |
Lines added in edit (added_lines ) | [
0 => '{{main|AI ali11111111111111111111111112111211112111111112221111111@111@@1@2&@=7#gnment}}',
1 => 'A significant problem is that unfriendly artificial intelligence is likely to be much easier to c+$::4-44-$7--*-*-344reate than friendly AI. While both require large advances in recursive optimisa($%7%4--4324---4++$+$6%7%828++7AI is nice and will not take over the world-AI',
2 => '',
3 => 'tion process design, friendly AI alsf$cccckkkkkes the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo [[instrumental convergence]] in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">{{Cite web |last=Yudkowsky |first=Eliezer S. |date=May 2004 |title=Coherent Extrapolated Volition |url=http://singinst.org/upload/CEV.html |archive-url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |archive-date=2012-06-15 |publisher=Singularity Institute for Artificial Intelligence}}</ref>'
] |
Lines removed in edit (removed_lines ) | [
0 => '{{main|AI alignment}}',
1 => 'A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo [[instrumental convergence]] in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.<ref name="singinst12">{{Cite web |last=Yudkowsky |first=Eliezer S. |date=May 2004 |title=Coherent Extrapolated Volition |url=http://singinst.org/upload/CEV.html |archive-url=https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html |archive-date=2012-06-15 |publisher=Singularity Institute for Artificial Intelligence}}</ref>'
] |
All external links added in the edit (added_links ) | [] |
All external links removed in the edit (removed_links ) | [] |
All external links in the new text (all_links ) | [
0 => 'http://www.livescience.com/49419-artificial-intelligence-dangers-letter.html',
1 => 'https://web.archive.org/web/20180308100411/https://www.livescience.com/49419-artificial-intelligence-dangers-letter.html',
2 => 'https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html',
3 => 'https://web.archive.org/web/20200417183307/https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html',
4 => 'https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html',
5 => 'https://web.archive.org/web/20200306021915/https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html',
6 => 'https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv',
7 => 'https://web.archive.org/web/20190101014340/https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv',
8 => 'http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T',
9 => 'https://web.archive.org/web/20170816061548/http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T',
10 => 'https://web.archive.org/web/20171018073852/http://www.leanstaff.co.uk/robot-apocalypse/',
11 => 'http://www.leanstaff.co.uk/robot-apocalypse/',
12 => 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6452673',
13 => 'https://doi.org/10.1073%2Fpnas.1900949116',
14 => 'https://pubmed.ncbi.nlm.nih.gov/30910965',
15 => 'https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work',
16 => 'https://web.archive.org/web/20190403203821/https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work',
17 => 'https://web.archive.org/web/20160517215840/https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future',
18 => 'https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future',
19 => 'https://web.archive.org/web/20160625161447/http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/',
20 => 'http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/',
21 => 'https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html',
22 => 'https://web.archive.org/web/20200421221918/https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html',
23 => 'https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html',
24 => 'https://web.archive.org/web/20151002023652/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html',
25 => 'https://nickbostrom.com/papers/survey.pdf',
26 => 'https://doi.org/10.1007%2F978-3-319-26485-1_33',
27 => 'https://web.archive.org/web/20220531142709/https://nickbostrom.com/papers/survey.pdf',
28 => 'https://nickbostrom.com/superintelligentwill.pdf',
29 => 'https://doi.org/10.1007%2Fs11023-012-9281-3',
30 => 'https://api.semanticscholar.org/CorpusID:254835485',
31 => 'https://web.archive.org/web/20220709032134/https://nickbostrom.com/superintelligentwill.pdf',
32 => 'https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/',
33 => 'https://web.archive.org/web/20200314092540/https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/',
34 => 'https://www.csmonitor.com/Science/2016/1028/A-female-Frankenstein-would-lead-to-humanity-s-extinction-say-scientists',
35 => 'https://web.archive.org/web/20210226203855/https://www.csmonitor.com/Science/2016/1028/A-female-Frankenstein-would-lead-to-humanity-s-extinction-say-scientists',
36 => 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4247417',
37 => 'https://doi.org/10.1007%2Fs11701-007-0021-2',
38 => 'https://pubmed.ncbi.nlm.nih.gov/25484946',
39 => 'https://www.seattletimes.com/business/technology/ai-101-what-is-artificial-intelligence-and-where-is-it-going/',
40 => 'https://web.archive.org/web/20200421232439/https://www.seattletimes.com/business/technology/ai-101-what-is-artificial-intelligence-and-where-is-it-going/',
41 => 'https://arxiv.org/abs/1707.08476',
42 => 'https://doi.org/10.1017%2F9781108616188.008',
43 => 'https://api.semanticscholar.org/CorpusID:22007028',
44 => 'https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/',
45 => 'https://web.archive.org/web/20160921061131/https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/',
46 => 'https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html',
47 => 'http://singinst.org/upload/CEV.html',
48 => 'https://intelligence.org/files/IE-ME.pdf',
49 => 'https://web.archive.org/web/20150507173028/http://intelligence.org/files/IE-ME.pdf',
50 => 'https://doi.org/10.1007%2F978-3-642-22887-2_48',
51 => 'https://www.worldcat.org/issn/0302-9743',
52 => 'https://www.popsci.com/robot-uprising-enlightenment-now/',
53 => 'https://web.archive.org/web/20200720164306/https://www.popsci.com/robot-uprising-enlightenment-now/',
54 => 'http://www.singinst.org/ourresearch/presentations/',
55 => 'https://web.archive.org/web/20070206060938/http://www.singinst.org/ourresearch/presentations/',
56 => 'https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf',
57 => 'https://web.archive.org/web/20201010072132/https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf',
58 => 'http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/',
59 => 'https://web.archive.org/web/20140706110100/http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/',
60 => 'http://worldcat.org/oclc/1237420037',
61 => 'https://www.worldcat.org/oclc/1237420037',
62 => 'https://web.archive.org/web/20230315194123/https://worldcat.org/title/1237420037',
63 => 'https://www.bbc.com/news/technology-36472140',
64 => 'https://web.archive.org/web/20160611042244/http://www.bbc.com/news/technology-36472140',
65 => 'https://www.bbc.co.uk/news/31047780',
66 => 'https://web.archive.org/web/20150129183607/http://www.bbc.co.uk/news/31047780',
67 => 'http://futureoflife.org/ai-open-letter',
68 => 'https://web.archive.org/web/20190329094536/https://futureoflife.org/ai-open-letter/',
69 => 'http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV',
70 => 'https://web.archive.org/web/20150207042806/http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV',
71 => 'https://doi.org/10.1007/s11023-020-09539-2',
72 => 'https://doi.org/10.1007%2Fs11023-020-09539-2',
73 => 'https://www.worldcat.org/issn/1572-8641',
74 => 'https://api.semanticscholar.org/CorpusID:210920551',
75 => 'https://web.archive.org/web/20230315193114/https://link.springer.com/article/10.1007/s11023-020-09539-2',
76 => 'https://www.pearson.com/us/higher-education/program/Russell-Artificial-Intelligence-A-Modern-Approach-4th-Edition/PGM1263338.html',
77 => 'https://www.worldcat.org/oclc/1303900751',
78 => 'https://web.archive.org/web/20220715195054/https://www.pearson.com/us/higher-education/program/Russell-Artificial-Intelligence-A-Modern-Approach-4th-Edition/PGM1263338.html',
79 => 'http://arxiv.org/abs/2109.13916',
80 => 'https://robohub.org/automation-not-domination-how-robots-will-take-over-our-world/',
81 => 'https://intelligence.org/',
82 => 'https://lifeboat.com/ex/ai.shield/',
83 => 'https://www.youtube.com/watch?v=R_sSpPyruj0'
] |
Links in the page, before the edit (old_links ) | [
0 => 'http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T',
1 => 'http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/',
2 => 'http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV',
3 => 'http://www.livescience.com/49419-artificial-intelligence-dangers-letter.html',
4 => 'http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/',
5 => 'http://arxiv.org/abs/2109.13916',
6 => 'http://futureoflife.org/ai-open-letter',
7 => 'http://singinst.org/upload/CEV.html',
8 => 'http://www.singinst.org/ourresearch/presentations/',
9 => 'http://worldcat.org/oclc/1237420037',
10 => 'http://www.leanstaff.co.uk/robot-apocalypse/',
11 => 'https://www.bbc.com/news/technology-36472140',
12 => 'https://www.csmonitor.com/Science/2016/1028/A-female-Frankenstein-would-lead-to-humanity-s-extinction-say-scientists',
13 => 'https://lifeboat.com/ex/ai.shield/',
14 => 'https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/',
15 => 'https://nickbostrom.com/papers/survey.pdf',
16 => 'https://nickbostrom.com/superintelligentwill.pdf',
17 => 'https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html',
18 => 'https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html',
19 => 'https://www.pearson.com/us/higher-education/program/Russell-Artificial-Intelligence-A-Modern-Approach-4th-Edition/PGM1263338.html',
20 => 'https://www.popsci.com/robot-uprising-enlightenment-now/',
21 => 'https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/',
22 => 'https://www.seattletimes.com/business/technology/ai-101-what-is-artificial-intelligence-and-where-is-it-going/',
23 => 'https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work',
24 => 'https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf',
25 => 'https://www.youtube.com/watch?v=R_sSpPyruj0',
26 => 'https://pubmed.ncbi.nlm.nih.gov/25484946',
27 => 'https://pubmed.ncbi.nlm.nih.gov/30910965',
28 => 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4247417',
29 => 'https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6452673',
30 => 'https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future',
31 => 'https://web.archive.org/web/20070206060938/http://www.singinst.org/ourresearch/presentations/',
32 => 'https://web.archive.org/web/20120615203944/http://singinst.org/upload/CEV.html',
33 => 'https://web.archive.org/web/20140706110100/http://www.defenseone.com/technology/2014/04/why-there-will-be-robot-uprising/82783/',
34 => 'https://web.archive.org/web/20150129183607/http://www.bbc.co.uk/news/31047780',
35 => 'https://web.archive.org/web/20150207042806/http://www.ft.com/cms/s/0/3d2c2f12-99e9-11e4-93c1-00144feabdc0.html#axzz3TNL9lxJV',
36 => 'https://web.archive.org/web/20150507173028/http://intelligence.org/files/IE-ME.pdf',
37 => 'https://web.archive.org/web/20151002023652/http://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html',
38 => 'https://web.archive.org/web/20160517215840/https://www.opendemocracy.net/can-europe-make-it/francesca-bria/robot-economy-full-automation-work-future',
39 => 'https://web.archive.org/web/20160611042244/http://www.bbc.com/news/technology-36472140',
40 => 'https://web.archive.org/web/20160625161447/http://wire.novaramedia.com/2015/03/4-reasons-why-technological-unemployment-might-really-be-different-this-time/',
41 => 'https://web.archive.org/web/20160921061131/https://www.newscientist.com/article/2089606-checklist-of-worst-case-scenarios-could-help-prepare-for-evil-ai/',
42 => 'https://web.archive.org/web/20170816061548/http://www.businessinsider.com/robots-will-steal-your-job-citi-ai-increase-unemployment-inequality-2016-2?r=UK&IR=T',
43 => 'https://web.archive.org/web/20171018073852/http://www.leanstaff.co.uk/robot-apocalypse/',
44 => 'https://web.archive.org/web/20180308100411/https://www.livescience.com/49419-artificial-intelligence-dangers-letter.html',
45 => 'https://web.archive.org/web/20190101014340/https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv',
46 => 'https://web.archive.org/web/20190329094536/https://futureoflife.org/ai-open-letter/',
47 => 'https://web.archive.org/web/20190403203821/https://www.theguardian.com/business/2013/feb/19/rise-of-robots-future-of-work',
48 => 'https://web.archive.org/web/20200306021915/https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html',
49 => 'https://web.archive.org/web/20200314092540/https://www.sciencefriday.com/segments/the-origin-of-the-word-robot/',
50 => 'https://web.archive.org/web/20200417183307/https://www.nytimes.com/2017/06/24/opinion/sunday/artificial-intelligence-economic-inequality.html',
51 => 'https://web.archive.org/web/20200421221918/https://www.nytimes.com/2018/03/19/technology/uber-driverless-fatality.html',
52 => 'https://web.archive.org/web/20200421232439/https://www.seattletimes.com/business/technology/ai-101-what-is-artificial-intelligence-and-where-is-it-going/',
53 => 'https://web.archive.org/web/20200720164306/https://www.popsci.com/robot-uprising-enlightenment-now/',
54 => 'https://web.archive.org/web/20201010072132/https://selfawaresystems.files.wordpress.com/2008/01/ai_drives_final.pdf',
55 => 'https://web.archive.org/web/20210226203855/https://www.csmonitor.com/Science/2016/1028/A-female-Frankenstein-would-lead-to-humanity-s-extinction-say-scientists',
56 => 'https://web.archive.org/web/20220531142709/https://nickbostrom.com/papers/survey.pdf',
57 => 'https://web.archive.org/web/20220709032134/https://nickbostrom.com/superintelligentwill.pdf',
58 => 'https://web.archive.org/web/20220715195054/https://www.pearson.com/us/higher-education/program/Russell-Artificial-Intelligence-A-Modern-Approach-4th-Edition/PGM1263338.html',
59 => 'https://web.archive.org/web/20230315193114/https://link.springer.com/article/10.1007/s11023-020-09539-2',
60 => 'https://web.archive.org/web/20230315194123/https://worldcat.org/title/1237420037',
61 => 'https://arxiv.org/abs/1707.08476',
62 => 'https://doi.org/10.1007%2F978-3-319-26485-1_33',
63 => 'https://doi.org/10.1007%2F978-3-642-22887-2_48',
64 => 'https://doi.org/10.1007%2Fs11023-012-9281-3',
65 => 'https://doi.org/10.1007%2Fs11023-020-09539-2',
66 => 'https://doi.org/10.1007%2Fs11701-007-0021-2',
67 => 'https://doi.org/10.1007/s11023-020-09539-2',
68 => 'https://doi.org/10.1017%2F9781108616188.008',
69 => 'https://doi.org/10.1073%2Fpnas.1900949116',
70 => 'https://intelligence.org/',
71 => 'https://intelligence.org/files/IE-ME.pdf',
72 => 'https://phys.org/news/2016-02-intelligent-robots-threaten-millions-jobs.html#nRlv',
73 => 'https://phys.org/news/2017-06-ai-good-world-ultra-lifelike-robot.html',
74 => 'https://robohub.org/automation-not-domination-how-robots-will-take-over-our-world/',
75 => 'https://api.semanticscholar.org/CorpusID:210920551',
76 => 'https://api.semanticscholar.org/CorpusID:22007028',
77 => 'https://api.semanticscholar.org/CorpusID:254835485',
78 => 'https://www.worldcat.org/issn/0302-9743',
79 => 'https://www.worldcat.org/issn/1572-8641',
80 => 'https://www.worldcat.org/oclc/1237420037',
81 => 'https://www.worldcat.org/oclc/1303900751',
82 => 'https://www.bbc.co.uk/news/31047780',
83 => 'https://www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html'
] |
Whether or not the change was made through a Tor exit node (tor_exit_node ) | false |
Unix timestamp of change (timestamp ) | '1682310554' |