Jump to content

Ethics of artificial intelligence: Difference between revisions

From Wikipedia, the free encyclopedia
Content deleted Content added
m added more about youth activism
Tags: Reverted Visual edit
History: added reference needed
 
(337 intermediate revisions by more than 100 users not shown)
Line 1: Line 1:
{{short description|Ethical issues specific to AI}}
{{Short description|Challenges related to the responsible development and use of AI}}
{{cs1 config|name-list-style=vanc}}
{{Artificial intelligence}}
{{Artificial intelligence|Philosophy}}
The '''ethics of artificial intelligence''' is the branch of the [[ethics of technology]] specific to [[Artificial intelligence|artificially intelligent]] systems.<ref>{{Cite web|last=Müller|first=Vincent C.|date=30 April 2020|title=Ethics of Artificial Intelligence and Robotics|url=https://plato.stanford.edu/entries/ethics-ai/|access-date=26 September 2020|website=Stanford Encyclopedia of Philosophy|archive-date=10 October 2020|archive-url=https://web.archive.org/web/20201010174108/https://plato.stanford.edu/entries/ethics-ai/|url-status=live}}</ref> It is sometimes divided into a concern with the moral behavior of ''humans'' as they design, make, use and treat artificially intelligent systems, and a concern with the behavior of ''machines,'' in [[machine ethics]]. It also includes the issue of a possible [[Technological singularity|singularity]] due to [[superintelligence|superintelligent AI]].
The [[ethics]] of [[artificial intelligence]] covers a broad range of topics within the field that are considered to have particular ethical stakes.<ref name="Muller-2020">{{Cite web |last=Müller |first=Vincent C. |date=April 30, 2020 |title=Ethics of Artificial Intelligence and Robotics |url=https://plato.stanford.edu/entries/ethics-ai/ |url-status=live |archive-url=https://web.archive.org/web/20201010174108/https://plato.stanford.edu/entries/ethics-ai/ |archive-date=10 October 2020 |access-date= |website=Stanford Encyclopedia of Philosophy}}</ref> This includes [[algorithmic bias]]es, [[Fairness (machine learning)|fairness]], [[automated decision-making]], [[accountability]], [[privacy]], and [[Regulation of artificial intelligence|regulation]].
It also covers various emerging or potential future challenges such as [[machine ethics]] (how to make machines that behave ethically), [[Lethal autonomous weapon|lethal autonomous weapon systems]], [[Artificial intelligence arms race|arms race]] dynamics, [[AI safety]] and [[AI alignment|alignment]], [[technological unemployment]], AI-enabled [[misinformation]], how to treat certain AI systems if they have a [[moral status]] (AI welfare and rights), [[artificial superintelligence]] and [[Existential risk from artificial general intelligence|existential risks]].<ref name="Muller-2020" />


Some application areas may also have particularly important ethical implications, like [[Artificial intelligence in healthcare|healthcare]], education, criminal justice, or the military.
== Ethics fields' approaches ==


===Robot ethics===
== Machine ethics ==
{{Main|Machine ethics|AI alignment}}

Machine ethics (or machine morality) is the field of research concerned with designing [[Moral agency#Artificial Moral Agents|Artificial Moral Agents]] (AMAs), robots or artificially intelligent computers that behave morally or as though moral.<ref name="Andersonweb">{{cite web|last=Anderson|title=Machine Ethics|url=http://uhaweb.hartford.edu/anderson/MachineEthics.html|url-status=live|archive-url=https://web.archive.org/web/20110928233656/https://uhaweb.hartford.edu/anderson/MachineEthics.html|archive-date=28 September 2011|access-date=27 June 2011}}</ref><ref name="Anderson2011">{{Cite book|title=Machine Ethics|date=July 2011|publisher=[[Cambridge University Press]]|isbn=978-0-521-11235-2|editor1-last=Anderson|editor1-first=Michael|editor2-last=Anderson|editor2-first=Susan Leigh}}</ref><ref name="Anderson2006">{{cite journal|last1=Anderson|first1=M.|last2=Anderson|first2=S.L.|date=July 2006|title=Guest Editors' Introduction: Machine Ethics|journal=IEEE Intelligent Systems|volume=21|issue=4|pages=10–11|doi=10.1109/mis.2006.70|s2cid=9570832}}</ref><ref name="Anderson2007">{{cite journal|last1=Anderson|first1=Michael|last2=Anderson|first2=Susan Leigh|date=15 December 2007|title=Machine Ethics: Creating an Ethical Intelligent Agent|journal=AI Magazine|volume=28|issue=4|page=15|doi=10.1609/aimag.v28i4.2065|s2cid=17033332 }}</ref> To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of [[Agency (philosophy)|agency]], [[Rational agent|rational agency]], [[moral agency]], and artificial agency, which are related to the concept of AMAs.<ref>{{cite journal|last1=Boyles|first1=Robert James M.|date=2017|title=Philosophical Signposts for Artificial Moral Agent Frameworks|url=https://philarchive.org/rec/BOYPSF|journal=Suri|volume=6|issue=2|pages=92–109}}</ref>

There are discussions on creating tests to see if an AI is capable of making [[ethical decision]]s. [[Alan Winfield]] concludes that the [[Turing test]] is flawed and the requirement for an AI to pass the test is too low.<ref name="Winfield-2019">{{Cite journal|last1=Winfield|first1=A. F.|last2=Michael|first2=K.|last3=Pitt|first3=J.|last4=Evers|first4=V.|date=March 2019|title=Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]|journal=Proceedings of the IEEE|volume=107|issue=3|pages=509–517|doi=10.1109/JPROC.2019.2900622|s2cid=77393713|issn=1558-2256|doi-access=free}}</ref> A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical.<ref name="Winfield-2019" /> [[Neuromorphic engineering|Neuromorphic]] AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.<ref>{{cite news|last1=Al-Rodhan|first1=Nayef|date=7 December 2015|title=The Moral Code|url=https://www.foreignaffairs.com/articles/2015-08-12/moral-code|url-status=live|access-date=2017-03-04|archive-url=https://web.archive.org/web/20170305044025/https://www.foreignaffairs.com/articles/2015-08-12/moral-code|archive-date=2017-03-05}}</ref> Similarly, [[whole-brain emulation]] (scanning a brain and simulating it on digital hardware) could also in principle lead to human-like robots, thus capable of moral actions.<ref>{{Cite web |last=Sauer |first=Megan |date=2022-04-08 |title=Elon Musk says humans could eventually download their brains into robots — and Grimes thinks Jeff Bezos would do it |url=https://www.cnbc.com/2022/04/08/elon-musk-humans-could-eventually-download-their-brains-into-robots.html |access-date=2024-04-07 |website=CNBC |language=en |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925013113/https://www.cnbc.com/2022/04/08/elon-musk-humans-could-eventually-download-their-brains-into-robots.html |url-status=live }}</ref> And [[large language model]]s are capable of approximating human moral judgments.<ref>{{Cite web |last=Anadiotis |first=George |date=April 4, 2022 |title=Massaging AI language models for fun, profit and ethics |url=https://www.zdnet.com/article/massaging-ai-language-models-for-fun-profit-and-ethics/ |access-date=2024-04-07 |website=ZDNET |language=en |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925013214/https://www.zdnet.com/article/massaging-ai-language-models-for-fun-profit-and-ethics/ |url-status=live }}</ref> Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc.

In ''Moral Machines: Teaching Robots Right from Wrong'',<ref name="Wallach2008">{{Cite book|last1=Wallach|first1=Wendell|title=Moral Machines: Teaching Robots Right from Wrong|last2=Allen|first2=Colin|date=November 2008|publisher=[[Oxford University Press]]|isbn=978-0-19-537404-9|location=USA <!--url=http://www.ttu.ee/public/m/mart-murdvee/Techno-Psy/Wallach_Allen_2008_Moral_Machines_-_Teaching_Robots_Right_from_Wrong.pdf-->}}</ref> [[Wendell Wallach]] and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern [[Normative ethics|normative theory]] and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific [[List of machine learning algorithms|learning algorithms]] to use in machines. For simple decisions, [[Nick Bostrom]] and [[Eliezer Yudkowsky]] have argued that [[decision tree]]s (such as [[ID3 algorithm|ID3]]) are more transparent than [[Artificial neural network|neural networks]] and [[genetic algorithm]]s,<ref>{{cite web|last1=Bostrom|first1=Nick|author-link1=Nick Bostrom|last2=Yudkowsky|first2=Eliezer|author-link2=Eliezer Yudkowsky|year=2011|title=The Ethics of Artificial Intelligence|url=http://www.nickbostrom.com/ethics/artificial-intelligence.pdf|url-status=live|archive-url=https://web.archive.org/web/20160304015020/http://www.nickbostrom.com/ethics/artificial-intelligence.pdf|archive-date=2016-03-04|access-date=2011-06-22|work=Cambridge Handbook of Artificial Intelligence|publisher=[[Cambridge Press]]}}</ref> while Chris Santos-Lang argued in favor of [[machine learning]] on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "[[Hacker culture|hackers]]".<ref name="SantosLang2002">{{cite web|last=Santos-Lang|first=Chris|year=2002|title=Ethics for Artificial Intelligences|url=http://santoslang.wordpress.com/article/ethics-for-artificial-intelligences-3iue30fi4gfq9-1|url-status=live|archive-url=https://web.archive.org/web/20141225093359/http://santoslang.wordpress.com/article/ethics-for-artificial-intelligences-3iue30fi4gfq9-1/|archive-date=2014-12-25|access-date=2015-01-04}}</ref>

=== Robot ethics ===
{{Main|Robot ethics}}
{{Main|Robot ethics}}


The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.<ref name=Veruggio2002>{{cite journal | title = The Roboethics Roadmap | page = 2 | author = Veruggio, Gianmarco | publisher = Scuola di Robotica | year = 2011 |citeseerx = 10.1.1.466.2810|journal=EURON Roboethics Atelier}}</ref> Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software.<ref>{{Citation|last=Müller|first=Vincent C.|title=Ethics of Artificial Intelligence and Robotics|date=2020|url=https://plato.stanford.edu/archives/win2020/entries/ethics-ai/|encyclopedia=The Stanford Encyclopedia of Philosophy|editor-last=Zalta|editor-first=Edward N.|edition=Winter 2020|publisher=Metaphysics Research Lab, Stanford University|access-date=2021-03-18}}</ref> Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.
The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.<ref name="Veruggio2002">{{cite journal |author=Veruggio, Gianmarco |year=2011 |title=The Roboethics Roadmap |journal=EURON Roboethics Atelier |publisher=Scuola di Robotica |page=2 |citeseerx=10.1.1.466.2810}}</ref> Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software.<ref>{{Citation |last=Müller |first=Vincent C. |title=Ethics of Artificial Intelligence and Robotics |date=2020 |url=https://plato.stanford.edu/archives/win2020/entries/ethics-ai/ |encyclopedia=The Stanford Encyclopedia of Philosophy |editor-last=Zalta |editor-first=Edward N. |access-date=2021-03-18 |archive-url=https://web.archive.org/web/20210412140022/https://plato.stanford.edu/archives/win2020/entries/ethics-ai/ |url-status=live |edition=Winter 2020 |publisher=Metaphysics Research Lab, Stanford University |archive-date=2021-04-12}}</ref> Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.


===Machine ethics===
=== Ethical principles ===
In the review of 84<ref name="Jobin-2020">{{cite journal|last1=Jobin|first1=Anna|last2=Ienca|first2=Marcello|last3=Vayena|first3=Effy|author-link3=Effy Vayena|date=2 September 2020|title=The global landscape of AI ethics guidelines|journal=Nature|volume=1|issue=9|pages=389–399|doi=10.1038/s42256-019-0088-2|arxiv=1906.11668|s2cid=201827642}}</ref> ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, [[Beneficence (ethics)|beneficence]], freedom and autonomy, trust, sustainability, dignity, and [[solidarity]].<ref name="Jobin-2020"/>
{{Main|Machine ethics}}
Machine ethics (or machine morality) is the field of research concerned with designing [[Moral agency#Artificial moral agents|Artificial Moral Agents]] (AMAs), robots or artificially intelligent computers that behave morally or as though moral.<ref name="Andersonweb">{{cite web|last=Anderson|title=Machine Ethics|url=http://uhaweb.hartford.edu/anderson/MachineEthics.html|url-status=live|archive-url=https://web.archive.org/web/20110928233656/http://uhaweb.hartford.edu/anderson/MachineEthics.html|archive-date=28 September 2011|access-date=27 June 2011}}</ref><ref name="Anderson2011">{{Cite book|title=Machine Ethics|date=July 2011|publisher=[[Cambridge University Press]]|isbn=978-0-521-11235-2|editor1-last=Anderson|editor1-first=Michael|editor2-last=Anderson|editor2-first=Susan Leigh}}</ref><ref name="Anderson2006">{{cite journal|last1=Anderson|first1=M.|last2=Anderson|first2=S.L.|date=July 2006|title=Guest Editors' Introduction: Machine Ethics|journal=IEEE Intelligent Systems|volume=21|issue=4|pages=10–11|doi=10.1109/mis.2006.70|s2cid=9570832}}</ref><ref name="Anderson2007">{{cite journal|last1=Anderson|first1=Michael|last2=Anderson|first2=Susan Leigh|date=15 December 2007|title=Machine Ethics: Creating an Ethical Intelligent Agent|journal=AI Magazine|volume=28|issue=4|page=15|doi=10.1609/aimag.v28i4.2065}}</ref> To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of [[Agency (philosophy)|agency]], [[Rational agent|rational agency]], [[moral agency]], and artificial agency, which are related to the concept of AMAs.<ref>{{cite journal|last1=Boyles|first1=Robert James M.|date=2017|title=Philosophical Signposts for Artificial Moral Agent Frameworks|url=https://philarchive.org/rec/BOYPSF|journal=Suri|volume=6|issue=2|pages=92–109}}</ref>


[[Luciano Floridi]] and Josh Cowls created an ethical framework of AI principles set by four principles of [[bioethics]] ([[Beneficence (ethics)|beneficence]], [[non-maleficence]], [[autonomy]] and [[justice]]) and an additional AI enabling principle – explicability.<ref>{{cite journal|last1=Floridi|first1=Luciano|last2=Cowls|first2=Josh|date=2 July 2019|title=A Unified Framework of Five Principles for AI in Society|journal=Harvard Data Science Review|volume=1|doi=10.1162/99608f92.8cd550d1|s2cid=198775713|doi-access=free}}</ref>
[[Isaac Asimov]] considered the issue in the 1950s in his ''[[I, Robot]]''. At the insistence of his editor [[John W. Campbell Jr.]], he proposed the [[Three Laws of Robotics]] to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior. His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.<ref name="Asimov2008">{{Cite book|last=Asimov|first=Isaac|title=I, Robot|title-link=I, Robot|publisher=Bantam|year=2008|isbn=978-0-553-38256-3|location=New York}}</ref> More recently, academics and many governments have challenged the idea that AI can itself be held accountable.<ref name="lacuna">{{cite journal|last1=Bryson|first1=Joanna|last2=Diamantis|first2=Mihailis|last3=Grant|first3=Thomas|date=September 2017|title=Of, for, and by the people: the legal lacuna of synthetic persons|journal=Artificial Intelligence and Law|volume=25|issue=3|pages=273–291|doi=10.1007/s10506-017-9214-9|ref=lacuna|doi-access=free}}</ref> A panel convened by the [[United Kingdom]] in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.<ref name="principles">{{cite web|date=September 2010|title=Principles of robotics|url=https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/|url-status=live|archive-url=https://web.archive.org/web/20180401004346/https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/|archive-date=1 April 2018|access-date=10 January 2019|publisher=UK's EPSRC|ref=principles}}</ref>


== Current challenges ==
In 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of [[Lausanne]] in [[Switzerland]], robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.<ref>[http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other Evolving Robots Learn To Lie To Each Other] {{Webarchive|url=https://web.archive.org/web/20090828105728/http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other|date=2009-08-28}}, Popular Science, August 18, 2009</ref>


=== Algorithmic biases ===
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.<ref name="Call for debate on killer robots" /> The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.<ref>[http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm Science New Navy-funded Report Warns of War Robots Going "Terminator"] {{Webarchive|url=https://web.archive.org/web/20090728101106/http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm|date=2009-07-28}}, by Jason Mick (Blog), dailytech.com, February 17, 2009.</ref><ref name="engadget.com" /> The President of the [[Association for the Advancement of Artificial Intelligence]] has commissioned a study to look at this issue.<ref>[http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study] {{Webarchive|url=https://web.archive.org/web/20090828214741/http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm|date=2009-08-28}}, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.</ref> They point to programs like the [[Language Acquisition Device (computer)|Language Acquisition Device]] which can emulate human interaction.
{{Main|Algorithmic bias}}
[[File:Kamala Harris speaks about racial bias in artificial intelligence - 2020-04-23.ogg|thumb|[[Kamala Harris]] speaking about racial bias in artificial intelligence in 2020]]AI has become increasingly inherent in facial and [[Speech recognition|voice recognition]] systems. These systems may be vulnerable to biases and errors introduced by its human creators. Notably, the data used to train them can have biases.<ref>{{Cite web |last=Gabriel |first=Iason |date=2018-03-14 |title=The case for fairer algorithms – Iason Gabriel |url=https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8 |url-status=live |archive-url=https://web.archive.org/web/20190722080401/https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8 |archive-date=2019-07-22 |access-date=2019-07-22 |website=Medium}}</ref><ref>{{Cite web |date=10 December 2016 |title=5 unexpected sources of bias in artificial intelligence |url=https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/ |url-status=live |archive-url=https://web.archive.org/web/20210318060659/https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/ |archive-date=2021-03-18 |access-date=2019-07-22 |website=TechCrunch}}</ref><ref>{{Cite web |last=Knight |first=Will |title=Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead |url=https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ |url-status=live |archive-url=https://web.archive.org/web/20190704224752/https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ |archive-date=2019-07-04 |access-date=2019-07-22 |website=MIT Technology Review}}</ref><ref>{{Cite web |last=Villasenor |first=John |date=2019-01-03 |title=Artificial intelligence and bias: Four key challenges |url=https://www.brookings.edu/blog/techtank/2019/01/03/artificial-intelligence-and-bias-four-key-challenges/ |url-status=live |archive-url=https://web.archive.org/web/20190722080355/https://www.brookings.edu/blog/techtank/2019/01/03/artificial-intelligence-and-bias-four-key-challenges/ |archive-date=2019-07-22 |access-date=2019-07-22 |website=Brookings}}</ref> For instance, [[Facial recognition system|facial recognition]] algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender;<ref>{{cite news |last1=Lohr |first1=Steve |date=9 February 2018 |title=Facial Recognition Is Accurate, if You're a White Guy |url=https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html |url-status=live |archive-url=https://web.archive.org/web/20190109131036/https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html |archive-date=9 January 2019 |access-date=29 May 2019 |work=The New York Times}}</ref> these AI systems were able to detect the gender of white men more accurately than the gender of men of darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.<ref>{{cite journal |last1=Koenecke |first1=Allison |author-link=Allison Koenecke |last2=Nam |first2=Andrew |last3=Lake |first3=Emily |last4=Nudell |first4=Joe |last5=Quartey |first5=Minnie |last6=Mengesha |first6=Zion |last7=Toups |first7=Connor |last8=Rickford |first8=John R. |last9=Jurafsky |first9=Dan |last10=Goel |first10=Sharad |date=7 April 2020 |title=Racial disparities in automated speech recognition |journal=Proceedings of the National Academy of Sciences |volume=117 |issue=14 |pages=7684–7689 |bibcode=2020PNAS..117.7684K |doi=10.1073/pnas.1915768117 |pmc=7149386 |pmid=32205437 |doi-access=free}}</ref>


The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system.<ref>{{Cite journal |last1=Ntoutsi |first1=Eirini |last2=Fafalios |first2=Pavlos |last3=Gadiraju |first3=Ujwal |last4=Iosifidis |first4=Vasileios |last5=Nejdl |first5=Wolfgang |last6=Vidal |first6=Maria-Esther |last7=Ruggieri |first7=Salvatore |last8=Turini |first8=Franco |last9=Papadopoulos |first9=Symeon |last10=Krasanakis |first10=Emmanouil |last11=Kompatsiaris |first11=Ioannis |last12=Kinder-Kurlanda |first12=Katharina |last13=Wagner |first13=Claudia |last14=Karimi |first14=Fariba |last15=Fernandez |first15=Miriam |date=May 2020 |title=Bias in data-driven artificial intelligence systems—An introductory survey |url=https://wires.onlinelibrary.wiley.com/doi/10.1002/widm.1356 |url-status=live |journal=WIREs Data Mining and Knowledge Discovery |language=en |volume=10 |issue=3 |doi=10.1002/widm.1356 |issn=1942-4787 |archive-url=https://web.archive.org/web/20240925013154/https://wires.onlinelibrary.wiley.com/doi/10.1002/widm.1356 |archive-date=2024-09-25 |access-date=2023-12-14}}</ref> For instance, [[Amazon (company)|Amazon]] terminated their use of [[Artificial intelligence in hiring|AI hiring and recruitment]] because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that included mostly male candidates. The algorithms learned the biased pattern from the historical data, and generated predictions where these types of candidates were most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates.<ref>{{Cite news |date=2018-10-10 |title=Amazon scraps secret AI recruiting tool that showed bias against women |url=https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G |url-status=live |archive-url=https://web.archive.org/web/20190527181625/https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G |archive-date=2019-05-27 |access-date=2019-05-29 |work=Reuters}}</ref> Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.<ref>{{cite journal |last1=Friedman |first1=Batya |last2=Nissenbaum |first2=Helen |date=July 1996 |title=Bias in computer systems |journal=ACM Transactions on Information Systems |volume=14 |issue=3 |pages=330–347 |doi=10.1145/230538.230561 |s2cid=207195759 |doi-access=free}}</ref> In [[natural language processing]], problems can arise from the [[text corpus]]—the source material the algorithm uses to learn about the relationships between different words.<ref>{{Cite web |title=Eliminating bias in AI |url=https://techxplore.com/news/2019-07-bias-ai.html |url-status=live |archive-url=https://web.archive.org/web/20190725200844/https://techxplore.com/news/2019-07-bias-ai.html |archive-date=2019-07-25 |access-date=2019-07-26 |website=techxplore.com}}</ref>
[[Vernor Vinge]] has suggested that a moment may come when some computers are smarter than humans. He calls this "[[Technological singularity|the Singularity]]."<ref name="nytimes july09">{{cite news|last1=Markoff|first1=John|date=25 July 2009|title=Scientists Worry Machines May Outsmart Man|work=The New York Times|url=https://www.nytimes.com/2009/07/26/science/26robot.html|url-status=live|access-date=24 February 2017|archive-url=https://web.archive.org/web/20170225202201/http://www.nytimes.com/2009/07/26/science/26robot.html|archive-date=25 February 2017}}</ref> He suggests that it may be somewhat or possibly very dangerous for humans.<ref>[http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html The Coming Technological Singularity: How to Survive in the Post-Human Era] {{Webarchive|url=https://web.archive.org/web/20070101133646/http://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html|date=2007-01-01}}, by Vernor Vinge, Department of Mathematical Sciences, San Diego State University, (c) 1993 by Vernor Vinge.</ref> This is discussed by a philosophy called [[Singularitarianism]]. The [[Machine Intelligence Research Institute]] has suggested a need to build "[[Friendly AI]]", meaning that the advances which are already occurring with AI should also include an effort to make AI intrinsically friendly and humane.<ref>[http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html Article at Asimovlaws.com] {{webarchive|url=https://web.archive.org/web/20120524150856/http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html|date=May 24, 2012}}, July 2004, accessed 7/27/09.</ref>


Large companies such as IBM, Google, etc. that provide significant funding for research and development<ref>{{Cite journal |last1=Abdalla |first1=Mohamed |last2=Wahle |first2=Jan Philip |last3=Ruas |first3=Terry |last4=Névéol |first4=Aurélie |last5=Ducel |first5=Fanny |last6=Mohammad |first6=Saif |last7=Fort |first7=Karen |date=2023 |editor-last=Rogers |editor-first=Anna |editor2-last=Boyd-Graber |editor2-first=Jordan |editor3-last=Okazaki |editor3-first=Naoaki |title=The Elephant in the Room: Analyzing the Presence of Big Tech in Natural Language Processing Research |url=https://aclanthology.org/2023.acl-long.734 |url-status=live |journal=Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |location=Toronto, Canada |publisher=Association for Computational Linguistics |pages=13141–13160 |arxiv=2305.02797 |doi=10.18653/v1/2023.acl-long.734 |archive-url=https://web.archive.org/web/20240925013216/https://aclanthology.org/2023.acl-long.734/ |archive-date=2024-09-25 |access-date=2023-11-13 |doi-access=free}}</ref> have made efforts to research and address these biases.<ref>{{Cite web |last=Olson |first=Parmy |title=Google's DeepMind Has An Idea For Stopping Biased AI |url=https://www.forbes.com/sites/parmyolson/2018/03/13/google-deepmind-ai-machine-learning-bias/ |url-status=live |archive-url=https://web.archive.org/web/20190726082959/https://www.forbes.com/sites/parmyolson/2018/03/13/google-deepmind-ai-machine-learning-bias/ |archive-date=2019-07-26 |access-date=2019-07-26 |website=Forbes}}</ref><ref>{{Cite web |title=Machine Learning Fairness {{!}} ML Fairness |url=https://developers.google.com/machine-learning/fairness-overview/ |url-status=live |archive-url=https://web.archive.org/web/20190810004754/https://developers.google.com/machine-learning/fairness-overview/ |archive-date=2019-08-10 |access-date=2019-07-26 |website=Google Developers}}</ref><ref>{{Cite web |title=AI and bias – IBM Research – US |url=https://www.research.ibm.com/5-in-5/ai-and-bias/ |url-status=live |archive-url=https://web.archive.org/web/20190717175957/http://www.research.ibm.com/5-in-5/ai-and-bias/ |archive-date=2019-07-17 |access-date=2019-07-26 |website=www.research.ibm.com}}</ref> One potential solution is to create documentation for the data used to train AI systems.<ref>{{cite journal |last1=Bender |first1=Emily M. |last2=Friedman |first2=Batya |date=December 2018 |title=Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science |journal=Transactions of the Association for Computational Linguistics |volume=6 |pages=587–604 |doi=10.1162/tacl_a_00041 |doi-access=free}}</ref><ref>{{cite arXiv |eprint=1803.09010 |class=cs.DB |first1=Timnit |last1=Gebru |first2=Jamie |last2=Morgenstern |title=Datasheets for Datasets |date=2018 |last3=Vecchione |first3=Briana |last4=Vaughan |first4=Jennifer Wortman |last5=Wallach |first5=Hanna |author-link5=Hanna Wallach |last6=Daumé III |first6=Hal |last7=Crawford |first7=Kate}}</ref> [[Process mining]] can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.<ref>{{Cite web |last=Pery |first=Andrew |date=2021-10-06 |title=Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities |url=https://deepai.org/publication/trustworthy-artificial-intelligence-and-process-mining-challenges-and-opportunities |url-status=live |archive-url=https://web.archive.org/web/20220218200006/https://deepai.org/publication/trustworthy-artificial-intelligence-and-process-mining-challenges-and-opportunities |archive-date=2022-02-18 |access-date=2022-02-18 |website=DeepAI}}</ref>
There are discussion on creating tests to see if an AI is capable of making [[ethical decision]]s. Alan Winfield concludes that the [[Turing test]] is flawed and the requirement for an AI to pass the test is too low.<ref name=":0">{{Cite journal|last1=Winfield|first1=A. F.|last2=Michael|first2=K.|last3=Pitt|first3=J.|last4=Evers|first4=V.|date=March 2019|title=Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]|url=https://ieeexplore.ieee.org/document/8662743|url-status=live|journal=Proceedings of the IEEE|volume=107|issue=3|pages=509–517|doi=10.1109/JPROC.2019.2900622|s2cid=77393713|issn=1558-2256|archive-url=https://web.archive.org/web/20201102031635/https://ieeexplore.ieee.org/document/8662743|archive-date=2020-11-02|access-date=2020-11-21}}</ref> A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical.<ref name=":0" />


The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.<ref>{{Cite web |last=Knight |first=Will |title=Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead |url=https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ |url-status=live |archive-url=https://web.archive.org/web/20190704224752/https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/ |archive-date=2019-07-04 |access-date=2019-07-26 |website=MIT Technology Review}}</ref> Some open-sourced tools are looking to bring more awareness to AI biases.<ref>{{Cite web |title=Where in the World is AI? Responsible & Unethical AI Examples |url=https://map.ai-global.org/ |url-status=live |archive-url=https://web.archive.org/web/20201031034143/https://map.ai-global.org/ |archive-date=2020-10-31 |access-date=2020-10-28}}</ref> However, there are also limitations to the current landscape of [[Fairness (machine learning)#Limitations|fairness in AI]], due to the intrinsic ambiguities in the concept of [[discrimination]], both at the philosophical and legal level.<ref>{{cite journal |last1=Ruggieri |first1=Salvatore |last2=Alvarez |first2=Jose M. |last3=Pugnana |first3=Andrea |last4=State |first4=Laura |last5=Turini |first5=Franco |date=2023-06-26 |title=Can We Trust Fair-AI? |journal=Proceedings of the AAAI Conference on Artificial Intelligence |publisher=Association for the Advancement of Artificial Intelligence (AAAI) |volume=37 |issue=13 |pages=15421–15430 |doi=10.1609/aaai.v37i13.26798 |issn=2374-3468 |s2cid=259678387 |doi-access=free |hdl-access=free |hdl=11384/136444}}</ref><ref name="Buyl De Bie p.2">{{cite journal |last1=Buyl |first1=Maarten |last2=De Bie |first2=Tijl |date=2022 |title=Inherent Limitations of AI Fairness |journal=Communications of the ACM |volume=67 |issue=2 |pages=48–55 |arxiv=2212.06495 |doi=10.1145/3624700 |hdl=1854/LU-01GMNH04RGNVWJ730BJJXGCY99}}</ref><ref>{{cite arXiv |eprint=2311.12435 |class=cs.AI |first1=Alessandro |last1=Castelnovo |first2=Nicole |last2=Inverardi |title=Fair Enough? A map of the current limitations of the requirements to have "fair" algorithms |date=2023 |last3=Nanino |first3=Gabriele |last4=Penco |first4=Ilaria Giuseppina |last5=Regoli |first5=Daniele}}</ref>
In 2009, academics and technical experts attended a conference organized by the [[Association for the Advancement of Artificial Intelligence]] to discuss the potential impact of robots and computers and the impact of the hypothetical possibility that they could become self-sufficient and able to make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard. They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence." They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.<ref name="nytimes july09" />


Facial recognition was shown to be biased against those with darker skin tones. AI systems may be less accurate for black people, as was the case in the development of an AI-based [[Pulse oximetry|pulse oximeter]] that overestimated blood oxygen levels in patients with darker skin, causing issues with their [[Hypoxia (medicine)|hypoxia]] treatment.<ref>{{Cite journal |last1=Federspiel |first1=Frederik |last2=Mitchell |first2=Ruth |last3=Asokan |first3=Asha |last4=Umana |first4=Carlos |last5=McCoy |first5=David |date=May 2023 |title=Threats by artificial intelligence to human health and human existence |url=http://dx.doi.org/10.1136/bmjgh-2022-010435 |url-status=live |journal=BMJ Global Health |volume=8 |issue=5 |pages=e010435 |doi=10.1136/bmjgh-2022-010435 |issn=2059-7908 |pmc=10186390 |pmid=37160371 |archive-url=https://web.archive.org/web/20240925013122/https://gh.bmj.com/content/8/5/e010435 |archive-date=2024-09-25 |access-date=2024-04-21}}</ref> Oftentimes the systems are able to easily detect the faces of white people while being unable to register the faces of people who are black. This has led to the ban of police usage of AI materials or software in some [[U.S. states]]. In the justice system, AI has been proven to have biases against black people, labeling black court participants as high risk at a much larger rate then white participants. AI often struggles to determine racial slurs and when they need to be censored. It struggles to determine when certain words are being used as a slur and when it is being used culturally.<ref name="Spindler-20232">{{Citation |last=Spindler |first=Gerald |title=Different approaches for liability of Artificial Intelligence – Pros and Cons |date=2023 |work=Liability for AI |pages=41–96 |url=http://dx.doi.org/10.5771/9783748942030-41 |access-date=2023-12-14 |archive-url=https://web.archive.org/web/20240925013122/https://www.nomos-elibrary.de/10.5771/9783748942030/liability-for-ai?page=1 |archive-date=2024-09-25 |url-status=live |publisher=Nomos Verlagsgesellschaft mbH & Co. KG |doi=10.5771/9783748942030-41 |isbn=978-3-7489-4203-0}}</ref> The reason for these biases is that AI pulls information from across the internet to influence its responses in each situation. For example, if a facial recognition system was only tested on people who were white, it would make it much harder for it to interpret the facial structure and tones of other races and [[Ethnicity|ethnicities]]. Biases often stem from the training data rather than the [[algorithm]] itself, notably when the data represents past human decisions.<ref>{{Cite journal |last=Manyika |first=James |date=2022 |title=Getting AI Right: Introductory Notes on AI & Society |journal=Daedalus |volume=151 |issue=2 |pages=5–27 |doi=10.1162/daed_e_01897 |issn=0011-5266 |doi-access=free}}</ref>
However, there is one technology in particular that could truly bring the possibility of robots with moral competence to reality. In a paper on the acquisition of moral values by robots, [[Nayef Al-Rodhan]] mentions the case of [[Neuromorphic engineering|neuromorphic]] chips, which aim to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.<ref>{{cite news|last1=Al-Rodhan|first1=Nayef|date=7 December 2015|title=The Moral Code|url=https://www.foreignaffairs.com/articles/2015-08-12/moral-code|url-status=live|access-date=2017-03-04|archive-url=https://web.archive.org/web/20170305044025/https://www.foreignaffairs.com/articles/2015-08-12/moral-code|archive-date=2017-03-05}}</ref> Robots embedded with neuromorphic technology could learn and develop knowledge in a uniquely humanlike way. Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, a pro-survival attitude, hesitation etc.


[[Injustice]] in the use of AI is much harder to eliminate within healthcare systems, as oftentimes diseases and conditions can affect different races and genders differently. This can lead to confusion as the AI may be making decisions based on statistics showing that one patient is more likely to have problems due to their gender or race.<ref>{{Cite journal |last1=Imran |first1=Ali |last2=Posokhova |first2=Iryna |last3=Qureshi |first3=Haneya N. |last4=Masood |first4=Usama |last5=Riaz |first5=Muhammad Sajid |last6=Ali |first6=Kamran |last7=John |first7=Charles N. |last8=Hussain |first8=MD Iftikhar |last9=Nabeel |first9=Muhammad |date=2020-01-01 |title=AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app |journal=Informatics in Medicine Unlocked |volume=20 |pages=100378 |doi=10.1016/j.imu.2020.100378 |issn=2352-9148 |pmc=7318970 |pmid=32839734}}</ref> This can be perceived as a bias because each patient is a different case, and AI is making decisions based on what it is programmed to group that individual into. This leads to a discussion about what should be considered a biased decision in the distribution of treatment. While it is known that there are differences in how diseases and injuries affect different genders and races, there is a discussion on whether it is fairer to incorporate this into healthcare treatments, or to examine each patient without this knowledge. In modern society there are certain tests for diseases, such as [[breast cancer]], that are recommended to certain groups of people over others because they are more likely to contract the disease in question. If AI implements these statistics and applies them to each patient, it could be considered biased.<ref>{{Cite journal |last1=Cirillo |first1=Davide |last2=Catuara-Solarz |first2=Silvina |last3=Morey |first3=Czuee |last4=Guney |first4=Emre |last5=Subirats |first5=Laia |last6=Mellino |first6=Simona |last7=Gigante |first7=Annalisa |last8=Valencia |first8=Alfonso |last9=Rementeria |first9=María José |last10=Chadha |first10=Antonella Santuccione |last11=Mavridis |first11=Nikolaos |date=2020-06-01 |title=Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare |journal=npj Digital Medicine |language=en |volume=3 |issue=1 |page=81 |doi=10.1038/s41746-020-0288-5 |issn=2398-6352 |pmc=7264169 |pmid=32529043 |doi-access=free}}</ref>
In ''Moral Machines: Teaching Robots Right from Wrong'',<ref name="Wallach2008">{{Cite book|last1=Wallach|first1=Wendell|title=Moral Machines: Teaching Robots Right from Wrong|last2=Allen|first2=Colin|date=November 2008|publisher=[[Oxford University Press]]|isbn=978-0-19-537404-9|location=USA <!--url=http://www.ttu.ee/public/m/mart-murdvee/Techno-Psy/Wallach_Allen_2008_Moral_Machines_-_Teaching_Robots_Right_from_Wrong.pdf-->}}</ref> Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern [[Normative ethics|normative theory]] and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific [[List of machine learning algorithms|learning algorithms]] to use in machines. [[Nick Bostrom]] and [[Eliezer Yudkowsky]] have argued for [[decision tree]]s (such as [[ID3 algorithm|ID3]]) over [[Artificial neural network|neural networks]] and [[genetic algorithm]]s on the grounds that decision trees obey modern social norms of transparency and predictability (e.g. ''[[stare decisis]]''),<ref>{{cite web|last1=Bostrom|first1=Nick|author-link1=Nick Bostrom|last2=Yudkowsky|first2=Eliezer|author-link2=Eliezer Yudkowsky|year=2011|title=The Ethics of Artificial Intelligence|url=http://www.nickbostrom.com/ethics/artificial-intelligence.pdf|url-status=live|archive-url=https://web.archive.org/web/20160304015020/http://www.nickbostrom.com/ethics/artificial-intelligence.pdf|archive-date=2016-03-04|access-date=2011-06-22|work=Cambridge Handbook of Artificial Intelligence|publisher=[[Cambridge Press]]}}</ref> while Chris Santos-Lang argued in the opposite direction on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "[[Hacker culture|hackers]]".<ref name="SantosLang2002">{{cite web|last=Santos-Lang|first=Chris|year=2002|title=Ethics for Artificial Intelligences|url=http://santoslang.wordpress.com/article/ethics-for-artificial-intelligences-3iue30fi4gfq9-1|url-status=live|archive-url=https://web.archive.org/web/20141225093359/http://santoslang.wordpress.com/article/ethics-for-artificial-intelligences-3iue30fi4gfq9-1/|archive-date=2014-12-25|access-date=2015-01-04}}</ref>


In criminal justice, the [[COMPAS (software)|COMPAS]] program has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated for accuracy, having the same error rate across racial groups, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk" and half as likely to be falsely flagged as "low-risk".<ref>{{Cite book |last=Christian |first=Brian |title=The alignment problem: machine learning and human values |date=2021 |publisher=W. W. Norton & Company |isbn=978-0-393-86833-3 |edition=First published as a Norton paperback |location=New York, NY}}</ref> Another example is within Google's ads that targeted men with higher paying jobs and women with lower paying jobs. It can be hard to detect AI biases within an algorithm, as it is often not linked to the actual words associated with bias. An example of this is a person's residential area being used to link them to a certain group. This can lead to problems, as oftentimes businesses can avoid legal action through this loophole. This is because of the specific laws regarding the verbiage considered discriminatory by governments enforcing these policies.<ref>{{Cite journal |last1=Ntoutsi |first1=Eirini |last2=Fafalios |first2=Pavlos |last3=Gadiraju |first3=Ujwal |last4=Iosifidis |first4=Vasileios |last5=Nejdl |first5=Wolfgang |last6=Vidal |first6=Maria-Esther |last7=Ruggieri |first7=Salvatore |last8=Turini |first8=Franco |last9=Papadopoulos |first9=Symeon |last10=Krasanakis |first10=Emmanouil |last11=Kompatsiaris |first11=Ioannis |last12=Kinder-Kurlanda |first12=Katharina |last13=Wagner |first13=Claudia |last14=Karimi |first14=Fariba |last15=Fernandez |first15=Miriam |date=May 2020 |title=Bias in data-driven artificial intelligence systems—An introductory survey |journal=WIREs Data Mining and Knowledge Discovery |language=en |volume=10 |issue=3 |doi=10.1002/widm.1356 |issn=1942-4787 |doi-access=free}}</ref>
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that don't require a human controller.<ref>{{Cite web|last=Howard|first=Ayanna|title=The Regulation of AI – Should Organizations Be Worried? {{!}} Ayanna Howard|url=https://sloanreview.mit.edu/article/the-regulation-of-ai-should-organizations-be-worried/|url-status=live|archive-url=https://web.archive.org/web/20190814134545/https://sloanreview.mit.edu/article/the-regulation-of-ai-should-organizations-be-worried/|archive-date=2019-08-14|access-date=2019-08-14|website=MIT Sloan Management Review}}</ref>


==== Language bias ====
===Ethics principles of artificial intelligence===
Since current large language models are predominately trained on English-language data, they often present the Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", [[ChatGPT]], as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent.{{Better source needed|reason=The current source is insufficiently reliable, it is a preprint ([[WP:NOTRS]]).|date=January 2024}}<ref name="Luo-2023">{{Cite arXiv |eprint=2303.16281v2 |class=cs.CY |first1=Queenie |last1=Luo |first2=Michael J. |last2=Puett |title=A Perspectival Mirror of the Elephant: Investigating Language Bias on Google, ChatGPT, Wikipedia, and YouTube |date=2023-03-28 |language=en |last3=Smith |first3=Michael D.}}</ref>
In the review of 84<ref name="auto">{{cite journal|last1=Jobin|first1=Anna|last2=Ienca|first2=Marcello|last3=Vayena|first3=Effy|date=2 September 2020|title=The global landscape of AI ethics guidelines|journal=Nature|volume=1|issue=9|pages=389–399|doi=10.1038/s42256-019-0088-2|arxiv=1906.11668|s2cid=201827642}}</ref> ethics guidelines for AI 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, solidarity.<ref name="auto"/>


==== Gender bias ====
[[Luciano Floridi]] and Josh Cowls created an ethical framework of AI principles set by four principles of [[bioethics]] ([[Beneficence (ethics)|beneficence]], [[non-maleficence]], [[autonomy]] and [[justice]]) and an additional AI enabling principle – explicability.<ref>{{cite journal|last1=Floridi|first1=Luciano|last2=Cowls|first2=Josh|date=2 July 2019|title=A Unified Framework of Five Principles for AI in Society|journal=Harvard Data Science Review|volume=1|doi=10.1162/99608f92.8cd550d1|s2cid=198775713}}</ref>
Large language models often reinforces [[gender stereotypes]], assigning roles and characteristics based on traditional gender norms. For instance, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles.<ref>{{Cite book |last1=Busker |first1=Tony |title=Proceedings of the 16th International Conference on Theory and Practice of Electronic Governance |last2=Choenni |first2=Sunil |last3=Shoae Bargh |first3=Mortaza |date=2023-11-20 |publisher=Association for Computing Machinery |isbn=979-8-4007-0742-1 |series=ICEGOV '23 |location=New York, NY, USA |pages=24–32 |chapter=Stereotypes in ChatGPT: An empirical study |doi=10.1145/3614321.3614325 |doi-access=free}}</ref><ref>{{Cite book |last1=Kotek |first1=Hadas |title=Proceedings of the ACM Collective Intelligence Conference |last2=Dockum |first2=Rikker |last3=Sun |first3=David |date=2023-11-05 |publisher=Association for Computing Machinery |isbn=979-8-4007-0113-9 |series=CI '23 |location=New York, NY, USA |pages=12–24 |chapter=Gender bias and stereotypes in Large Language Models |doi=10.1145/3582269.3615599 |doi-access=free |arxiv=2308.14921}}</ref><ref>{{Cite journal |last1=Federspiel |first1=Frederik |last2=Mitchell |first2=Ruth |last3=Asokan |first3=Asha |last4=Umana |first4=Carlos |last5=McCoy |first5=David |date=May 2023 |title=Threats by artificial intelligence to human health and human existence |journal=BMJ Global Health |language=en |volume=8 |issue=5 |pages=e010435 |doi=10.1136/bmjgh-2022-010435 |pmid=37160371 |issn=2059-7908|pmc=10186390 }}</ref>


====Transparency, accountability, and open source====
==== Political bias ====
Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.<ref>{{Cite journal |last1=Feng |first1=Shangbin |last2=Park |first2=Chan Young |last3=Liu |first3=Yuhan |last4=Tsvetkov |first4=Yulia |date=July 2023 |editor-last=Rogers |editor-first=Anna |editor2-last=Boyd-Graber |editor2-first=Jordan |editor3-last=Okazaki |editor3-first=Naoaki |title=From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models |url=https://aclanthology.org/2023.acl-long.656 |journal=Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) |location=Toronto, Canada |publisher=Association for Computational Linguistics |pages=11737–11762 |arxiv=2305.08283 |doi=10.18653/v1/2023.acl-long.656 |doi-access=free}}</ref><ref>{{Cite journal |last1=Zhou |first1=Karen |last2=Tan |first2=Chenhao |date=December 2023 |editor-last=Bouamor |editor-first=Houda |editor2-last=Pino |editor2-first=Juan |editor3-last=Bali |editor3-first=Kalika |title=Entity-Based Evaluation of Political Bias in Automatic Summarization |url=https://aclanthology.org/2023.findings-emnlp.696 |journal=Findings of the Association for Computational Linguistics: EMNLP 2023 |location=Singapore |publisher=Association for Computational Linguistics |pages=10374–10386 |arxiv=2305.02321 |doi=10.18653/v1/2023.findings-emnlp.696 |doi-access=free |access-date=2023-12-25 |archive-date=2024-04-24 |archive-url=https://web.archive.org/web/20240424141927/https://aclanthology.org/2023.findings-emnlp.696/ |url-status=live }}</ref>
[[Bill Hibbard]] argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.<ref name="AGI-08a">[http://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf Open Source AI.] {{Webarchive|url=https://web.archive.org/web/20160304054930/http://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf |date=2016-03-04 }} Bill Hibbard. 2008 proceedings of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan Franklin.</ref> [[Ben Goertzel]] and David Hart created [[OpenCog]] as an [[Open-source software|open source]] framework for AI development.<ref name="AGI-08b">[http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.621&rep=rep1&type=pdf OpenCog: A Software Framework for Integrative Artificial General Intelligence.] {{Webarchive|url=https://web.archive.org/web/20160304205408/http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.366.621&rep=rep1&type=pdf |date=2016-03-04 }} David Hart and Ben Goertzel. 2008 proceedings of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan Franklin.</ref> [[OpenAI]] is a non-profit AI research company created by [[Elon Musk]], [[Sam Altman]] and others to develop open-source AI beneficial to humanity.<ref name="OpenAI">[https://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ Inside OpenAI, Elon Musk’s Wild Plan to Set Artificial Intelligence Free] {{Webarchive|url=https://web.archive.org/web/20160427162700/http://www.wired.com/2016/04/openai-elon-musk-sam-altman-plan-to-set-artificial-intelligence-free/ |date=2016-04-27 }} Cade Metz, Wired 27 April 2016.</ref> There are numerous other open-source AI developments.


==== Stereotyping ====
Unfortunately, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The [[IEEE]] has a [[Technical standards|standardisation effort]] on AI transparency.<ref name="p7001">{{cite web |title=P7001 – Transparency of Autonomous Systems |url=https://standards.ieee.org/project/7001.html |website=P7001 – Transparency of Autonomous Systems |publisher=IEEE |access-date=10 January 2019 |ref=p7001 |archive-date=10 January 2019 |archive-url=https://web.archive.org/web/20190110133709/https://standards.ieee.org/project/7001.html |url-status=live }}.</ref> The IEEE effort identifies multiple scales of transparency for different users. Further, there is concern that releasing the full capacity of contemporary AI to some organizations may be a public bad, that is, do more damage than good. For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted an extraordinary blog on this topic, asking for government regulation to help determine the right thing to do.<ref name="WiredMS">{{cite magazine |last1=Thurm |first1=Scott |title=MICROSOFT CALLS FOR FEDERAL REGULATION OF FACIAL RECOGNITION |url=https://www.wired.com/story/microsoft-calls-for-federal-regulation-of-facial-recognition/ |magazine=Wired |date=July 13, 2018 |ref=WiredMS |access-date=January 10, 2019 |archive-date=May 9, 2019 |archive-url=https://web.archive.org/web/20190509231338/https://www.wired.com/story/microsoft-calls-for-federal-regulation-of-facial-recognition/ |url-status=live }}</ref>
Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.<ref>{{Cite arXiv |eprint=2305.18189v1 |class=cs.CL |first1=Myra |last1=Cheng |first2=Esin |last2=Durmus |title=Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models |date=2023-05-29 |language=en |last3=Jurafsky |first3=Dan}}</ref>


===Dominance by tech giants===
Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.<ref name="DeloitteGDPR">{{cite web |last1=Bastin |first1=Roland |last2=Wantz |first2=Georges |title=The General Data Protection Regulation Cross-industry innovation |url=https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/technology/lu-general-data-protection-regulation-cross-industry-innovation-062017.pdf |website=Inside magazine |publisher=Deloitte |ref=DeloitteGDPR |date=June 2017 |access-date=2019-01-10 |archive-date=2019-01-10 |archive-url=https://web.archive.org/web/20190110183405/https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/technology/lu-general-data-protection-regulation-cross-industry-innovation-062017.pdf |url-status=live }}</ref> The [[OECD]], [[UN]], [[EU]], and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.<ref>{{Cite web|url=https://news.un.org/en/story/2017/06/558962-un-artificial-intelligence-summit-aims-tackle-poverty-humanitys-grand|title=UN artificial intelligence summit aims to tackle poverty, humanity's 'grand challenges'|date=2017-06-07|website=UN News|access-date=2019-07-26|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726084819/https://news.un.org/en/story/2017/06/558962-un-artificial-intelligence-summit-aims-tackle-poverty-humanitys-grand|url-status=live}}</ref><ref>{{Cite web|url=http://www.oecd.org/going-digital/ai/|title=Artificial intelligence – Organisation for Economic Co-operation and Development|website=www.oecd.org|access-date=2019-07-26|archive-date=2019-07-22|archive-url=https://web.archive.org/web/20190722124751/http://www.oecd.org/going-digital/ai/|url-status=live}}</ref><ref>{{Cite web|url=https://ec.europa.eu/digital-single-market/en/european-ai-alliance|title=The European AI Alliance|last=Anonymous|date=2018-06-14|website=Digital Single Market – European Commission|access-date=2019-07-26|archive-date=2019-08-01|archive-url=https://web.archive.org/web/20190801011543/https://ec.europa.eu/digital-single-market/en/european-ai-alliance|url-status=live}}</ref>
The commercial AI scene is dominated by [[Big Tech]] companies such as [[Alphabet Inc.]], [[Amazon (company)|Amazon]], [[Apple Inc.]], [[Meta Platforms]], and [[Microsoft]].<ref>{{cite web |last1=Hammond |first1=George |title=Big Tech is spending more than VC firms on AI startups |url=https://arstechnica.com/ai/2023/12/big-tech-is-spending-more-than-vc-firms-on-ai-startups/ |website=Ars Technica |language=en-us |date=27 December 2023 |url-status=live |archive-url=https://web.archive.org/web/20240110195706/https://arstechnica.com/ai/2023/12/big-tech-is-spending-more-than-vc-firms-on-ai-startups/ |archive-date= Jan 10, 2024 }}</ref><ref>{{cite web |last1=Wong |first1=Matteo |title=The Future of AI Is GOMA |url=https://www.theatlantic.com/technology/archive/2023/10/big-ai-silicon-valley-dominance/675752/ |website=The Atlantic |language=en |date=24 October 2023 |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20240105020744/https://www.theatlantic.com/technology/archive/2023/10/big-ai-silicon-valley-dominance/675752/ |archive-date= Jan 5, 2024 }}</ref><ref>{{cite news |title=Big tech and the pursuit of AI dominance |url=https://www.economist.com/business/2023/03/26/big-tech-and-the-pursuit-of-ai-dominance |newspaper=The Economist |date=Mar 26, 2023 |url-access=subscription |url-status=live |archive-url=https://web.archive.org/web/20231229021351/https://www.economist.com/business/2023/03/26/big-tech-and-the-pursuit-of-ai-dominance |archive-date= Dec 29, 2023 }}</ref> Some of these players already own the vast majority of existing [[cloud computing|cloud infrastructure]] and [[computing]] power from [[Data center|data centers]], allowing them to entrench further in the marketplace.<ref>{{cite news |last1=Fung |first1=Brian |title=Where the battle to dominate AI may be won |url=https://www.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html |work=CNN Business |date=19 December 2023 |language=en |url-status=live |archive-url=https://web.archive.org/web/20240113053332/https://www.cnn.com/2023/12/19/tech/cloud-competition-and-ai/index.html |archive-date= Jan 13, 2024 }}</ref><ref>{{cite news |last1=Metz |first1=Cade |title=In the Age of A.I., Tech's Little Guys Need Big Friends |url=https://www.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html |work=The New York Times |date=5 July 2023 |access-date=17 July 2024 |archive-date=8 July 2024 |archive-url=https://web.archive.org/web/20240708214644/https://www.nytimes.com/2023/07/05/business/artificial-intelligence-power-data-centers.html |url-status=live }}</ref>


=== Open-source ===
On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its “Policy and investment recommendations for trustworthy Artificial Intelligence”.<ref>{{Cite web|url=https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence|title=Policy and investment recommendations for trustworthy Artificial Intelligence|last=European Commission High-Level Expert Group on AI|date=2019-06-26|website=Shaping Europe’s digital future – European Commission|language=en|access-date=2020-03-16|archive-date=2020-02-26|archive-url=https://web.archive.org/web/20200226023934/https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence|url-status=live}}</ref> This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector. The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.<ref>{{Cite web|url=https://cdt.org/blog/eu-tech-policy-brief-july-2019-recap/|title=EU Tech Policy Brief: July 2019 Recap|website=Center for Democracy & Technology|access-date=2019-08-09|archive-date=2019-08-09|archive-url=https://web.archive.org/web/20190809194057/https://cdt.org/blog/eu-tech-policy-brief-july-2019-recap/|url-status=live}}</ref>
[[Bill Hibbard]] argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.<ref name="AGI-08a">[http://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf Open Source AI.] {{Webarchive|url=https://web.archive.org/web/20160304054930/http://www.ssec.wisc.edu/~billh/g/hibbard_agi_workshop.pdf |date=2016-03-04 }} Bill Hibbard. 2008 [https://agi-conf.org/2008/papers/ proceedings] {{Webarchive|url=https://web.archive.org/web/20240925013117/https://agi-conf.org/2008/papers/ |date=2024-09-25 }} of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan Franklin.</ref> Organizations like [[Hugging Face]]<ref>{{Cite web |last1=Stewart |first1=Ashley |last2=Melton |first2=Monica |title=Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup |url=https://www.businessinsider.com/hugging-face-open-source-ai-approach-2023-12 |access-date=2024-04-07 |website=Business Insider |language=en-US |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925013220/https://www.businessinsider.com/hugging-face-open-source-ai-approach-2023-12 |url-status=live }}</ref> and [[EleutherAI]]<ref>{{Cite web |title=The open-source AI boom is built on Big Tech's handouts. How long will it last? |url=https://www.technologyreview.com/2023/05/12/1072950/open-source-ai-google-openai-eleuther-meta/ |access-date=2024-04-07 |website=MIT Technology Review |language=en |archive-date=2024-01-05 |archive-url=https://web.archive.org/web/20240105005257/https://www.technologyreview.com/2023/05/12/1072950/open-source-ai-google-openai-eleuther-meta/ |url-status=live }}</ref> have been actively open-sourcing AI software. Various open-weight large language models have also been released, such as [[Gemma (language model)|Gemma]], [[LLaMA|Llama2]] and [[Mistral AI|Mistral]].<ref>{{Cite news |last=Yao |first=Deborah |date=February 21, 2024 |title=Google Unveils Open Source Models to Rival Meta, Mistral |url=https://aibusiness.com/nlp/google-unveils-open-source-models-to-compete-against-meta |work=AI Business}}</ref>


However, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The [[IEEE Standards Association]] has published a [[Technical standards|technical standard]] on Transparency of Autonomous Systems: IEEE 7001-2021.<ref name="P7001">{{cite book |url=https://ieeexplore.ieee.org/document/9726144 |title=7001-2021 - IEEE Standard for Transparency of Autonomous Systems |date=4 March 2022 |publisher=IEEE |isbn=978-1-5044-8311-7 |pages=1–54 |doi=10.1109/IEEESTD.2022.9726144 |ref=p7001 |access-date=9 July 2023 |s2cid=252589405 |archive-date=26 July 2023 |archive-url=https://web.archive.org/web/20230726175434/https://ieeexplore.ieee.org/document/9726144 |url-status=live }}.</ref> The IEEE effort identifies multiple scales of transparency for different stakeholders.
== Ethical challenges ==


There are also concerns that releasing AI models may lead to misuse.<ref>{{Cite journal |last1=Kamila |first1=Manoj Kumar |last2=Jasrotia |first2=Sahil Singh |date=2023-01-01 |title=Ethical issues in the development of artificial intelligence: recognizing the risks |url=https://doi.org/10.1108/IJOES-05-2023-0107 |journal=International Journal of Ethics and Systems |doi=10.1108/IJOES-05-2023-0107 |issn=2514-9369 |s2cid=259614124}}</ref> For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog on this topic, asking for government regulation to help determine the right thing to do.<ref name="WiredMS">{{cite magazine |last1=Thurm |first1=Scott |date=July 13, 2018 |title=Microsoft Calls For Federal Regulation of Facial Recognition |url=https://www.wired.com/story/microsoft-calls-for-federal-regulation-of-facial-recognition/ |url-status=live |archive-url=https://web.archive.org/web/20190509231338/https://www.wired.com/story/microsoft-calls-for-federal-regulation-of-facial-recognition/ |archive-date=May 9, 2019 |access-date=January 10, 2019 |magazine=Wired |ref=WiredMS}}</ref> Furthermore, open-weight AI models can be [[Fine-tuning (deep learning)|fine-tuned]] to remove any counter-measure, until the AI model complies with dangerous requests, without any filtering. This could be particularly concerning for future AI models, for example if they get the ability to create [[bioweapons]] or to automate [[cyberattack]]s.<ref>{{Cite web |last=Piper |first=Kelsey |date=2024-02-02 |title=Should we make our most powerful AI models open source to all? |url=https://www.vox.com/future-perfect/2024/2/2/24058484/open-source-artificial-intelligence-ai-risk-meta-llama-2-chatgpt-openai-deepfake |access-date=2024-04-07 |website=Vox |language=en}}</ref> [[OpenAI]], initially committed to an open-source approach to the development of [[artificial general intelligence]] (AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons. [[Ilya Sutskever]], OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting that the safety reasons for not open-sourcing the most potent AI models will become "obvious" in a few years.<ref>{{Cite web |last=Vincent |first=James |date=2023-03-15 |title=OpenAI co-founder on company's past approach to openly sharing research: "We were wrong" |url=https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview |access-date=2024-04-07 |website=The Verge |language=en |archive-date=2023-03-17 |archive-url=https://web.archive.org/web/20230317210900/https://www.theverge.com/2023/3/15/23640180/openai-gpt-4-launch-closed-research-ilya-sutskever-interview |url-status=live }}</ref>
=== Biases in AI systems ===
{{Main|Algorithmic bias}}
[[File:Kamala Harris speaks about racial bias in artificial intelligence - 2020-04-23.ogg|thumb|US Senator [[Kamala Harris]] speaking about racial bias in artificial intelligence in 2020]]
AI has become increasingly inherent in facial and [[speech recognition|voice recognition]] systems. Some of these systems have real business applications and directly impact people. These systems are vulnerable to biases and errors introduced by its human creators. Also, the data used to train these AI systems itself can have biases.<ref>{{Cite web|url=https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8|title=The case for fairer algorithms – Iason Gabriel|last=Gabriel|first=Iason|date=2018-03-14|website=Medium|access-date=2019-07-22|archive-date=2019-07-22|archive-url=https://web.archive.org/web/20190722080401/https://medium.com/@Ethics_Society/the-case-for-fairer-algorithms-c008a12126f8|url-status=live}}</ref><ref>{{Cite web|url=http://social.techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/|title=5 unexpected sources of bias in artificial intelligence|website=TechCrunch|access-date=2019-07-22|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060659/https://techcrunch.com/2016/12/10/5-unexpected-sources-of-bias-in-artificial-intelligence/|url-status=live}}</ref><ref>{{Cite web|url=https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/|title=Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead|last=Knight|first=Will|website=MIT Technology Review|access-date=2019-07-22|archive-date=2019-07-04|archive-url=https://web.archive.org/web/20190704224752/https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/|url-status=live}}</ref><ref>{{Cite web|url=https://www.brookings.edu/blog/techtank/2019/01/03/artificial-intelligence-and-bias-four-key-challenges/|title=Artificial intelligence and bias: Four key challenges|last=Villasenor|first=John|date=2019-01-03|website=Brookings|access-date=2019-07-22|archive-date=2019-07-22|archive-url=https://web.archive.org/web/20190722080355/https://www.brookings.edu/blog/techtank/2019/01/03/artificial-intelligence-and-bias-four-key-challenges/|url-status=live}}</ref> For instance, [[Facial recognition system|facial recognition]] algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender;<ref>{{cite news |last1=Lohr |first1=Steve |title=Facial Recognition Is Accurate, if You're a White Guy |url=https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html |work=The New York Times |date=9 February 2018 |access-date=29 May 2019 |archive-date=9 January 2019 |archive-url=https://web.archive.org/web/20190109131036/https://www.nytimes.com/2018/02/09/technology/facial-recognition-race-artificial-intelligence.html |url-status=live }}</ref> these AI systems were able to detect gender of white men more accurately than gender of darker skin men. Further, a 2020 study reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.<ref>{{cite journal |last1=Koenecke |first1=Allison |last2=Nam |first2=Andrew |last3=Lake |first3=Emily |last4=Nudell |first4=Joe |last5=Quartey |first5=Minnie |last6=Mengesha |first6=Zion |last7=Toups |first7=Connor |last8=Rickford |first8=John R. |last9=Jurafsky |first9=Dan |last10=Goel |first10=Sharad |title=Racial disparities in automated speech recognition |journal=Proceedings of the National Academy of Sciences |date=7 April 2020 |volume=117 |issue=14 |pages=7684–7689 |doi=10.1073/pnas.1915768117 |pmid=32205437 |pmc=7149386 }}</ref> Furthermore, [[Amazon (company)|Amazon]] terminated their use of [[Artificial intelligence in hiring|AI hiring and recruitment]] because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over 10-year period that came mostly from male candidates.<ref>{{Cite news|url=https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G|title=Amazon scraps secret AI recruiting tool that showed bias against women|date=2018-10-10|work=Reuters|access-date=2019-05-29|archive-date=2019-05-27|archive-url=https://web.archive.org/web/20190527181625/https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G|url-status=live}}</ref>


=== Transparency ===
Bias can creep into algorithms in many ways. For example, Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.<ref>{{cite journal |last1=Friedman |first1=Batya |last2=Nissenbaum |first2=Helen |title=Bias in computer systems |journal=ACM Transactions on Information Systems (TOIS) |date=July 1996 |volume=14 |issue=3 |pages=330–347 |doi=10.1145/230538.230561 |s2cid=207195759 }}</ref> In [[natural language processing]], problems can arise from the [[text corpus]] — the source material the algorithm uses to learn about the relationships between different words.<ref>{{Cite web|url=https://techxplore.com/news/2019-07-bias-ai.html|title=Eliminating bias in AI|website=techxplore.com|access-date=2019-07-26|archive-date=2019-07-25|archive-url=https://web.archive.org/web/20190725200844/https://techxplore.com/news/2019-07-bias-ai.html|url-status=live}}</ref>
Approaches like machine learning with [[neural network]]s can result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for [[explainable artificial intelligence]].<ref>[https://think.kera.org/2017/12/05/inside-the-mind-of-a-i/ Inside The Mind Of A.I.] {{Webarchive|url=https://web.archive.org/web/20210810003331/https://think.kera.org/2017/12/05/inside-the-mind-of-a-i/|date=2021-08-10}} - Cliff Kuang interview</ref> Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do.<ref>{{Cite journal |last=Bunn |first=Jenny |date=2020-04-13 |title=Working in contexts for which transparency is important: A recordkeeping view of explainable artificial intelligence (XAI) |url=https://www.emerald.com/insight/content/doi/10.1108/RMJ-08-2019-0038/full/html |journal=Records Management Journal |language=en |volume=30 |issue=2 |pages=143–153 |doi=10.1108/RMJ-08-2019-0038 |issn=0956-5698 |s2cid=219079717}}</ref>


In healthcare, the use of complex AI methods or techniques often results in models described as "[[Black box|black-boxes]]" due to the difficulty to understand how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards.<ref>{{Cite journal |last1=Li |first1=Fan |last2=Ruijs |first2=Nick |last3=Lu |first3=Yuan |date=2022-12-31 |title=Ethics & AI: A Systematic Review on Ethical Concerns and Related Strategies for Designing with AI in Healthcare |journal=AI |language=en |volume=4 |issue=1 |pages=28–53 |doi=10.3390/ai4010003 |doi-access=free |issn=2673-2688}}</ref>
Large companies such as IBM, Google, etc. have made efforts to research and address these biases.<ref>{{Cite web|url=https://www.forbes.com/sites/parmyolson/2018/03/13/google-deepmind-ai-machine-learning-bias/|title=Google's DeepMind Has An Idea For Stopping Biased AI|last=Olson|first=Parmy|website=Forbes|access-date=2019-07-26}}</ref><ref>{{Cite web|url=https://developers.google.com/machine-learning/fairness-overview/|title=Machine Learning Fairness {{!}} ML Fairness|website=Google Developers|access-date=2019-07-26|archive-date=2019-08-10|archive-url=https://web.archive.org/web/20190810004754/https://developers.google.com/machine-learning/fairness-overview/|url-status=live}}</ref><ref>{{Cite web|url=https://www.research.ibm.com/5-in-5/ai-and-bias/|title=AI and bias – IBM Research – US|website=www.research.ibm.com|access-date=2019-07-26|archive-date=2019-07-17|archive-url=https://web.archive.org/web/20190717175957/http://www.research.ibm.com/5-in-5/ai-and-bias/|url-status=live}}</ref> One solution for addressing bias is to create documentation for the data used to train AI systems.<ref>{{cite journal |last1=Bender |first1=Emily M. |last2=Friedman |first2=Batya |title=Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science |journal=Transactions of the Association for Computational Linguistics |date=December 2018 |volume=6 |pages=587–604 |doi=10.1162/tacl_a_00041 |doi-access=free }}</ref><ref>{{cite arXiv |last1=Gebru |first1=Timnit |last2=Morgenstern |first2=Jamie |last3=Vecchione |first3=Briana |last4=Vaughan |first4=Jennifer Wortman |last5=Wallach |first5=Hanna |author-link5=Hanna Wallach|last6=Daumé III |first6=Hal |last7=Crawford |first7=Kate |date=2018 |title=Datasheets for Datasets |class=cs.DB |eprint=1803.09010}}</ref>


===Accountability===
The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it. Some experts warn that algorithmic bias is already pervasive in many industries and that almost no one is making an effort to identify or correct it.<ref>{{Cite web|url=https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/|title=Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead|last=Knight|first=Will|website=MIT Technology Review|access-date=2019-07-26|archive-date=2019-07-04|archive-url=https://web.archive.org/web/20190704224752/https://www.technologyreview.com/s/608986/forget-killer-robotsbias-is-the-real-ai-danger/|url-status=live}}</ref> There are some open-sourced tools <ref>{{Cite web |url=https://map.ai-global.org/ |title=Archived copy |access-date=2020-10-28 |archive-date=2020-10-31 |archive-url=https://web.archive.org/web/20201031034143/https://map.ai-global.org/ |url-status=live }}</ref> by civil societies that are looking to bring more awareness to biased AI.
A special case of the opaqueness of AI is that caused by it being [[anthropomorphised]], that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its [[moral agency]].{{Dubious|date=April 2024|reason=Unclear why AIs couldn't have moral agency. Also unclear whether attributing it moral agency is a special case of opaqueness, and whether that would prevent people from attributing the responsibility of incidents to the company that developed it.}} This can cause people to overlook whether either human [[negligence]] or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent [[digital governance]] regulation, such as the [[EU]]'s [[AI Act]] is set out to rectify this, by ensuring that AI systems are treated with at least as much care as one would expect under ordinary [[product liability]]. This includes potentially [[Information technology audit|AI audits]].


===Robot rights===
=== Regulation ===
{{Main articles|Regulation of artificial intelligence}}
According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.<ref>{{Cite web |last=Howard |first=Ayanna |date=29 July 2019 |title=The Regulation of AI – Should Organizations Be Worried? {{!}} Ayanna Howard |url=https://sloanreview.mit.edu/article/the-regulation-of-ai-should-organizations-be-worried/ |url-status=live |archive-url=https://web.archive.org/web/20190814134545/https://sloanreview.mit.edu/article/the-regulation-of-ai-should-organizations-be-worried/ |archive-date=2019-08-14 |access-date=2019-08-14 |website=MIT Sloan Management Review}}</ref> Similarly, according to a five-country study by KPMG and the [[University of Queensland]] Australia in 2021, 66-79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable; 96% of those surveyed expect AI governance challenges to be managed carefully.<ref>{{Cite web |date=March 2021 |title=Trust in artificial intelligence - A five country study |url=https://assets.kpmg.com/content/dam/kpmg/au/pdf/2021/trust-in-ai-multiple-countries.pdf |website=KPMG |access-date=2023-10-06 |archive-date=2023-10-01 |archive-url=https://web.archive.org/web/20231001161127/https://assets.kpmg.com/content/dam/kpmg/au/pdf/2021/trust-in-ai-multiple-countries.pdf |url-status=live }}</ref>
"Robot rights" is the concept that people should have moral obligations towards their machines, akin to [[human rights]] or [[animal rights]].<ref>{{cite journal | title = Posthuman Rights: Dimensions of Transhuman Worlds | journal = Teknokultura | volume = 12 | issue = 2 | last = Evans | first = Woody | author-link = Woody Evans | year = 2015 | doi = 10.5209/rev_TK.2015.v12.n2.49072 | doi-access = free }}</ref> It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.<ref>{{cite journal | url = http://cyberleninka.ru/article/n/artificial-personal-autonomy-and-concept-of-robot-rights | title = Artificial Personal Autonomy and Concept of Robot Rights | last = Sheliazhenko | first = Yurii | author-link = Yurii Sheliazhenko | journal = European Journal of Law and Political Sciences | year = 2017 | pages = 17–21 | doi = 10.20534/EJLPS-17-1-17-21 | access-date = 10 May 2017 | archive-date = 14 July 2018 | archive-url = https://web.archive.org/web/20180714111141/https://cyberleninka.ru/article/n/artificial-personal-autonomy-and-concept-of-robot-rights | url-status = live }}</ref> These could include the right to life and liberty, freedom of thought and expression, and equality before the law.<ref>
The American Heritage Dictionary of the English Language, Fourth Edition
</ref> The issue has been considered by the [[Institute for the Future]]<ref>{{cite news | url=http://news.bbc.co.uk/2/hi/technology/6200005.stm | title=Robots could demand legal rights | work=BBC News | date=December 21, 2006 | access-date=January 3, 2010 | archive-date=October 15, 2019 | archive-url=https://web.archive.org/web/20191015042628/http://news.bbc.co.uk/2/hi/technology/6200005.stm | url-status=live }}</ref> and by the [[Department of Trade and Industry (United Kingdom)|U.K. Department of Trade and Industry]].<ref name=TimesOnline>{{cite news | url=http://www.timesonline.co.uk/tol/news/uk/science/article1695546.ece | title=Human rights for robots? We're getting carried away | publisher=The Times of London | work=The Times Online | first=Mark | last=Henderson | date=April 24, 2007 | access-date=May 2, 2010 | archive-date=May 17, 2008 | archive-url=https://web.archive.org/web/20080517022444/http://www.timesonline.co.uk/tol/news/uk/science/article1695546.ece | url-status=live }}</ref>
Experts disagree on how soon specific and detailed laws on the subject will be necessary.<ref name=TimesOnline/> Glenn McGee reported that sufficiently humanoid robots might appear by 2020,<ref>{{cite web | url=https://www.the-scientist.com/column/a-robot-code-of-ethics-46522 | title=A Robot Code of Ethics | first=Glenn | last=McGee | publisher=The Scientist | access-date=2019-03-25 | archive-date=2020-09-06 | archive-url=https://web.archive.org/web/20200906124127/https://www.the-scientist.com/column/a-robot-code-of-ethics-46522 | url-status=live }}</ref> while [[Ray Kurzweil]] sets the date at 2029.<ref>
{{Cite book | last=Kurzweil | first=Ray | author-link=Ray Kurzweil | year=2005 | title=The Singularity is Near | publisher=Penguin Books | isbn=978-0-670-03384-3 | title-link=The Singularity is Near }}


Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.<ref name="DeloitteGDPR">{{cite web |last1=Bastin |first1=Roland |last2=Wantz |first2=Georges |date=June 2017 |title=The General Data Protection Regulation Cross-industry innovation |url=https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/technology/lu-general-data-protection-regulation-cross-industry-innovation-062017.pdf |url-status=live |archive-url=https://web.archive.org/web/20190110183405/https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/technology/lu-general-data-protection-regulation-cross-industry-innovation-062017.pdf |archive-date=2019-01-10 |access-date=2019-01-10 |website=Inside magazine |publisher=Deloitte |ref=DeloitteGDPR}}</ref> The [[OECD]], [[UN]], [[EU]], and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.<ref>{{Cite web |date=2017-06-07 |title=UN artificial intelligence summit aims to tackle poverty, humanity's 'grand challenges' |url=https://news.un.org/en/story/2017/06/558962-un-artificial-intelligence-summit-aims-tackle-poverty-humanitys-grand |url-status=live |archive-url=https://web.archive.org/web/20190726084819/https://news.un.org/en/story/2017/06/558962-un-artificial-intelligence-summit-aims-tackle-poverty-humanitys-grand |archive-date=2019-07-26 |access-date=2019-07-26 |website=UN News}}</ref><ref>{{Cite web |title=Artificial intelligence – Organisation for Economic Co-operation and Development |url=http://www.oecd.org/going-digital/ai/ |url-status=live |archive-url=https://web.archive.org/web/20190722124751/http://www.oecd.org/going-digital/ai/ |archive-date=2019-07-22 |access-date=2019-07-26 |website=www.oecd.org}}</ref><ref>{{Cite web |last=Anonymous |date=2018-06-14 |title=The European AI Alliance |url=https://ec.europa.eu/digital-single-market/en/european-ai-alliance |url-status=live |archive-url=https://web.archive.org/web/20190801011543/https://ec.europa.eu/digital-single-market/en/european-ai-alliance |archive-date=2019-08-01 |access-date=2019-07-26 |website=Digital Single Market – European Commission}}</ref>
</ref> Another group of scientists meeting in 2007 supposed that at least 50 years had to pass before any sufficiently advanced system would exist.<ref>[https://web.archive.org/web/20080522163926/http://www.independent.co.uk/news/science/the-big-question-should-the-human-race-be-worried-by-the-rise-of-robots-446107.html The Big Question: Should the human race be worried by the rise of robots?], Independent Newspaper,


On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence".<ref>{{Cite web |last=European Commission High-Level Expert Group on AI |date=2019-06-26 |title=Policy and investment recommendations for trustworthy Artificial Intelligence |url=https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence |url-status=live |archive-url=https://web.archive.org/web/20200226023934/https://ec.europa.eu/digital-single-market/en/news/policy-and-investment-recommendations-trustworthy-artificial-intelligence |archive-date=2020-02-26 |access-date=2020-03-16 |website=Shaping Europe’s digital future – European Commission |language=en}}</ref> This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector.<ref>{{Cite journal |last1=Fukuda-Parr |first1=Sakiko |last2=Gibbons |first2=Elizabeth |date=July 2021 |title=Emerging Consensus on 'Ethical AI': Human Rights Critique of Stakeholder Guidelines |journal=Global Policy |language=en |volume=12 |issue=S6 |pages=32–44 |doi=10.1111/1758-5899.12965 |issn=1758-5880 |doi-access=free}}</ref> The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.<ref>{{Cite web |date=2 August 2019 |title=EU Tech Policy Brief: July 2019 Recap |url=https://cdt.org/blog/eu-tech-policy-brief-july-2019-recap/ |url-status=live |archive-url=https://web.archive.org/web/20190809194057/https://cdt.org/blog/eu-tech-policy-brief-july-2019-recap/ |archive-date=2019-08-09 |access-date=2019-08-09 |website=Center for Democracy & Technology}}</ref> To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks.<ref>{{Cite journal |last1=Curtis |first1=Caitlin |last2=Gillespie |first2=Nicole |last3=Lockey |first3=Steven |date=2022-05-24 |title=AI-deploying organizations are key to addressing 'perfect storm' of AI risks |url=https://doi.org/10.1007/s43681-022-00163-7 |url-status=live |journal=AI and Ethics |language=en |volume=3 |issue=1 |pages=145–153 |doi=10.1007/s43681-022-00163-7 |issn=2730-5961 |pmc=9127285 |pmid=35634256 |archive-url=https://web.archive.org/web/20230315194711/https://link.springer.com/article/10.1007/s43681-022-00163-7 |archive-date=2023-03-15 |access-date=2022-05-29}}</ref> On 21 April 2021, the European Commission proposed the [[Artificial Intelligence Act]].<ref name="Financial Times-2021">{{Cite news |date=2021-10-18 |title=Why the world needs a Bill of Rights on AI |url=https://www.ft.com/content/17ca620c-4d76-4a2f-829a-27d8552ce719 |access-date=2023-03-19 |work=Financial Times}}</ref>
</ref>
The rules for the 2003 [[Loebner Prize]] competition envisioned the possibility of robots having rights of their own:


== Emergent or potential future challenges ==
<blockquote>61. If in any given year, a publicly available open-source Entry entered by the University of Surrey or the Cambridge Center wins the Silver Medal or the Gold Medal, then the Medal and the Cash Award will be awarded to the body responsible for the development of that Entry. If no such body can be identified, or if there is disagreement among two or more claimants, the Medal and the Cash Award will be held in trust until such time as the Entry may legally possess, either in the United States of America or in the venue of the contest, the Cash Award and Gold Medal in its own right.<ref>[http://loebner03.hamill.co.uk/docs/LPC%20Official%20Rules%20v2.0.pdf Loebner Prize Contest Official Rules — Version 2.0] {{Webarchive|url=https://web.archive.org/web/20160303180753/http://loebner03.hamill.co.uk/docs/LPC%20Official%20Rules%20v2.0.pdf |date=2016-03-03 }} The competition was directed by [[David Hamill]] and the rules were developed by members of the Robitron Yahoo group.</ref> </blockquote>

=== Increasing use ===
In October 2017, the android [[Sophia (robot)|Sophia]] was granted "honorary" citizenship in [[Saudi Arabia]], though some considered this to be more of a publicity stunt than a meaningful legal recognition.<ref>{{Cite web |url=https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/ |title=Saudi Arabia bestows citizenship on a robot named Sophia |access-date=2017-10-27 |archive-date=2017-10-27 |archive-url=https://web.archive.org/web/20171027023101/https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/ |url-status=live }}</ref> Some saw this gesture as openly denigrating of [[human rights]] and the [[rule of law]].<ref name="bs">{{cite web |last1=Vincent |first1=James |title=Pretending to give a robot citizenship helps no one |url=https://www.theverge.com/2017/10/30/16552006/robot-rights-citizenship-saudi-arabia-sophia |website=The Verge |date=30 October 2017 |ref=bs |access-date=10 January 2019 |archive-date=3 August 2019 |archive-url=https://web.archive.org/web/20190803144659/https://www.theverge.com/2017/10/30/16552006/robot-rights-citizenship-saudi-arabia-sophia |url-status=live }}</ref>
AI has been slowly making its presence more known throughout the world, from chat bots that seemingly have answers for every homework question to [[Generative artificial intelligence]] that can create a painting about whatever one desires. AI has become increasingly popular in hiring markets, from the ads that target certain people according to what they are looking for to the inspection of applications of potential hires. Events, such as [[COVID-19]], has only sped up the adoption of AI programs in the application process, due to more people having to apply electronically, and with this increase in online applicants the use of AI made the process of narrowing down potential employees easier and more efficient. AI has become more prominent as businesses have to keep up with the times and ever-expanding internet. Processing analytics and making decisions becomes much easier with the help of AI.<ref name="Spindler-20232" /> As [[Tensor Processing Unit]] (TPUs) and [[Graphics processing unit]] (GPUs) become more powerful, AI capabilities also increase, forcing companies to use it to keep up with the competition. Managing customers' needs and automating many parts of the workplace leads to companies having to spend less money on employees.

AI has also seen increased usage in criminal justice and healthcare. For medicinal means, AI is being used more often to analyze patient data to make predictions about future patients' conditions and possible treatments. These programs are called [[Clinical decision support system]] (DSS). AI's future in healthcare may develop into something further than just recommended treatments, such as referring certain patients over others, leading to the possibility of inequalities.<ref>{{Cite journal |last1=Challen |first1=Robert |last2=Denny |first2=Joshua |last3=Pitt |first3=Martin |last4=Gompels |first4=Luke |last5=Edwards |first5=Tom |last6=Tsaneva-Atanasova |first6=Krasimira |date=March 2019 |title=Artificial intelligence, bias and clinical safety |journal=BMJ Quality & Safety |language=en |volume=28 |issue=3 |pages=231–237 |doi=10.1136/bmjqs-2018-008370 |issn=2044-5415|doi-access=free |pmid=30636200 |pmc=6560460 }}</ref>

=== Robot rights ===
[[File:Hospital delivery robot having priority to elevators.jpg|thumb|A hospital [[delivery robot]] in front of elevator doors stating "Robot Has Priority", a situation that may be regarded as [[reverse discrimination]] in relation to humans]]
"Robot rights" is the concept that people should have moral obligations towards their machines, akin to [[human rights]] or [[animal rights]].<ref>{{cite journal | title = Posthuman Rights: Dimensions of Transhuman Worlds | journal = Teknokultura | volume = 12 | issue = 2 | last = Evans | first = Woody | author-link = Woody Evans | year = 2015 | doi = 10.5209/rev_TK.2015.v12.n2.49072 | doi-access = free }}</ref> It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.<ref>{{cite journal | url = http://cyberleninka.ru/article/n/artificial-personal-autonomy-and-concept-of-robot-rights | title = Artificial Personal Autonomy and Concept of Robot Rights | last = Sheliazhenko | first = Yurii | author-link = Yurii Sheliazhenko | journal = European Journal of Law and Political Sciences | year = 2017 | pages = 17–21 | doi = 10.20534/EJLPS-17-1-17-21 | access-date = 10 May 2017 | archive-date = 14 July 2018 | archive-url = https://web.archive.org/web/20180714111141/https://cyberleninka.ru/article/n/artificial-personal-autonomy-and-concept-of-robot-rights | url-status = live }}</ref> A specific issue to consider is whether copyright ownership may be claimed.<ref>{{cite journal |last1=Doomen |first1=Jasper |title= The artificial intelligence entity as a legal person|journal=Information & Communications Technology Law |date=2023 |volume=32 |issue=3 |pages=277–278|doi=10.1080/13600834.2023.2196827 |doi-access=free |hdl=1820/c29a3daa-9e36-4640-85d3-d0ffdd18a62c |hdl-access=free }}</ref> The issue has been considered by the [[Institute for the Future]]<ref>{{cite news | url=http://news.bbc.co.uk/2/hi/technology/6200005.stm | title=Robots could demand legal rights | work=BBC News | date=December 21, 2006 | access-date=January 3, 2010 | archive-date=October 15, 2019 | archive-url=https://web.archive.org/web/20191015042628/http://news.bbc.co.uk/2/hi/technology/6200005.stm | url-status=live }}</ref> and by the [[Department of Trade and Industry (United Kingdom)|U.K. Department of Trade and Industry]].<ref name=TimesOnline>{{cite news | url=http://www.timesonline.co.uk/tol/news/uk/science/article1695546.ece | title=Human rights for robots? We're getting carried away | publisher=The Times of London | work=The Times Online | first=Mark | last=Henderson | date=April 24, 2007 | access-date=May 2, 2010 | archive-date=May 17, 2008 | archive-url=https://web.archive.org/web/20080517022444/http://www.timesonline.co.uk/tol/news/uk/science/article1695546.ece | url-status=dead }}</ref>

In October 2017, the android [[Sophia (robot)|Sophia]] was granted citizenship in [[Saudi Arabia]], though some considered this to be more of a publicity stunt than a meaningful legal recognition.<ref>{{Cite web |url=https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/ |title=Saudi Arabia bestows citizenship on a robot named Sophia |date=26 October 2017 |access-date=2017-10-27 |archive-date=2017-10-27 |archive-url=https://web.archive.org/web/20171027023101/https://techcrunch.com/2017/10/26/saudi-arabia-robot-citizen-sophia/ |url-status=live }}</ref> Some saw this gesture as openly denigrating of [[human rights]] and the [[rule of law]].<ref>{{cite web |last1=Vincent |first1=James |title=Pretending to give a robot citizenship helps no one |url=https://www.theverge.com/2017/10/30/16552006/robot-rights-citizenship-saudi-arabia-sophia |website=The Verge |date=30 October 2017 |ref=bs |access-date=10 January 2019 |archive-date=3 August 2019 |archive-url=https://web.archive.org/web/20190803144659/https://www.theverge.com/2017/10/30/16552006/robot-rights-citizenship-saudi-arabia-sophia |url-status=live }}</ref>
The philosophy of [[Sentientism]] grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of being [[Sentience|sentient]], this philosophy holds that they should be shown compassion and granted rights.
The philosophy of [[sentientism]] grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of being [[Sentience|sentient]], this philosophy holds that they should be shown compassion and granted rights.
[[Joanna Bryson]] has argued that creating AI that requires rights is both avoidable, and would in itself be unethical, both as a burden to the AI agents and to human society.<ref>{{Cite book|title=Close engagements with artificial companions: key social, psychological, ethical and design issues|date=2010|publisher=John Benjamins Pub. Co|editor=Wilks, Yorick|isbn=978-9027249944|location=Amsterdam|oclc=642206106}}</ref>
[[Joanna Bryson]] has argued that creating AI that requires rights is both avoidable, and would in itself be unethical, both as a burden to the AI agents and to human society.<ref>{{Cite book|title=Close engagements with artificial companions: key social, psychological, ethical and design issues|date=2010|publisher=John Benjamins Pub. Co|editor=Wilks, Yorick|isbn=978-90-272-4994-4|location=Amsterdam|oclc=642206106}}</ref> Pressure groups to recognise 'robot rights' significantly hinder the establishment of robust international safety regulations.{{cn|date=June 2024}}

===AI welfare===
In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as the [[global workspace theory]] or the [[integrated information theory]]. Edelman notes one exception had been [[Thomas Metzinger]], who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "[[Suffering risks|explosion of artificial suffering]]", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances.

Several labs have openly stated they are trying to create conscious AIs. There have been reports from those with close access to AIs not openly intended to be self aware, that consciousness may already have unintentionally emerged.<ref>{{Cite journal |last=Macrae |first=Carl |date=September 2022 |title=Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety, and Sociotechnical Sources of Risk |url=https://onlinelibrary.wiley.com/doi/10.1111/risa.13850 |journal=Risk Analysis |language=en |volume=42 |issue=9 |pages=1999–2025 |doi=10.1111/risa.13850 |pmid=34814229 |bibcode=2022RiskA..42.1999M |issn=0272-4332}}</ref> These include [[OpenAI]] founder [[Ilya Sutskever]] in February 2022, when he wrote that today's large neural nets may be "slightly conscious". In November 2022, [[David Chalmers]] argued that it was unlikely current large language models like [[GPT-3]] had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future.<ref>{{cite journal | vauthors = Agarwal A, Edelman S | title = Functionally effective conscious AI without suffering | journal = [[Journal of Artificial Intelligence and Consciousness]] | volume = 7| pages = 39–50 | date =2020 | pmid = | doi = 10.1142/S2705078520300030| arxiv = 2002.05652 | s2cid = 211096533 }}</ref><ref>{{cite journal |author=[[Thomas Metzinger]] |date=February 2021 |title=Artificial Suffering: An Argument for a Global Moratorim on Synthetic Phenomenology |journal=Journal of Artificial Intelligence and Consciousness |volume=8 |pages=43–66 |doi=10.1142/S270507852150003X |pmid= |s2cid=233176465 |doi-access=free}}</ref><ref>{{cite arXiv |last=Chalmers |first=David |author-link=David Chalmers |eprint=2303.07103v1 |title=Could a Large Language Model be Conscious? |class=Computer Science |date=March 2023 }}</ref> In the [[ethics of uncertain sentience]], the [[precautionary principle]] is often invoked.<ref>{{Cite journal |last=Birch |first=Jonathan |author-link=Jonathan Birch (philosopher) |date=2017-01-01 |title=Animal sentience and the precautionary principle |url=https://www.wellbeingintlstudiesrepository.org/animsent/vol2/iss16/1 |journal=Animal Sentience |volume=2 |issue=16 |doi=10.51291/2377-7478.1200 |issn=2377-7478 |access-date=2024-07-08 |archive-date=2024-08-11 |archive-url=https://web.archive.org/web/20240811145748/https://www.wellbeingintlstudiesrepository.org/animsent/vol2/iss16/1/ |url-status=live }}</ref>

According to Carl Shulman and [[Nick Bostrom]], it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate of [[subjective experience]]. These machines could also be engineered to feel intense and positive subjective experience, unaffected by the [[hedonic treadmill]]. Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity.<ref>{{Cite journal |last1=Shulman |first1=Carl |last2=Bostrom |first2=Nick |date=August 2021 |title=Sharing the World with Digital Minds |url=https://nickbostrom.com/papers/digital-minds.pdf |journal=Rethinking Moral Status}}</ref><ref>{{cite news |last1=Fisher |first1=Richard |date=13 November 2020 |title=The intelligent monster that you should let eat you |url=https://www.bbc.com/future/article/20201111-philosophy-of-utility-monsters-and-artificial-intelligence |access-date=12 February 2021 |work= |publisher=BBC News |language=en}}</ref>


===Threat to human dignity===
===Threat to human dignity===
Line 95: Line 116:
* A therapist (as was proposed by [[Kenneth Colby]] in the 70s)
* A therapist (as was proposed by [[Kenneth Colby]] in the 70s)


Weizenbaum explains that we require authentic feelings of [[empathy]] from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."<ref name="MWZ">[[Joseph Weizenbaum]], quoted in {{Harvnb|McCorduck|2004|pp=356, 374–376}}}</ref>
Weizenbaum explains that we require authentic feelings of [[empathy]] from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."<ref name="MWZ">[[Joseph Weizenbaum]], quoted in {{Harvnb|McCorduck|2004|pp=356, 374–376}}</ref>


[[Pamela McCorduck]] counters that, speaking for women and minorities "I'd rather take my chances with an impartial computer," pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.<ref name=MWZ/> However, [[Andreas Kaplan|Kaplan]] and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines; Using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and engrained, which makes them even more difficult to spot and fight against.<ref>{{cite journal|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael|date=January 2019|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|journal=Business Horizons|volume=62|issue=1|pages=15–25|doi=10.1016/j.bushor.2018.08.004|s2cid=158433736}}</ref>
[[Pamela McCorduck]] counters that, speaking for women and minorities "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.<ref name=MWZ/> However, [[Andreas Kaplan|Kaplan]] and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against.<ref>{{cite journal|last1=Kaplan|first1=Andreas|last2=Haenlein|first2=Michael|date=January 2019|title=Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence|journal=Business Horizons|volume=62|issue=1|pages=15–25|doi=10.1016/j.bushor.2018.08.004|s2cid=158433736}}</ref>


Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as [[computationalism]]). To Weizenbaum, these points suggest that AI research devalues human life.<ref name="Weizenbaum's critique">
Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as [[computationalism]]). To Weizenbaum, these points suggest that AI research devalues human life.<ref name="Weizenbaum's critique">
Line 110: Line 131:
</ref>
</ref>


AI founder [[John McCarthy (computer scientist)|John McCarthy]] objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. [[Bill Hibbard]]<ref name="hibbard 2014">{{cite arxiv|eprint=1411.1373|class=cs.AI|first1=Bill|last1=Hibbard|title=Ethical Artificial Intelligence|date=17 November 2015}}</ref> writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."
AI founder [[John McCarthy (computer scientist)|John McCarthy]] objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. [[Bill Hibbard]]<ref name="hibbard 2014">{{cite arXiv|eprint=1411.1373|class=cs.AI|first1=Bill|last1=Hibbard|title=Ethical Artificial Intelligence|date=17 November 2015}}</ref> writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."


=== Liability for self-driving cars ===
=== Liability for self-driving cars ===
{{main|Self-driving car liability}}
{{main|Self-driving car liability}}


As the widespread use of [[Self-driving car|autonomous cars]] becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed.<ref>{{cite news |last1=Davies |first1=Alex |title=Google's Self-Driving Car Caused Its First Crash |url=https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/ |magazine=Wired |date=29 February 2016 |access-date=26 July 2019 |archive-date=7 July 2019 |archive-url=https://web.archive.org/web/20190707212719/https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/ |url-status=live }}</ref><ref>{{cite news |last1=Levin |first1=Sam |last2=Wong |first2=Julia Carrie |title=Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian |url=https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe |work=The Guardian |date=19 March 2018 |access-date=26 July 2019 |archive-date=26 July 2019 |archive-url=https://web.archive.org/web/20190726084818/https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe |url-status=live }}</ref> Recently,{{When|date=November 2020}} there has been debate as to the legal liability of the responsible party if these cars get into accidents.<ref>{{Cite web|url=https://futurism.com/who-responsible-when-self-driving-car-accident|title=Who is responsible when a self-driving car has an accident?|website=Futurism|access-date=2019-07-26|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726084819/https://futurism.com/who-responsible-when-self-driving-car-accident|url-status=live}}</ref><ref>{{Cite web|url=https://knowledge.wharton.upenn.edu/article/automated-car-accidents/|title=Autonomous Car Crashes: Who – or What – Is to Blame?|last1=Radio|first1=Business|last2=Policy|first2=Law and Public|website=Knowledge@Wharton|access-date=2019-07-26|last3=Podcasts|last4=America|first4=North|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726084820/https://knowledge.wharton.upenn.edu/article/automated-car-accidents/|url-status=live}}</ref> In one report where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hand of computers. This led to a dilemma over who was at fault for the accident.<ref>{{Cite web|url=https://www.thebalance.com/driverless-car-accidents-4171792|title=Driverless Cars Gone Wild|last=Delbridge|first=Emily|website=The Balance|access-date=2019-05-29|archive-date=2019-05-29|archive-url=https://web.archive.org/web/20190529020717/https://www.thebalance.com/driverless-car-accidents-4171792|url-status=live}}</ref>
As the widespread use of [[Self-driving car|autonomous cars]] becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed.<ref>{{cite news |last1=Davies |first1=Alex |date=29 February 2016 |title=Google's Self-Driving Car Caused Its First Crash |url=https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/ |url-status=live |archive-url=https://web.archive.org/web/20190707212719/https://www.wired.com/2016/02/googles-self-driving-car-may-caused-first-crash/ |archive-date=7 July 2019 |access-date=26 July 2019 |magazine=Wired}}</ref><ref>{{cite news |last1=Levin |first1=Sam |author-link=Julia Carrie Wong |last2=Wong |first2=Julia Carrie |date=19 March 2018 |title=Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian |url=https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe |url-status=live |archive-url=https://web.archive.org/web/20190726084818/https://www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe |archive-date=26 July 2019 |access-date=26 July 2019 |work=The Guardian}}</ref> There have been debates about the legal liability of the responsible party if these cars get into accidents.<ref>{{Cite web |date=30 January 2018 |title=Who is responsible when a self-driving car has an accident? |url=https://futurism.com/who-responsible-when-self-driving-car-accident |url-status=live |archive-url=https://web.archive.org/web/20190726084819/https://futurism.com/who-responsible-when-self-driving-car-accident |archive-date=2019-07-26 |access-date=2019-07-26 |website=Futurism}}</ref><ref>{{Cite news |title=Autonomous Car Crashes: Who – or What – Is to Blame? |url=https://knowledge.wharton.upenn.edu/article/automated-car-accidents/ |url-status=live |archive-url=https://web.archive.org/web/20190726084820/https://knowledge.wharton.upenn.edu/article/automated-car-accidents/ |archive-date=2019-07-26 |access-date=2019-07-26 |website=Knowledge@Wharton |publisher=Radio Business North America Podcasts |series=Law and Public Policy}}</ref> In one report where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hand of computers. This led to a dilemma over who was at fault for the accident.<ref>{{Cite web |last=Delbridge |first=Emily |title=Driverless Cars Gone Wild |url=https://www.thebalance.com/driverless-car-accidents-4171792 |url-status=live |archive-url=https://web.archive.org/web/20190529020717/https://www.thebalance.com/driverless-car-accidents-4171792 |archive-date=2019-05-29 |access-date=2019-05-29 |website=The Balance}}</ref>


In another incident on March 18, 2018, [[Elaine Herzberg]] was struck and killed by a self-driving [[Uber]] in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death.<ref>{{Citation|last=Stilgoe|first=Jack|title=Who Killed Elaine Herzberg?|date=2020|url=http://link.springer.com/10.1007/978-3-030-32320-2_1|work=Who’s Driving Innovation?|pages=1–6|place=Cham|publisher=Springer International Publishing|language=en|doi=10.1007/978-3-030-32320-2_1|isbn=978-3-030-32319-6|s2cid=214359377|access-date=2020-11-11|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060722/https://link.springer.com/chapter/10.1007%2F978-3-030-32320-2_1|url-status=live}}</ref>
In another incident on March 18, 2018, [[Elaine Herzberg]] was struck and killed by a self-driving [[Uber]] in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death.<ref>{{Citation |last=Stilgoe |first=Jack |title=Who Killed Elaine Herzberg? |date=2020 |work=Who’s Driving Innovation? |pages=1–6 |url=http://link.springer.com/10.1007/978-3-030-32320-2_1 |access-date=2020-11-11 |archive-url=https://web.archive.org/web/20210318060722/https://link.springer.com/chapter/10.1007%2F978-3-030-32320-2_1 |archive-date=2021-03-18 |url-status=live |place=Cham |publisher=Springer International Publishing |language=en |doi=10.1007/978-3-030-32320-2_1 |isbn=978-3-030-32319-6 |s2cid=214359377}}</ref>


Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.<ref>{{cite journal|last1=Maxmen|first1=Amy|date=October 2018|title=Self-driving car dilemmas reveal that moral choices are not universal|journal=Nature|volume=562|issue=7728|pages=469–470|bibcode=2018Natur.562..469M|doi=10.1038/d41586-018-07135-0|pmid=30356197|doi-access=free}}</ref>{{Failed verification|date=November 2020}} Thus, it falls on governments to regulate the driver who over-relies on autonomous features. as well educate them that these are just technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.<ref>{{Cite web|url=https://www.gov.uk/government/publications/driverless-cars-in-the-uk-a-regulatory-review|title=Regulations for driverless cars|website=GOV.UK|access-date=2019-07-26|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726084816/https://www.gov.uk/government/publications/driverless-cars-in-the-uk-a-regulatory-review|url-status=live}}</ref><ref>{{Cite web|url=https://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action|title=Automated Driving: Legislative and Regulatory Action – CyberWiki|website=cyberlaw.stanford.edu|access-date=2019-07-26|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726084828/https://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action|url-status=dead}}</ref><ref>{{Cite web|url=http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx|title=Autonomous Vehicles {{!}} Self-Driving Vehicles Enacted Legislation|website=www.ncsl.org|access-date=2019-07-26|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726165225/http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx|url-status=live}}</ref>
Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.<ref>{{cite journal |last1=Maxmen |first1=Amy |date=October 2018 |title=Self-driving car dilemmas reveal that moral choices are not universal |journal=Nature |volume=562 |issue=7728 |pages=469–470 |bibcode=2018Natur.562..469M |doi=10.1038/d41586-018-07135-0 |pmid=30356197 |doi-access=free}}</ref>{{Failed verification|date=November 2020}} Thus, it falls on governments to regulate the driver who over-relies on autonomous features. as well educate them that these are just technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.<ref>{{Cite web |title=Regulations for driverless cars |url=https://www.gov.uk/government/publications/driverless-cars-in-the-uk-a-regulatory-review |url-status=live |archive-url=https://web.archive.org/web/20190726084816/https://www.gov.uk/government/publications/driverless-cars-in-the-uk-a-regulatory-review |archive-date=2019-07-26 |access-date=2019-07-26 |website=GOV.UK}}</ref><ref>{{Cite web |title=Automated Driving: Legislative and Regulatory Action – CyberWiki |url=https://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action |url-status=dead |archive-url=https://web.archive.org/web/20190726084828/https://cyberlaw.stanford.edu/wiki/index.php/Automated_Driving:_Legislative_and_Regulatory_Action |archive-date=2019-07-26 |access-date=2019-07-26 |website=cyberlaw.stanford.edu}}</ref><ref>{{Cite web |title=Autonomous Vehicles {{!}} Self-Driving Vehicles Enacted Legislation |url=http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx |url-status=live |archive-url=https://web.archive.org/web/20190726165225/http://www.ncsl.org/research/transportation/autonomous-vehicles-self-driving-vehicles-enacted-legislation.aspx |archive-date=2019-07-26 |access-date=2019-07-26 |website=www.ncsl.org}}</ref>


Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions since they have the potential of inflicting harm.<ref>{{Cite journal |last1=Etzioni |first1=Amitai |last2=Etzioni |first2=Oren |date=2017-12-01 |title=Incorporating Ethics into Artificial Intelligence |url=https://doi.org/10.1007/s10892-017-9252-2 |journal=The Journal of Ethics |language=en |volume=21 |issue=4 |pages=403–418 |doi=10.1007/s10892-017-9252-2 |issn=1572-8609 |s2cid=254644745}}</ref> The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine's guidance system. However, there are significant challenges facing both strategies: the top-down technique is criticized for its difficulty in preserving certain moral convictions, while the bottom-up strategy is questioned for potentially unethical learning from human activities.
===Weaponization of artificial intelligence===

=== Weaponization ===
{{Main|Lethal autonomous weapon}}
{{Main|Lethal autonomous weapon}}
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomy.<ref name="Call for debate on killer robots">[http://news.bbc.co.uk/2/hi/technology/8182003.stm Call for debate on killer robots] {{Webarchive|url=https://web.archive.org/web/20090807005005/http://news.bbc.co.uk/2/hi/technology/8182003.stm |date=2009-08-07 }}, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.</ref><ref>[https://www.wired.com/dangerroom/2009/08/robot-three-way-portends-autonomous-future/ Robot Three-Way Portends Autonomous Future] {{Webarchive|url=https://web.archive.org/web/20121107102140/http://www.wired.com/dangerroom/2009/08/robot-three-way-portends-autonomous-future/ |date=2012-11-07 }}, By David Axe wired.com, August 13, 2009.</ref> On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.<ref>{{Cite book|last=United States. Defense Innovation Board|title=AI principles: recommendations on the ethical use of artificial intelligence by the Department of Defense|oclc=1126650738}}</ref> The US Navy has funded a report which indicates that as [[military robots]] become more complex, there should be greater attention to implications of their ability to make autonomous decisions.<ref>[http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm New Navy-funded Report Warns of War Robots Going "Terminator"] {{Webarchive|url=https://web.archive.org/web/20090728101106/http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm |date=2009-07-28 }}, by Jason Mick (Blog), dailytech.com, February 17, 2009.</ref><ref name="engadget.com">[https://www.engadget.com/2009/02/18/navy-report-warns-of-robot-uprising-suggests-a-strong-moral-com/ Navy report warns of robot uprising, suggests a strong moral compass] {{Webarchive|url=https://web.archive.org/web/20110604145633/http://www.engadget.com/2009/02/18/navy-report-warns-of-robot-uprising-suggests-a-strong-moral-com/ |date=2011-06-04 }}, by Joseph L. Flatley engadget.com, Feb 18th 2009.</ref> Some researchers state that [[autonomous robot]]s might be more humane, as they could make decisions more effectively.<ref>{{Cite journal|last1=Umbrello|first1=Steven|last2=Torres|first2=Phil|last3=De Bellis|first3=Angelo F.|date=March 2020|title=The future of war: could lethal autonomous weapons make conflict more ethical?|url=http://link.springer.com/10.1007/s00146-019-00879-x|journal=AI & Society|language=en|volume=35|issue=1|pages=273–282|doi=10.1007/s00146-019-00879-x|hdl=2318/1699364|s2cid=59606353|issn=0951-5666|access-date=2020-11-11|archive-date=2021-01-05|archive-url=https://archive.today/20210105020836/https://link.springer.com/article/10.1007/s00146-019-00879-x|url-status=live}}</ref>
Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.<ref>[http://news.bbc.co.uk/2/hi/technology/8182003.stm Call for debate on killer robots] {{Webarchive|url=https://web.archive.org/web/20090807005005/http://news.bbc.co.uk/2/hi/technology/8182003.stm|date=2009-08-07}}, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.</ref> The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.<ref>[http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm Science New Navy-funded Report Warns of War Robots Going "Terminator"] {{Webarchive|url=https://web.archive.org/web/20090728101106/http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm|date=2009-07-28}}, by Jason Mick (Blog), dailytech.com, February 17, 2009.</ref><ref name="engadget.com" /> The President of the [[Association for the Advancement of Artificial Intelligence]] has commissioned a study to look at this issue.<ref>[http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study] {{Webarchive|url=https://web.archive.org/web/20090828214741/http://research.microsoft.com/en-us/um/people/horvitz/AAAI_Presidential_Panel_2008-2009.htm|date=2009-08-28}}, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.</ref> They point to programs like the Language Acquisition Device which can emulate human interaction.


On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the '[[black box]]' and understand the kill-chain process. However, a major concern is how the report will be implemented.<ref>{{Cite book|last=United States. Defense Innovation Board|title=AI principles: recommendations on the ethical use of artificial intelligence by the Department of Defense|oclc=1126650738}}</ref> The US Navy has funded a report which indicates that as [[military robots]] become more complex, there should be greater attention to implications of their ability to make autonomous decisions.<ref>[http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm New Navy-funded Report Warns of War Robots Going "Terminator"] {{Webarchive|url=https://web.archive.org/web/20090728101106/http://www.dailytech.com/New%20Navyfunded%20Report%20Warns%20of%20War%20Robots%20Going%20Terminator/article14298.htm |date=2009-07-28 }}, by Jason Mick (Blog), dailytech.com, February 17, 2009.</ref><ref name="engadget.com">[https://www.engadget.com/2009/02/18/navy-report-warns-of-robot-uprising-suggests-a-strong-moral-com/ Navy report warns of robot uprising, suggests a strong moral compass] {{Webarchive|url=https://web.archive.org/web/20110604145633/http://www.engadget.com/2009/02/18/navy-report-warns-of-robot-uprising-suggests-a-strong-moral-com/ |date=2011-06-04 }}, by Joseph L. Flatley engadget.com, Feb 18th 2009.</ref> Some researchers state that [[autonomous robot]]s might be more humane, as they could make decisions more effectively.<ref>{{Cite journal|last1=Umbrello|first1=Steven|last2=Torres|first2=Phil|last3=De Bellis|first3=Angelo F.|date=March 2020|title=The future of war: could lethal autonomous weapons make conflict more ethical?|url=http://link.springer.com/10.1007/s00146-019-00879-x|journal=AI & Society|language=en|volume=35|issue=1|pages=273–282|doi=10.1007/s00146-019-00879-x|hdl=2318/1699364|s2cid=59606353|issn=0951-5666|access-date=2020-11-11|archive-date=2021-01-05|archive-url=https://archive.today/20210105020836/https://link.springer.com/article/10.1007/s00146-019-00879-x|url-status=live}}</ref> In 2024, the [[DARPA|Defense Advanced Research Projects Agency]] funded a program, ''Autonomy Standards and Ideals with Military Operational Values'' (ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems by testing communities.<ref>{{Cite web |last=Jamison |first=Miles |date=2024-12-20 |title=DARPA Launches Ethics Program for Autonomous Systems |url=https://executivegov.com/2024/12/darpa-launches-ethics-program-autonomous-systems/ |access-date=2025-01-02 |website=executivegov.com |language=en-US}}</ref><ref>{{Cite web |title=DARPA's ASIMOV seeks to develop Ethical Standards for Autonomous Systems |url=https://www.spacedaily.com/reports/CoVar_to_develop_Ethical_Standards_for_Autonomous_Systems_under_DARPA_ASIMOV_contract_999.html |access-date=2025-01-02 |website=Space Daily}}</ref>
Within this last decade, there has been intensive research in autonomous power with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."<ref>{{cite journal |last1=Hellström |first1=Thomas |title=On the moral responsibility of military robots |journal=Ethics and Information Technology |date=June 2013 |volume=15 |issue=2 |pages=99–107 |id={{ProQuest|1372020233}} |s2cid=15205810 |doi=10.1007/s10676-012-9301-2 }}</ref> From a [[Consequentialism|consequentialist]] view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill and that is why there should be a set [[Morality|moral]] framework that the AI cannot override.<ref>{{Cite web|url=https://qz.com/1244055/we-can-train-ai-to-identify-good-and-evil-and-then-use-it-to-teach-us-morality/|title=We can train AI to identify good and evil, and then use it to teach us morality|last=Mitra|first=Ambarish|website=Quartz|access-date=2019-07-26|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726085248/https://qz.com/1244055/we-can-train-ai-to-identify-good-and-evil-and-then-use-it-to-teach-us-morality/|url-status=live}}</ref>


Research has studied how to make autonomous power with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."<ref>{{cite journal |last1=Hellström |first1=Thomas |title=On the moral responsibility of military robots |journal=Ethics and Information Technology |date=June 2013 |volume=15 |issue=2 |pages=99–107 |id={{ProQuest|1372020233}} |s2cid=15205810 |doi=10.1007/s10676-012-9301-2 }}</ref> From a [[Consequentialism|consequentialist]] view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill and that is why there should be a set [[Morality|moral]] framework that the AI cannot override.<ref>{{Cite web|url=https://qz.com/1244055/we-can-train-ai-to-identify-good-and-evil-and-then-use-it-to-teach-us-morality/|title=We can train AI to identify good and evil, and then use it to teach us morality|last=Mitra|first=Ambarish|website=Quartz|date=5 April 2018 |access-date=2019-07-26|archive-date=2019-07-26|archive-url=https://web.archive.org/web/20190726085248/https://qz.com/1244055/we-can-train-ai-to-identify-good-and-evil-and-then-use-it-to-teach-us-morality/|url-status=live}}</ref>
There has been a recent outcry with regard to the engineering of artificial intelligence weapons that have included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop [[Unmanned combat aerial vehicle|autonomous drone weapons]], paralleling similar announcements by Russia and Korea respectively. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, [[Stephen Hawking]] and [[Max Tegmark]] signed a "Future of Life" petition<ref>{{Cite web|url=https://futureoflife.org/ai-principles/|title=AI Principles|website=Future of Life Institute|access-date=2019-07-26|archive-date=2017-12-11|archive-url=https://web.archive.org/web/20171211171044/https://futureoflife.org/ai-principles/|url-status=live}}</ref> to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.<ref name="theatlantic.com">{{cite web|url=https://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/|title=Why Artificial Intelligence Can Too Easily Be Weaponized – The Atlantic|author=Zach Musgrave and Bryan W. Roberts|work=The Atlantic|date=2015-08-14|access-date=2017-03-06|archive-date=2017-04-11|archive-url=https://web.archive.org/web/20170411140722/https://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/|url-status=live}}</ref>


There has been a recent outcry with regard to the engineering of artificial intelligence weapons that have included ideas of a [[AI takeover|robot takeover of mankind]]. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop [[Unmanned combat aerial vehicle|autonomous drone weapons]], paralleling similar announcements by Russia and South Korea<ref>{{Cite news |title=South Korea developing new stealthy drones to support combat aircraft |last=Dominguez |first=Gabriel |date=23 August 2022 |work=[[The Japan Times]] |url=https://www.japantimes.co.jp/news/2022/08/23/asia-pacific/south-korea-stealth-drones-development/ |access-date=14 June 2023}}</ref> respectively. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, [[Stephen Hawking]] and [[Max Tegmark]] signed a "Future of Life" petition<ref>{{Cite web|url=https://futureoflife.org/ai-principles/|title=AI Principles|website=Future of Life Institute|date=11 August 2017 |access-date=2019-07-26|archive-date=2017-12-11|archive-url=https://web.archive.org/web/20171211171044/https://futureoflife.org/ai-principles/|url-status=live}}</ref> to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.<ref name="theatlantic.com">{{cite web|url=https://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/|title=Why Artificial Intelligence Can Too Easily Be Weaponized – The Atlantic|author=Zach Musgrave and Bryan W. Roberts|work=The Atlantic|date=2015-08-14|access-date=2017-03-06|archive-date=2017-04-11|archive-url=https://web.archive.org/web/20170411140722/https://www.theatlantic.com/technology/archive/2015/08/humans-not-robots-are-the-real-reason-artificial-intelligence-is-scary/400994/|url-status=live}}</ref>
"If any major military power pushes ahead with the AI weapon development, a global [[arms race]] is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes [[Skype]] co-founder [[Jaan Tallinn]] and MIT professor of linguistics [[Noam Chomsky]] as additional supporters against AI weaponry.<ref>{{cite web|url=https://blogs.wsj.com/digits/2015/07/27/musk-hawking-warn-of-artificial-intelligence-weapons/|title=Musk, Hawking Warn of Artificial Intelligence Weapons|author=Cat Zakrzewski|work=WSJ|date=2015-07-27|access-date=2017-08-04|archive-date=2015-07-28|archive-url=https://web.archive.org/web/20150728173944/http://blogs.wsj.com/digits/2015/07/27/musk-hawking-warn-of-artificial-intelligence-weapons/|url-status=live}}</ref>


"If any major military power pushes ahead with the AI weapon development, a global [[arms race]] is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the [[AK-47|Kalashnikovs]] of tomorrow", says the petition, which includes [[Skype]] co-founder [[Jaan Tallinn]] and MIT professor of linguistics [[Noam Chomsky]] as additional supporters against AI weaponry.<ref>{{cite web|url=https://blogs.wsj.com/digits/2015/07/27/musk-hawking-warn-of-artificial-intelligence-weapons/|title=Musk, Hawking Warn of Artificial Intelligence Weapons|author=Cat Zakrzewski|work=WSJ|date=2015-07-27|access-date=2017-08-04|archive-date=2015-07-28|archive-url=https://web.archive.org/web/20150728173944/http://blogs.wsj.com/digits/2015/07/27/musk-hawking-warn-of-artificial-intelligence-weapons/|url-status=live}}</ref>
Physicist and Astronomer Royal [[Sir Martin Rees]] has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." [[Huw Price]], a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology." These two professors created the [[Centre for the Study of Existential Risk]] at Cambridge University in the hope of avoiding this threat to human existence.<ref name="theatlantic.com"/>


Physicist and Astronomer Royal [[Sir Martin Rees]] has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." [[Huw Price]], a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the [[Centre for the Study of Existential Risk]] at Cambridge University in the hope of avoiding this threat to human existence.<ref name="theatlantic.com"/>
Regarding the potential for smarter-than-human systems to be employed militarily, the [[Open Philanthropy Project]] writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the [[Machine Intelligence Research Institute]] (MIRI) and the [[Future of Humanity Institute]] (FHI), and there seems to have been less analysis and debate regarding them".<ref name="givewell">{{cite report |author=GiveWell |author-link=GiveWell |date=2015 |title=Potential risks from advanced artificial intelligence |url=http://www.givewell.org/labs/causes/ai-risk |access-date=11 October 2015 |archive-date=12 October 2015 |archive-url=https://web.archive.org/web/20151012084043/http://www.givewell.org/labs/causes/ai-risk |url-status=live }}</ref>


Regarding the potential for smarter-than-human systems to be employed militarily, the [[Open Philanthropy Project]] writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the [[Machine Intelligence Research Institute]] (MIRI) and the [[Future of Humanity Institute]] (FHI), and there seems to have been less analysis and debate regarding them".<ref>{{Cite web |date=August 11, 2015 |title=Potential Risks from Advanced Artificial Intelligence |url=https://www.openphilanthropy.org/research/potential-risks-from-advanced-artificial-intelligence/ |access-date=2024-04-07 |website=Open Philanthropy |language=en-us}}</ref>
===Opaque algorithms===
Approaches like machine learning with [[neural network]]s can result in computers making decisions that they and the humans who programmed them cannot explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for [[explainable artificial intelligence]].<ref>[https://think.kera.org/2017/12/05/inside-the-mind-of-a-i/ Inside The Mind Of A.I.] - Cliff Kuang interview</ref>


Academic Gao Qiqi writes that military use of AI risks escalating military competition between countries and that the impact of AI in military matters will not be limited to one country but will have spillover effects.<ref name=":023">{{Cite book |last1=Bachulska |first1=Alicja |url=https://ecfr.eu/publication/idea-of-china/ |title=The Idea of China: Chinese Thinkers on Power, Progress, and People |last2=Leonard |first2=Mark |last3=Oertel |first3=Janka |date=2 July 2024 |publisher=[[European Council on Foreign Relations]] |isbn=978-1-916682-42-9 |location=Berlin, Germany |pages= |format=EPUB |access-date=22 July 2024 |archive-url=https://web.archive.org/web/20240717120845/https://ecfr.eu/publication/idea-of-china/ |archive-date=17 July 2024 |url-status=live}}</ref>{{Rp|page=91}} Gao cites the example of U.S. military use of AI, which he contends has been used as a scapegoat to evade accountability for decision-making.<ref name=":023" />{{Rp|page=91}}
==Singularity==

A [[Summit on Responsible Artificial Intelligence in the Military Domain|summit]] was held in 2023 in the Hague on the issue of using AI responsibly in the military domain.<ref name="reg">{{Cite web |last=Brandon Vigliarolo |title=International military AI summit ends with 60-state pledge |url=https://www.theregister.com/2023/02/17/military_ai_summit/ |access-date=2023-02-17 |website=www.theregister.com |language=en}}</ref>

===Singularity===
{{Further|Existential risk from artificial general intelligence|Superintelligence|Technological singularity}}
{{Further|Existential risk from artificial general intelligence|Superintelligence|Technological singularity}}


[[Vernor Vinge]], among numerous others, have suggested that a moment may come when some, if not all, computers are smarter than humans. The onset of this event is commonly referred to as "[[Technological singularity|the Singularity]]"<ref name="NYT-2009">{{cite news |last1=Markoff |first1=John |date=25 July 2009 |title=Scientists Worry Machines May Outsmart Man |url=https://www.nytimes.com/2009/07/26/science/26robot.html |url-status=live |archive-url=https://web.archive.org/web/20170225202201/http://www.nytimes.com/2009/07/26/science/26robot.html |archive-date=25 February 2017 |access-date=24 February 2017 |work=The New York Times}}</ref> and is the central point of discussion in the philosophy of [[Singularitarianism]]. While opinions vary as to the ultimate fate of humanity in wake of the Singularity, efforts to mitigate the potential existential risks brought about by artificial intelligence has become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large.
Many researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.<ref name="Muehlhauser, Luke 2012">Muehlhauser, Luke, and Louie Helm. 2012. [https://intelligence.org/files/IE-ME.pdf "Intelligence Explosion and Machine Ethics"] {{Webarchive|url=https://web.archive.org/web/20150507173028/http://intelligence.org/files/IE-ME.pdf |date=2015-05-07 }}. In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.</ref> In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book ''[[Superintelligence: Paths, Dangers, Strategies]]'', philosopher [[Nick Bostrom]] argues that artificial intelligence has the capability to bring about human extinction. He claims that general [[superintelligence]] would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled [[unintended consequences]] could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.<ref name="Bostrom, Nick 2003">Bostrom, Nick. 2003. [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"] {{Webarchive|url=https://web.archive.org/web/20181008090224/http://www.nickbostrom.com/ethics/ai.html |date=2018-10-08 }}. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.</ref>

Many researchers have argued that, through an [[intelligence explosion]], a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.<ref name="Muehlhauser, Luke 2012">Muehlhauser, Luke, and Louie Helm. 2012. [https://intelligence.org/files/IE-ME.pdf "Intelligence Explosion and Machine Ethics"] {{Webarchive|url=https://web.archive.org/web/20150507173028/http://intelligence.org/files/IE-ME.pdf |date=2015-05-07 }}. In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.</ref> In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book ''[[Superintelligence: Paths, Dangers, Strategies]]'', philosopher [[Nick Bostrom]] argues that artificial intelligence has the capability to bring about human extinction. He claims that an [[artificial superintelligence]] would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled [[unintended consequences]] could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.<ref name="Bostrom, Nick 2003">Bostrom, Nick. 2003. [http://www.nickbostrom.com/ethics/ai.html "Ethical Issues in Advanced Artificial Intelligence"] {{Webarchive|url=https://web.archive.org/web/20181008090224/http://www.nickbostrom.com/ethics/ai.html |date=2018-10-08 }}. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.</ref><ref>{{Cite book |last=Bostrom |first=Nick |title=Superintelligence: paths, dangers, strategies |date=2017 |publisher=Oxford University Press |isbn=978-0-19-967811-2 |location=Oxford, United Kingdom}}</ref>

However, Bostrom contended that superintelligence also has the potential to solve many difficult problems such as disease, poverty, and environmental destruction, and could help [[Human enhancement|humans enhance themselves]].<ref>{{Cite journal|last1=Umbrello|first1=Steven|last2=Baum|first2=Seth D.|date=2018-06-01|title=Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing|url=http://www.sciencedirect.com/science/article/pii/S0016328717301908|journal=Futures|language=en|volume=100|pages=63–73|doi=10.1016/j.futures.2018.04.007|hdl=2318/1685533|s2cid=158503813|issn=0016-3287|access-date=2020-11-29|archive-date=2019-05-09|archive-url=https://web.archive.org/web/20190509222110/https://www.sciencedirect.com/science/article/pii/S0016328717301908|url-status=live|hdl-access=free}}</ref>

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref>Yudkowsky, Eliezer. 2011. [https://intelligence.org/files/ComplexValues.pdf "Complex Value Systems in Friendly AI"] {{Webarchive|url=https://web.archive.org/web/20150929212318/http://intelligence.org/files/ComplexValues.pdf |date=2015-09-29 }}. In Schmidhuber, Thórisson, and Looks 2011, 388–393.</ref> AI researchers such as [[Stuart J. Russell]],<ref>{{cite book |last=Russell |first=Stuart |date=October 8, 2019 |title=Human Compatible: Artificial Intelligence and the Problem of Control |location=United States |publisher=Viking |isbn=978-0-525-55861-3 |author-link=Stuart J. Russell |oclc=1083694322|title-link=Human Compatible }}</ref> [[Bill Hibbard]],<ref name="hibbard 2014" /> [[Roman Yampolskiy]],<ref>{{Cite journal|last=Yampolskiy|first=Roman V.|date=2020-03-01|title=Unpredictability of AI: On the Impossibility of Accurately Predicting All Actions of a Smarter Agent|url=https://www.worldscientific.com/doi/abs/10.1142/S2705078520500034|journal=Journal of Artificial Intelligence and Consciousness|volume=07|issue=1|pages=109–118|doi=10.1142/S2705078520500034|s2cid=218916769|issn=2705-0785|access-date=2020-11-29|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060657/https://www.worldscientific.com/doi/abs/10.1142/S2705078520500034|url-status=live}}</ref> [[Shannon Vallor]],<ref>{{Citation|last1=Wallach|first1=Wendell|title=Moral Machines: From Value Alignment to Embodied Virtue|date=2020-09-17|url=https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-14|work=Ethics of Artificial Intelligence|pages=383–412|publisher=Oxford University Press|language=en|doi=10.1093/oso/9780190905033.003.0014|isbn=978-0-19-090503-3|access-date=2020-11-29|last2=Vallor|first2=Shannon|archive-date=2020-12-08|archive-url=https://web.archive.org/web/20201208114354/https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-14|url-status=live}}</ref> [[Steven Umbrello]]<ref>{{Cite journal|last=Umbrello|first=Steven|date=2019|title=Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach|journal=Big Data and Cognitive Computing|language=en|volume=3|issue=1|pages=5|doi=10.3390/bdcc3010005|doi-access=free|hdl=2318/1685727|hdl-access=free}}</ref> and [[Luciano Floridi]]<ref>{{Cite journal|last1=Floridi|first1=Luciano|last2=Cowls|first2=Josh|last3=King|first3=Thomas C.|last4=Taddeo|first4=Mariarosaria|date=2020|title=How to Design AI for Social Good: Seven Essential Factors|journal=Science and Engineering Ethics|language=en|volume=26|issue=3|pages=1771–1796|doi=10.1007/s11948-020-00213-5|issn=1353-3452|pmc=7286860|pmid=32246245}}</ref> have proposed design strategies for developing beneficial machines.


=== Solutions and approaches ===
However, instead of overwhelming the human race and leading to our destruction, Bostrom has also asserted that superintelligence can help us solve many difficult problems such as disease, poverty, and environmental destruction, and could help us to “enhance” ourselves.<ref>{{Cite journal|last1=Umbrello|first1=Steven|last2=Baum|first2=Seth D.|date=2018-06-01|title=Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing|url=http://www.sciencedirect.com/science/article/pii/S0016328717301908|journal=Futures|language=en|volume=100|pages=63–73|doi=10.1016/j.futures.2018.04.007|hdl=2318/1685533|s2cid=158503813|issn=0016-3287|access-date=2020-11-29|archive-date=2019-05-09|archive-url=https://web.archive.org/web/20190509222110/https://www.sciencedirect.com/science/article/pii/S0016328717301908|url-status=live}}</ref>
To address ethical challenges in artificial intelligence, developers have introduced various systems designed to ensure responsible AI behavior. Examples include [[Nvidia]]'s <ref>{{cite web |title=NeMo Guardrails |url=https://github.com/NVIDIA/NeMo-Guardrails |access-date=2024-12-06 |website=NeMo Guardrails}}</ref> [[Llama (language model)|Llama]] Guard, which focuses on improving the [[AI safety|safety]] and [[AI alignment|alignment]] of large AI models, <ref>{{cite web |title=Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations |url=https://ai.meta.com/research/publications/llama-guard-llm-based-input-output-safeguard-for-human-ai-conversations/ |access-date=2024-12-06 |website=Meta.com}}</ref> and [[Preamble (company)|Preamble]]'s customizable guardrail platform.<ref name=":0">{{cite arXiv |eprint=2411.14442 |class=cs.CY |first1=Kristina |last1=Šekrst |first2=Jeremy |last2=McHugh |title=AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development |last3=Cefalu |first3=Jonathan Rodriguez |year=2024}}</ref> These systems aim to address issues such as algorithmic bias, misuse, and vulnerabilities, including [[prompt injection]] attacks, by embedding ethical guidelines into the functionality of AI models.


Prompt injection, a technique by which malicious inputs can cause AI systems to produce unintended or harmful outputs, has been a focus of these developments. Some approaches use customizable policies and rules to analyze both inputs and outputs, ensuring that potentially problematic interactions are filtered or mitigated.<ref name=":0" /> Other tools focus on applying structured constraints to inputs, restricting outputs to predefined parameters,<ref>{{cite web |title=NVIDIA NeMo Guardrails |url=https://docs.nvidia.com/nemo-guardrails/index.html |access-date=2024-12-06 |website=NVIDIA NeMo Guardrails}}</ref> or leveraging real-time monitoring mechanisms to identify and address vulnerabilities.<ref>{{cite arXiv |eprint=2312.06674 |class=cs.CL |first1=Hakan |last1=Inan |first2=Kartikeya |last2=Upasani |title=Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations |first3=Jianfeng |last3=Chi |first4=Rashi |last4=Rungta |first5=Krithika |last5=Iyer |first6=Yuning |last6=Mao |first7=Michael |last7=Tontchev |first8=Qing |last8=Hu |first9=Brian |last9=Fuller |first10=Davide |last10=Testuggine |first11=Madian |last11=Khabsa |year=2023}}</ref> These efforts reflect a broader trend in ensuring that artificial intelligence systems are designed with safety and ethical considerations at the forefront, particularly as their use becomes increasingly widespread in critical applications.<ref>{{cite arXiv |eprint=2402.01822 |class=cs |first1=Yi |last1=Dong |first2=Ronghui |last2=Mu |title=Building Guardrails for Large Language Models |last3=Jin |first3=Gaojie |last4=Qi |first4=Yi |last5=Hu |first5=Jinwei |last6=Zhao |first6=Xingyu |last7=Meng |first7=Jie |last8=Ruan |first8=Wenjie |last9=Huang |first9=Xiaowei |year=2024}}</ref>
The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.<ref name="Muehlhauser, Luke 2012" /><ref name="Bostrom, Nick 2003" /> Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to [[Eliezer Yudkowsky]], there is little reason to suppose that an artificially designed mind would have such an adaptation.<ref>Yudkowsky, Eliezer. 2011. [https://intelligence.org/files/ComplexValues.pdf "Complex Value Systems in Friendly AI"] {{Webarchive|url=https://web.archive.org/web/20150929212318/http://intelligence.org/files/ComplexValues.pdf |date=2015-09-29 }}. In Schmidhuber, Thórisson, and Looks 2011, 388–393.</ref> AI researchers such as [[Stuart J. Russell]],<ref name="HC">{{cite book |last=Russell |first=Stuart |date=October 8, 2019 |title=Human Compatible: Artificial Intelligence and the Problem of Control |location=United States |publisher=Viking |isbn=978-0-525-55861-3 |author-link=Stuart J. Russell |oclc=1083694322|title-link=Human Compatible }}</ref> [[Bill Hibbard]],<ref name="hibbard 2014" /> [[Roman Yampolskiy]],<ref>{{Cite journal|last=Yampolskiy|first=Roman V.|date=2020-03-01|title=Unpredictability of AI: On the Impossibility of Accurately Predicting All Actions of a Smarter Agent|url=https://www.worldscientific.com/doi/abs/10.1142/S2705078520500034|journal=Journal of Artificial Intelligence and Consciousness|volume=07|issue=1|pages=109–118|doi=10.1142/S2705078520500034|s2cid=218916769|issn=2705-0785|access-date=2020-11-29|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060657/https://www.worldscientific.com/doi/abs/10.1142/S2705078520500034|url-status=live}}</ref> [[Shannon Vallor]],<ref>{{Citation|last1=Wallach|first1=Wendell|title=Moral Machines: From Value Alignment to Embodied Virtue|date=2020-09-17|url=https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-14|work=Ethics of Artificial Intelligence|pages=383–412|publisher=Oxford University Press|language=en|doi=10.1093/oso/9780190905033.003.0014|isbn=978-0-19-090503-3|access-date=2020-11-29|last2=Vallor|first2=Shannon|archive-date=2020-12-08|archive-url=https://web.archive.org/web/20201208114354/https://oxford.universitypressscholarship.com/view/10.1093/oso/9780190905033.001.0001/oso-9780190905033-chapter-14|url-status=live}}</ref> [[Steven Umbrello]]<ref>{{Cite journal|last=Umbrello|first=Steven|date=2019|title=Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach|journal=Big Data and Cognitive Computing|language=en|volume=3|issue=1|pages=5|doi=10.3390/bdcc3010005|doi-access=free}}</ref> and [[Luciano Floridi]]<ref>{{Cite journal|last1=Floridi|first1=Luciano|last2=Cowls|first2=Josh|last3=King|first3=Thomas C.|last4=Taddeo|first4=Mariarosaria|date=2020|title=How to Design AI for Social Good: Seven Essential Factors|journal=Science and Engineering Ethics|language=en|volume=26|issue=3|pages=1771–1796|doi=10.1007/s11948-020-00213-5|issn=1353-3452|pmc=7286860|pmid=32246245}}</ref> have proposed design strategies for developing beneficial machines.


==Actors in AI ethics==
== Institutions in AI policy & ethics ==
There are many organisations concerned with AI ethics and policy, public and governmental as well as corporate and societal.
There are many organizations concerned with AI ethics and policy, public and governmental as well as corporate and societal.


[[Amazon.com, Inc.|Amazon]], [[Google]], [[Facebook]], [[IBM]], and [[Microsoft]] have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.<ref>{{cite news |last1=Fiegerman |first1=Seth |title=Facebook, Google, Amazon create group to ease AI concerns |url=https://money.cnn.com/2016/09/28/technology/partnership-on-ai/ |work=CNNMoney |date=28 September 2016 }}</ref>
[[Amazon.com, Inc.|Amazon]], [[Google]], [[Facebook]], [[IBM]], and [[Microsoft]] have established a [[Nonprofit organization|non-profit]], The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.<ref>{{cite news |last1=Fiegerman |first1=Seth |title=Facebook, Google, Amazon create group to ease AI concerns |url=https://money.cnn.com/2016/09/28/technology/partnership-on-ai/ |work=CNNMoney |date=28 September 2016 |access-date=18 August 2020 |archive-date=17 September 2020 |archive-url=https://web.archive.org/web/20200917141730/https://money.cnn.com/2016/09/28/technology/partnership-on-ai/ |url-status=live }}</ref>


The [[IEEE]] put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization.
The [[IEEE]] put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization. The IEEE's [https://standards.ieee.org/industry-connections/activities/ieee-global-initiative/ Ethics of Autonomous Systems] initiative aims to address ethical dilemmas related to decision-making and the impact on society while developing guidelines for the development and use of autonomous systems. In particular in domains like artificial intelligence and robotics, the Foundation for Responsible Robotics is dedicated to promoting moral behavior as well as responsible robot design and use, ensuring that robots maintain moral principles and are congruent with human values.


Traditionally, [[government]] has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and [[NGO|non-government organizations]] to ensure AI is ethically applied.
Traditionally, [[government]] has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and [[NGO|non-government organizations]] to ensure AI is ethically applied.


AI ethics work is structured by personal values and professional commitments, and involves constructing contextual meaning through data and algorithms. Therefore, AI ethics work needs to be incentivized.<ref>{{Cite journal |last1=Slota |first1=Stephen C. |last2=Fleischmann |first2=Kenneth R. |last3=Greenberg |first3=Sherri |last4=Verma |first4=Nitin |last5=Cummings |first5=Brenna |last6=Li |first6=Lan |last7=Shenefiel |first7=Chris |date=2023 |title=Locating the work of artificial intelligence ethics |url=https://onlinelibrary.wiley.com/doi/10.1002/asi.24638 |journal=Journal of the Association for Information Science and Technology |language=en |volume=74 |issue=3 |pages=311–322 |doi=10.1002/asi.24638 |s2cid=247342066 |issn=2330-1635 |access-date=2023-07-21 |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925020205/https://onlinelibrary.wiley.com/doi/10.1002/asi.24638 |url-status=live }}</ref>
Student movements for ethical artificial intelligence have also emerged<ref>{{Cite web|last=Business|first=Rachel Metz, CNN|title=These high school students are fighting for ethical AI|url=https://www.cnn.com/2021/09/29/tech/ethical-ai-youth-activists-encode-justice/index.html|access-date=2022-01-13|website=CNN}}</ref>. Encode Justice<ref>{{Cite web|title=Encode Justice|url=https://encodejustice.org/|access-date=2022-01-13|website=Encode Justice|language=en}}</ref>, the most notable youth-led group in AI ethics, was founded in July 2020 by 17-year-old activist Sneha Revanur. The organization's mission centers on promoting justice, accountability, and human rights in AI development.


=== Intergovernmental initiatives ===
=== Intergovernmental initiatives ===
* The [[European Commission]] has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this published its 'Ethics Guidelines for Trustworthy Artificial Intelligence'.<ref>{{Cite web|url=https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai|title=Ethics guidelines for trustworthy AI|date=2019-04-08|website=Shaping Europe’s digital future – European Commission|publisher=European Commission|language=en|access-date=2020-02-20|archive-date=2020-02-20|archive-url=https://web.archive.org/web/20200220002342/https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai|url-status=live}}</ref> The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020.<ref>{{Cite web|url=https://ec.europa.eu/digital-single-market/en/news/white-paper-artificial-intelligence-european-approach-excellence-and-trust|title = White Paper on Artificial Intelligence – a European approach to excellence and trust &#124; Shaping Europe's digital future}}</ref>
* The [[European Commission]] has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this published its "Ethics Guidelines for [[Trustworthy AI|Trustworthy Artificial Intelligence]]".<ref>{{Cite web|url=https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai|title=Ethics guidelines for trustworthy AI|date=2019-04-08|website=Shaping Europe’s digital future – European Commission|publisher=European Commission|language=en|access-date=2020-02-20|archive-date=2020-02-20|archive-url=https://web.archive.org/web/20200220002342/https://ec.europa.eu/digital-single-market/en/news/ethics-guidelines-trustworthy-ai|url-status=live}}</ref> The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020.<ref>{{Cite web|url=https://ec.europa.eu/digital-single-market/en/news/white-paper-artificial-intelligence-european-approach-excellence-and-trust|title=White Paper on Artificial Intelligence – a European approach to excellence and trust &#124; Shaping Europe's digital future|date=19 February 2020 |access-date=2021-03-18|archive-date=2021-03-06|archive-url=https://web.archive.org/web/20210306003222/https://ec.europa.eu/digital-single-market/en/news/white-paper-artificial-intelligence-european-approach-excellence-and-trust|url-status=live}}</ref> The European Commission also proposed the [[Artificial Intelligence Act]].<ref name="Financial Times-2021" />
* The [[OECD]] established an OECD AI Policy Observatory.<ref>{{cite web |title=OECD AI Policy Observatory |url=https://www.oecd.ai/}}</ref>
* The [[OECD]] established an OECD AI Policy Observatory.<ref>{{cite web |title=OECD AI Policy Observatory |url=https://www.oecd.ai/ |access-date=2021-03-18 |archive-date=2021-03-08 |archive-url=https://web.archive.org/web/20210308171133/https://oecd.ai/ |url-status=live }}</ref>
* In 2021, [[UNESCO]] adopted the Recommendation on the Ethics of Artificial Intelligence,<ref>{{Cite book |url=https://unesdoc.unesco.org/ark:/48223/pf0000381137.locale=en |title=Recommendation on the Ethics of Artificial Intelligence |publisher=UNESCO |year=2021}}</ref> the first global standard on the ethics of AI.<ref>{{Cite web |date=2021-11-26 |title=UNESCO member states adopt first global agreement on AI ethics |url=https://www.helsinkitimes.fi/themes/themes/science-and-technology/20454-unesco-member-states-adopt-first-global-agreement-on-ai-ethics.html |access-date=2023-04-26 |website=Helsinki Times |language=en-gb |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925020210/https://www.helsinkitimes.fi/themes/themes/science-and-technology/20454-unesco-member-states-adopt-first-global-agreement-on-ai-ethics.html |url-status=live }}</ref>


=== Governmental initiatives ===
=== Governmental initiatives ===
* In the [[United States]] the [[Obama]] administration put together a Roadmap for AI Policy.<ref>{{Cite news|date=2016-12-21|title=The Obama Administration's Roadmap for AI Policy|work=Harvard Business Review|url=https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy|access-date=2021-03-16|issn=0017-8012|archive-date=2021-01-22|archive-url=https://web.archive.org/web/20210122003445/https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy|url-status=live}}</ref> The Obama Administration released two prominent [[white papers]] on the future and impact of AI. In 2019 the White House through an executive memo known as the "American AI Initiative" instructed NIST the (National Institute of Standards and Technology) to begin work on Federal Engagement of AI Standards (February 2019).<ref>{{Cite web|title=Accelerating America's Leadership in Artificial Intelligence – The White House|url=https://trumpwhitehouse.archives.gov/articles/accelerating-americas-leadership-in-artificial-intelligence/|access-date=2021-03-16|website=trumpwhitehouse.archives.gov|archive-date=2021-02-25|archive-url=https://web.archive.org/web/20210225073748/https://trumpwhitehouse.archives.gov/articles/accelerating-americas-leadership-in-artificial-intelligence/|url-status=live}}</ref>
* In the [[United States]] the [[Obama]] administration put together a Roadmap for AI Policy.<ref>{{Cite news|date=2016-12-21|title=The Obama Administration's Roadmap for AI Policy|work=Harvard Business Review|url=https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy|access-date=2021-03-16|issn=0017-8012|archive-date=2021-01-22|archive-url=https://web.archive.org/web/20210122003445/https://hbr.org/2016/12/the-obama-administrations-roadmap-for-ai-policy|url-status=live}}</ref> The Obama Administration released two prominent [[white papers]] on the future and impact of AI. In 2019 the White House through an executive memo known as the "American AI Initiative" instructed NIST the (National Institute of Standards and Technology) to begin work on Federal Engagement of AI Standards (February 2019).<ref>{{Cite web|title=Accelerating America's Leadership in Artificial Intelligence – The White House|url=https://trumpwhitehouse.archives.gov/articles/accelerating-americas-leadership-in-artificial-intelligence/|access-date=2021-03-16|website=trumpwhitehouse.archives.gov|archive-date=2021-02-25|archive-url=https://web.archive.org/web/20210225073748/https://trumpwhitehouse.archives.gov/articles/accelerating-americas-leadership-in-artificial-intelligence/|url-status=live}}</ref>
*In January 2020, in the United States, the [[Trump administration|Trump Administration]] released a draft executive order issued by the Office of Management and Budget (OMB) on “Guidance for Regulation of Artificial Intelligence Applications" (“OMB AI Memorandum”). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers for usage of AI, and keep American AI technology competitive in a global market. There is a nod to the need for privacy concerns, but no further detail on enforcement. The advances of American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumnavigate any state laws and regulations that a market might see as too onerous to fulfill.<ref>{{Cite web|date=2020-01-13|title=Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, "Guidance for Regulation of Artificial Intelligence Applications"|url=https://www.federalregister.gov/documents/2020/01/13/2020-00261/request-for-comments-on-a-draft-memorandum-to-the-heads-of-executive-departments-and-agencies|access-date=2020-11-28|website=Federal Register|archive-date=2020-11-25|archive-url=https://web.archive.org/web/20201125060218/https://www.federalregister.gov/documents/2020/01/13/2020-00261/request-for-comments-on-a-draft-memorandum-to-the-heads-of-executive-departments-and-agencies|url-status=live}}</ref>
*In January 2020, in the United States, the [[First presidency of Donald Trump|Trump Administration]] released a draft executive order issued by the Office of Management and Budget (OMB) on "Guidance for Regulation of Artificial Intelligence Applications" ("OMB AI Memorandum"). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers for usage of AI, and keep American AI technology competitive in a global market. There is a nod to the need for privacy concerns, but no further detail on enforcement. The advances of American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumnavigate any state laws and regulations that a market might see as too onerous to fulfill.<ref>{{Cite web|date=2020-01-13|title=Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, "Guidance for Regulation of Artificial Intelligence Applications"|url=https://www.federalregister.gov/documents/2020/01/13/2020-00261/request-for-comments-on-a-draft-memorandum-to-the-heads-of-executive-departments-and-agencies|access-date=2020-11-28|website=Federal Register|archive-date=2020-11-25|archive-url=https://web.archive.org/web/20201125060218/https://www.federalregister.gov/documents/2020/01/13/2020-00261/request-for-comments-on-a-draft-memorandum-to-the-heads-of-executive-departments-and-agencies|url-status=live}}</ref>
*The [[Computing Community Consortium|Computing Community Consortium (CCC)]] weighed in with a 100-plus page draft report<ref>{{Cite web|url=https://www.hpcwire.com/2019/05/14/ccc-offers-draft-20-year-ai-roadmap-seeks-comments/|title=CCC Offers Draft 20-Year AI Roadmap; Seeks Comments|date=2019-05-14|website=HPCwire|access-date=2019-07-22|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060659/https://www.hpcwire.com/2019/05/14/ccc-offers-draft-20-year-ai-roadmap-seeks-comments/|url-status=live}}</ref> – ''A 20-Year Community Roadmap for Artificial Intelligence Research in the US''<ref>{{Cite web|url=https://www.cccblog.org/2019/05/13/request-comments-on-draft-a-20-year-community-roadmap-for-ai-research-in-the-us/|title=Request Comments on Draft: A 20-Year Community Roadmap for AI Research in the US » CCC Blog|access-date=2019-07-22|archive-date=2019-05-14|archive-url=https://web.archive.org/web/20190514193546/https://www.cccblog.org/2019/05/13/request-comments-on-draft-a-20-year-community-roadmap-for-ai-research-in-the-us/|url-status=live}}</ref>
*The [[Computing Community Consortium|Computing Community Consortium (CCC)]] weighed in with a 100-plus page draft report<ref>{{Cite web|url=https://www.hpcwire.com/2019/05/14/ccc-offers-draft-20-year-ai-roadmap-seeks-comments/|title=CCC Offers Draft 20-Year AI Roadmap; Seeks Comments|date=2019-05-14|website=HPCwire|access-date=2019-07-22|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060659/https://www.hpcwire.com/2019/05/14/ccc-offers-draft-20-year-ai-roadmap-seeks-comments/|url-status=live}}</ref> – ''A 20-Year Community Roadmap for Artificial Intelligence Research in the US''<ref>{{Cite web|url=https://www.cccblog.org/2019/05/13/request-comments-on-draft-a-20-year-community-roadmap-for-ai-research-in-the-us/|title=Request Comments on Draft: A 20-Year Community Roadmap for AI Research in the US » CCC Blog|date=13 May 2019 |access-date=2019-07-22|archive-date=2019-05-14|archive-url=https://web.archive.org/web/20190514193546/https://www.cccblog.org/2019/05/13/request-comments-on-draft-a-20-year-community-roadmap-for-ai-research-in-the-us/|url-status=live}}</ref>
* The [[Center for Security and Emerging Technology]] advises US policymakers on the security implications of emerging technologies such as AI.
* The [[Center for Security and Emerging Technology]] advises US policymakers on the security implications of emerging technologies such as AI.
* In Russia, the first-ever Russian "Codex of ethics of artificial intelligence" for business was signed in 2021. It was driven by [[Analytical Center for the Government of the Russian Federation]] together with major commercial and academic institutions such as [[Sberbank]], [[Yandex]], [[Rosatom]], [[Higher School of Economics]], [[Moscow Institute of Physics and Technology]], [[ITMO University]], [[Nanosemantics]], [[Rostelecom]], [[CIAN]] and others.<ref>{{in lang|ru}} [https://www.kommersant.ru/doc/5089365 Интеллектуальные правила] {{Webarchive|url=https://web.archive.org/web/20211230212952/https://www.kommersant.ru/doc/5089365 |date=2021-12-30 }} — [[Kommersant]], 25.11.2021</ref>
* The [[Non-Human Party]] is running for election in [[New South Wales]], with policies around granting rights to robots, animals and generally, non-human entities whose intelligence has been overlooked.<ref>{{cite web|url=https://nonhuman.party |title=Non-Human Party |year=2021}}</ref>


=== Academic initiatives ===
=== Academic initiatives ===
*There are three research institutes at the [[University of Oxford]] that are centrally focused on AI ethics. The [[Future of Humanity Institute]] that focuses both on AI Safety<ref>{{cite arxiv|eprint=1705.08807|class=cs.AI|first1=Katja|last1=Grace|first2=John|last2=Salvatier|title=When Will AI Exceed Human Performance? Evidence from AI Experts|date=2018-05-03|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain}}</ref> and the Governance of AI.<ref>{{Cite web|title=China wants to shape the global future of artificial intelligence|url=https://www.technologyreview.com/2018/03/16/144630/china-wants-to-shape-the-global-future-of-artificial-intelligence/|url-status=live|archive-url=https://web.archive.org/web/20201120052853/https://www.technologyreview.com/2018/03/16/144630/china-wants-to-shape-the-global-future-of-artificial-intelligence/|archive-date=2020-11-20|access-date=2020-11-29|website=MIT Technology Review|language=en}}</ref> The Institute for Ethics in AI, directed by [[John Tasioulas]], whose primary goal, among others, is to promote AI ethics as a field proper in comparison to related applied ethics fields. The [[Oxford Internet Institute]], directed by [[Luciano Floridi]], focuses on the ethics of near-term AI technologies and ICTs.<ref>{{Cite journal|last1=Floridi|first1=Luciano|last2=Cowls|first2=Josh|last3=Beltrametti|first3=Monica|last4=Chatila|first4=Raja|last5=Chazerand|first5=Patrice|last6=Dignum|first6=Virginia|last7=Luetge|first7=Christoph|last8=Madelin|first8=Robert|last9=Pagallo|first9=Ugo|last10=Rossi|first10=Francesca|last11=Schafer|first11=Burkhard|date=2018-12-01|title=AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations|journal=Minds and Machines|language=en|volume=28|issue=4|pages=689–707|doi=10.1007/s11023-018-9482-5|issn=1572-8641|pmc=6404626|pmid=30930541}}</ref>
*There are three research institutes at the [[University of Oxford]] that are centrally focused on AI ethics. The [[Future of Humanity Institute]] that focuses both on AI Safety<ref>{{cite arXiv|eprint=1705.08807|class=cs.AI|first1=Katja|last1=Grace|first2=John|last2=Salvatier|title=When Will AI Exceed Human Performance? Evidence from AI Experts|date=2018-05-03|last3=Dafoe|first3=Allan|last4=Zhang|first4=Baobao|last5=Evans|first5=Owain}}</ref> and the Governance of AI.<ref>{{Cite web|title=China wants to shape the global future of artificial intelligence|url=https://www.technologyreview.com/2018/03/16/144630/china-wants-to-shape-the-global-future-of-artificial-intelligence/|url-status=live|archive-url=https://web.archive.org/web/20201120052853/https://www.technologyreview.com/2018/03/16/144630/china-wants-to-shape-the-global-future-of-artificial-intelligence/|archive-date=2020-11-20|access-date=2020-11-29|website=MIT Technology Review|language=en}}</ref> The Institute for Ethics in AI, directed by [[John Tasioulas]], whose primary goal, among others, is to promote AI ethics as a field proper in comparison to related applied ethics fields. The [[Oxford Internet Institute]], directed by [[Luciano Floridi]], focuses on the ethics of near-term AI technologies and ICTs.<ref>{{Cite journal|last1=Floridi|first1=Luciano|last2=Cowls|first2=Josh|last3=Beltrametti|first3=Monica|last4=Chatila|first4=Raja|last5=Chazerand|first5=Patrice|last6=Dignum|first6=Virginia|last7=Luetge|first7=Christoph|last8=Madelin|first8=Robert|last9=Pagallo|first9=Ugo|last10=Rossi|first10=Francesca|last11=Schafer|first11=Burkhard|date=2018-12-01|title=AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations|journal=Minds and Machines|language=en|volume=28|issue=4|pages=689–707|doi=10.1007/s11023-018-9482-5|issn=1572-8641|pmc=6404626|pmid=30930541}}</ref>
*The Centre for Digital Governance at the [[Hertie School]] in Berlin was co-founded by [[Joanna Bryson]] to research questions of ethics and technology.<ref>{{cite magazine|date=|title=Joanna J. Bryson|url=https://www.wired.com/author/joanna-j-bryson/|magazine=WIRED|location=|access-date=13 January 2023|archive-date=15 March 2023|archive-url=https://web.archive.org/web/20230315194630/https://www.wired.com/author/joanna-j-bryson/|url-status=live}}</ref>
*The [[AI Now Institute]] at [[NYU]] is a research institute studying the social implications of artificial intelligence. Its interdisciplinary research focuses on the themes bias and inclusion, labour and automation, rights and liberties, and safety and civil infrastructure.<ref>{{Cite web|title=New Artificial Intelligence Research Institute Launches|date=2017-11-20|url=https://engineering.nyu.edu/news/new-artificial-intelligence-research-institute-launches|access-date=2021-02-21|language=en-US|archive-date=2020-09-18|archive-url=https://web.archive.org/web/20200918091106/https://engineering.nyu.edu/news/new-artificial-intelligence-research-institute-launches|url-status=live}}</ref>
*The [[AI Now Institute]] at [[NYU]] is a research institute studying the social implications of artificial intelligence. Its interdisciplinary research focuses on the themes bias and inclusion, labour and automation, rights and liberties, and safety and civil infrastructure.<ref>{{Cite web|title=New Artificial Intelligence Research Institute Launches|date=2017-11-20|url=https://engineering.nyu.edu/news/new-artificial-intelligence-research-institute-launches|access-date=2021-02-21|language=en-US|archive-date=2020-09-18|archive-url=https://web.archive.org/web/20200918091106/https://engineering.nyu.edu/news/new-artificial-intelligence-research-institute-launches|url-status=live}}</ref>
*The [[Institute for Ethics and Emerging Technologies]] (IEET) researches the effects of AI on unemployment,<ref>{{Cite book|url=https://www.worldcat.org/oclc/976407024|title=Surviving the machine age: intelligent technology and the transformation of human work|date=15 March 2017|isbn=978-3-319-51165-8|editor=Hughes, James J.|location=Cham, Switzerland|oclc=976407024|access-date=29 November 2020|editor2=LaGrandeur, Kevin|archive-url=https://web.archive.org/web/20210318060659/https://www.worldcat.org/title/surviving-the-machine-age-intelligent-technology-and-the-transformation-of-human-work/oclc/976407024|archive-date=18 March 2021|url-status=live}}</ref><ref>{{Cite book|last=Danaher, John|url=https://www.worldcat.org/oclc/1114334813|title=Automation and utopia: human flourishing in a world without work|year=2019|isbn=978-0-674-24220-3|location=Cambridge, Massachusetts|oclc=1114334813}}</ref> and policy.
*The [[Institute for Ethics and Emerging Technologies]] (IEET) researches the effects of AI on unemployment,<ref>{{Cite book|url=https://www.worldcat.org/oclc/976407024|title=Surviving the machine age: intelligent technology and the transformation of human work|date=15 March 2017|isbn=978-3-319-51165-8|editor=James J. Hughes|publisher=Palgrave Macmillan Cham|location=Cham, Switzerland|oclc=976407024|access-date=29 November 2020|editor2=LaGrandeur, Kevin|archive-url=https://web.archive.org/web/20210318060659/https://www.worldcat.org/title/surviving-the-machine-age-intelligent-technology-and-the-transformation-of-human-work/oclc/976407024|archive-date=18 March 2021|url-status=live}}</ref><ref>{{Cite book|last=Danaher, John|url=https://www.worldcat.org/oclc/1114334813|title=Automation and utopia: human flourishing in a world without work|year=2019|isbn=978-0-674-24220-3|publisher=Harvard University Press|location=Cambridge, Massachusetts|oclc=1114334813}}</ref> and policy.
*The [[Institute for Ethics in Artificial Intelligence]] (IEAI) at the [[Technical University of Munich]] directed by [[Christoph Lütge]] conducts research across various domains such as mobility, employment, healthcare and sustainability.<ref>{{Cite web|title=TUM Institute for Ethics in Artificial Intelligence officially opened|url=https://www.tum.de/nc/en/about-tum/news/press-releases/details/35727/|url-status=live|archive-url=https://web.archive.org/web/20201210032545/https://www.tum.de/nc/en/about-tum/news/press-releases/details/35727/|archive-date=2020-12-10|access-date=2020-11-29|website=www.tum.de|language=en}}</ref>
*The [[Institute for Ethics in Artificial Intelligence]] (IEAI) at the [[Technical University of Munich]] directed by [[Christoph Lütge]] conducts research across various domains such as mobility, employment, healthcare and sustainability.<ref>{{Cite web|title=TUM Institute for Ethics in Artificial Intelligence officially opened|url=https://www.tum.de/nc/en/about-tum/news/press-releases/details/35727/|url-status=live|archive-url=https://web.archive.org/web/20201210032545/https://www.tum.de/nc/en/about-tum/news/press-releases/details/35727/|archive-date=2020-12-10|access-date=2020-11-29|website=www.tum.de|language=en}}</ref>
*[[Barbara J. Grosz]], the Higgins Professor of Natural Sciences at the [[Harvard John A. Paulson School of Engineering and Applied Sciences]] has initiated the Embedded EthiCS into [[Harvard University|Harvard]]'s computer science curriculum to develop a future generation of computer scientists with worldview that takes into account the social impact of their work.<ref>{{Cite web |last=Communications |first=Paul Karoff SEAS |date=2019-01-25 |title=Harvard works to embed ethics in computer science curriculum |url=https://news.harvard.edu/gazette/story/2019/01/harvard-works-to-embed-ethics-in-computer-science-curriculum/ |access-date=2023-04-06 |website=Harvard Gazette |language=en-US |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925020310/https://news.harvard.edu/gazette/story/2019/01/harvard-works-to-embed-ethics-in-computer-science-curriculum/ |url-status=live }}</ref>


=== Private organizations ===
=== Private organizations ===
* [[Algorithmic Justice League]]<ref>{{Cite news|last=Lee|first=Jennifer|date=2020-02-08|title=When Bias Is Coded Into Our Technology|language=en|work=NPR|url=https://www.npr.org/sections/codeswitch/2020/02/08/770174171/when-bias-is-coded-into-our-technology|access-date=2021-12-22}}</ref>

* [[Algorithmic Justice League]]<ref>{{Cite news|last=Lee|first=Jennifer 8|date=2020-02-08|title=When Bias Is Coded Into Our Technology|language=en|work=NPR|url=https://www.npr.org/sections/codeswitch/2020/02/08/770174171/when-bias-is-coded-into-our-technology|access-date=2021-12-22}}</ref>
* [[Black in AI]]<ref>{{Cite journal|date=2018-12-12|title=How one conference embraced diversity|journal=Nature|language=en|volume=564|issue=7735|pages=161–162|doi=10.1038/d41586-018-07718-x|pmid=31123357|s2cid=54481549|doi-access=free}}</ref>
* [[Black in AI]]<ref name=":1">{{Cite journal|date=2018-12-12|title=How one conference embraced diversity|url=https://www.nature.com/articles/d41586-018-07718-x|journal=Nature|language=en|volume=564|issue=7735|pages=161–162|doi=10.1038/d41586-018-07718-x|pmid=31123357|s2cid=54481549}}</ref>
* [[Data for Black Lives]]<ref>{{Cite news|last=Roose|first=Kevin|date=2020-12-30|title=The 2020 Good Tech Awards|language=en-US|work=The New York Times|url=https://www.nytimes.com/2020/12/30/technology/2020-good-tech-awards.html|access-date=2021-12-21|issn=0362-4331}}</ref>
* [[Data for Black Lives]]<ref>{{Cite news|last=Roose|first=Kevin|date=2020-12-30|title=The 2020 Good Tech Awards|language=en-US|work=The New York Times|url=https://www.nytimes.com/2020/12/30/technology/2020-good-tech-awards.html|access-date=2021-12-21|issn=0362-4331}}</ref>
* Queer in AI<ref name=":1" />
*Encode Justice<ref>{{Cite web|last=Metz|first=Rachel|title=These high school students are fighting for ethical AI|url=https://www.cnn.com/2021/09/29/tech/ethical-ai-youth-activists-encode-justice/index.html|url-status=live|access-date=2022-01-12|website=Encode Justice|language=en}}</ref>


== History ==
== Role and impact of fiction ==
Historically speaking, the investigation of moral and ethical implications of "thinking machines" goes back at least to the [[Age of Enlightenment|Enlightenment]]: [[Gottfried Wilhelm Leibniz|Leibniz]] already poses the question if we might attribute intelligence to a mechanism that behaves as if it were a sentient being,<ref>{{Cite journal |last=Lodge |first=Paul |date=2014 |title=Leibniz's Mill Argument Against Mechanical Materialism Revisited |journal=Ergo, an Open Access Journal of Philosophy |volume=1 |issue=20201214 |doi=10.3998/ergo.12405314.0001.003 |issn=2330-4014 |doi-access=free |hdl=2027/spo.12405314.0001.003}}</ref> and so does [[René Descartes|Descartes]], who describes what could be considered an early version of the [[Turing test]].<ref>{{Citation |last1=Bringsjord |first1=Selmer |title=Artificial Intelligence |date=2020 |editor-last=Zalta |editor-first=Edward N. |url=https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/ |access-date=2023-12-08 |edition=Summer 2020 |publisher=Metaphysics Research Lab, Stanford University |last2=Govindarajulu |first2=Naveen Sundar |editor2-last=Nodelman |editor2-first=Uri |encyclopedia=The Stanford Encyclopedia of Philosophy |archive-date=2022-03-08 |archive-url=https://web.archive.org/web/20220308015735/https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/ |url-status=live }}</ref>
{{Main|Artificial intelligence in fiction}}
The role of fiction with regards to AI ethics has been a complex one. One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: Historically, fiction has been prefiguring common tropes that have not only influenced goals and visions for AI, but also outlined ethical questions and common fears associated with it. During the second half of the twentieth and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games have frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the ''Institut de Robòtica i Informàtica Industrial'' (Institute of robotics and industrial computing) at the Technical University of Catalonia notes,<ref>Torras, Carme, (2020), “Science-Fiction: A Mirror for the Future of Humankind” in ''IDEES'', Centre d'estudis de temes contemporanis (CETC), Barcelona. https://revistaidees.cat/en/science-fiction-favors-engaging-debate-on-artificial-intelligence-and-ethics/ Retrieved on 2021-06-10</ref> in higher education, science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.


The [[Romanticism|romantic]] period has several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in [[Mary Shelley]]'s ''[[Frankenstein]]''. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought ethical implications of unhinged technical developments to the forefront of fiction: [[R.U.R.|''R.U.R – Rossum's Universal Robots'']], [[Karel Čapek]]'s play of sentient robots endowed with emotions used as slave labor is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, ''robota'')<ref>Kulesz, O. (2018). "[https://unesdoc.unesco.org/ark:/48223/pf0000380584 Culture, Platforms and Machines]". UNESCO, Paris.</ref> but was also an international success after it premiered in 1921. [[George Bernard Shaw]]'s play ''[[Back to Methuselah]]'', published in 1921, questions at one point the validity of thinking machines that act like humans; [[Fritz Lang]]'s 1927 film ''[[Metropolis (1927 film)|Metropolis]]'' shows an [[Android (robot)|android]] leading the uprising of the exploited masses against the oppressive regime of a [[Technocracy|technocratic]] society.
=== History ===
In the 1950s, [[Isaac Asimov]] considered the issue of how to control machines in ''[[I, Robot]]''. At the insistence of his editor [[John W. Campbell Jr.]], he proposed the [[Three Laws of Robotics]] to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior.<ref>{{Cite book |last=Jr |first=Henry C. Lucas |url=https://books.google.com/books?id=FzwnridL72IC&dq=digital+Much+of+his+work+was+then+spent+testing+the+boundaries+of+his+three+laws+to+see+where+they+would+break+down,+or+where+they+would+create+paradoxical+or+unanticipated+behavior.&pg=PP13 |title=Information Technology and the Productivity Paradox: Assessing the Value of Investing in IT |date=1999-04-29 |publisher=Oxford University Press |isbn=978-0-19-802838-3 |language=en |access-date=2024-02-21 |archive-date=2024-09-25 |archive-url=https://web.archive.org/web/20240925020211/https://books.google.com/books?id=FzwnridL72IC&dq=digital+Much+of+his+work+was+then+spent+testing+the+boundaries+of+his+three+laws+to+see+where+they+would+break+down,+or+where+they+would+create+paradoxical+or+unanticipated+behavior.&pg=PP13#v=onepage&q&f=false |url-status=live }}</ref> His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.<ref name="Asimov2008">{{Cite book |last=Asimov |first=Isaac |title=I, Robot |title-link=I, Robot |publisher=Bantam |year=2008 |isbn=978-0-553-38256-3 |location=New York}}</ref> More recently, academics and many governments have challenged the idea that AI can itself be held accountable.<ref name="lacuna">{{cite journal |last1=Bryson |first1=Joanna |last2=Diamantis |first2=Mihailis |last3=Grant |first3=Thomas |date=September 2017 |title=Of, for, and by the people: the legal lacuna of synthetic persons |journal=Artificial Intelligence and Law |volume=25 |issue=3 |pages=273–291 |doi=10.1007/s10506-017-9214-9 |ref=lacuna |doi-access=free}}</ref> A panel convened by the [[United Kingdom]] in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.<ref name="principles">{{cite web |date=September 2010 |title=Principles of robotics |url=https://epsrc.ukri.org/research/ourportfolio/themes/engineering/activities/principlesofrobotics/ |url-status=live |archive-url=https://web.archive.org/web/20180401004346/https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/ |archive-date=1 April 2018 |access-date=10 January 2019 |publisher=UK's EPSRC |ref=principles}}</ref>


[[Eliezer Yudkowsky]], from the [[Machine Intelligence Research Institute]] suggested in 2004 a need to study how to build a "[[Friendly AI]]", meaning that there should also be efforts to make AI intrinsically friendly and humane.<ref>{{Cite web |last=Yudkowsky |first=Eliezer |date=July 2004 |title=Why We Need Friendly AI |url=http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html |url-status=dead |archive-url=https://web.archive.org/web/20120524150856/http://www.asimovlaws.com/articles/archives/2004/07/why_we_need_fri_1.html |archive-date=May 24, 2012 |website=3 laws unsafe}}</ref>
Historically speaking, the investigation of moral and ethical implications of “thinking machines” goes back at least to the Enlightenment: [[Gottfried Wilhelm Leibniz|Leibniz]] already poses the question if we might attribute intelligence to a mechanism that behaves as if it were a sentient being,<ref>Gottfried Wilhelm Leibniz, (1714): Monadology, § 17 (“Mill Argument”). See also: Lodge, P. (2014): «Leibniz’s Mill Argument: Against Mechanical Materialism Revisited”, in ERGO, Volume 1, No. 03) https://quod.lib.umich.edu/e/ergo/12405314.0001.003/--leibniz-s-mill-argument-against-mechanical-materialism?rgn=main;view=fulltext Retrieved on 2021-06-10</ref> and so does [[René Descartes|Descartes]], who describes what could be considered an early version of the Turing Test.<ref>Cited in Bringsjord, Selmer and Naveen Sundar Govindarajulu, "Artificial Intelligence", ''The Stanford Encyclopedia of Philosophy'' (Summer 2020 Edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/sum2020/entries/artificial-intelligence/>. Retrieved on 2021-06-10</ref>


In 2009, academics and technical experts attended a conference organized by the [[Association for the Advancement of Artificial Intelligence]] to discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard.<ref>{{Cite journal |last=Aleksander |first=Igor |date=March 2017 |title=Partners of Humans: A Realistic Assessment of the Role of Robots in the Foreseeable Future |url=http://journals.sagepub.com/doi/10.1057/s41265-016-0032-4 |journal=Journal of Information Technology |language=en |volume=32 |issue=1 |pages=1–9 |doi=10.1057/s41265-016-0032-4 |issn=0268-3962 |s2cid=5288506 |access-date=2024-02-21 |archive-date=2024-02-21 |archive-url=https://web.archive.org/web/20240221065213/https://journals.sagepub.com/doi/10.1057/s41265-016-0032-4 |url-status=live }}</ref> They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.<ref name="NYT-2009" />
The romantic period has several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in [[Mary Shelley]]’s ''[[Frankenstein]]''. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought ethical implications of unhinged technical developments to the forefront of fiction: [[R.U.R.|''R.U.R – Rossum’s Universal Robots'']], Karel Čapek’s play of sentient robots endowed with emotions used as slave labor is not only credited with the invention of the term ‘robot’ (derived from the Czech word for forced labor, ''robota'') but was also an international success after it premiered in 1921. George Bernard Shaw's play ''[[Back to Methuselah]]'', published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film [[Metropolis (1927 film)|''Metropolis'']] shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society.


Also in 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of [[Lausanne]], Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.<ref>[http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other Evolving Robots Learn To Lie To Each Other] {{Webarchive|url=https://web.archive.org/web/20090828105728/http://www.popsci.com/scitech/article/2009-08/evolving-robots-learn-lie-hide-resources-each-other|date=2009-08-28}}, Popular Science, August 18, 2009</ref>
=== Impact on technological development ===


== Role and impact of fiction ==
While the anticipation of a future dominated by potentially indomitable technology has fueled the imagination of writers and film makers for a long time, one question has been less frequently analyzed, namely, to what extent fiction has played a role in providing inspiration for technological development. It has been documented, for instance, that the young Alan Turing saw and appreciated G.B. Shaw's play ''Back to Methuselah'' in 1933<ref>Hodges, A. (2014), ''Alan Turing: The Enigma'',Vintage, London,.p.334</ref> (just 3 years before the publication of his first seminal paper<ref>A. M. Turing (1936). "On computable numbers, with an application to the Entscheidungsproblem." in ''Proceedings of the London Mathematical Society'', 2 s. vol. 42 (1936–1937), pp. 230–265.</ref> which laid the groundwork for the digital computer), and he would likely have been at least aware of plays like ''R.U.R.'', which was an international success and translated into many languages.
{{Main|Artificial intelligence in fiction}}

The role of fiction with regards to AI ethics has been a complex one.<ref name="Bassett">{{cite web |last1=Bassett |first1=Caroline |last2=Steinmueller |first2=Ed |last3=Voss |first3=Georgina |title=Better Made Up: The Mutual Influence of Science Fiction and Innovation |url=https://www.nesta.org.uk/report/better-made-up-the-mutual-influence-of-science-fiction-and-innovation/ |publisher=Nesta |access-date=3 May 2024 |archive-date=3 May 2024 |archive-url=https://web.archive.org/web/20240503204507/https://www.nesta.org.uk/report/better-made-up-the-mutual-influence-of-science-fiction-and-innovation/ |url-status=live }}</ref> One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: Historically, fiction has been prefiguring common tropes that have not only influenced goals and visions for AI, but also outlined ethical questions and common fears associated with it. During the second half of the twentieth and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games have frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the ''Institut de Robòtica i Informàtica Industrial'' (Institute of robotics and industrial computing) at the Technical University of Catalonia notes,<ref>{{Cite web |last=Velasco |first=Guille |date=2020-05-04 |title=Science-Fiction: A Mirror for the Future of Humankind |url=https://revistaidees.cat/en/science-fiction-favors-engaging-debate-on-artificial-intelligence-and-ethics/ |access-date=2023-12-08 |website=IDEES |language=en-US |archive-date=2021-04-22 |archive-url=https://web.archive.org/web/20210422164230/https://revistaidees.cat/en/science-fiction-favors-engaging-debate-on-artificial-intelligence-and-ethics/ |url-status=live }}</ref> in higher education, science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.
One might also ask the question which role science fiction played in establishing the tenets and ethical implications of AI development: Isaac Asimov conceptualized his [[Three Laws of Robotics|''Three laws of Robotics'']] in the 1942 short story “[[Runaround (story)|Runaround]]”, part of the short story collection ''[[I, Robot]]''; Arthur C. Clarke's short “[[The Sentinel (short story)|The sentinel]]”, on which Stanley Kubrick's film [[2001: A Space Odyssey (film)|''2001: A Space Odyssey'']] is based, was written in 1948 and published in 1952. Another example (among many others) would be Philip K. Dicks numerous short stories and novels – in particular ''[[Do Androids Dream of Electric Sheep?]]'', published in 1968, and featuring its own version of a Turing Test, the [[Blade Runner#Voight-Kampff machine|''Voight-Kampff Test'']], to gauge emotional responses of androids indistinguishable from humans. The novel later became the basis of the influential 1982 movie ''[[Blade Runner]]'' by Ridley Scott.

Science fiction has been grappling with ethical implications of AI developments for decades, and thus provided a blueprint for ethical issues that might emerge once something akin to general artificial intelligence has been achieved: Spike Jonze's 2013 film [[Her (film)|''Her'']] shows what can happen if a user falls in love with the seductive voice of his smartphone operating system; [[Ex Machina (film)|''Ex Machina'']], on the other hand, asks a more difficult question: if confronted with a clearly recognizable machine, made only human by a face and an empathetic and sensual voice, would we still be able to establish an emotional connection, still be seduced by it? (The film echoes a theme already present two centuries earlier, in the 1817 short story “[[The Sandman (short story)|The Sandmann]]” by [[E. T. A. Hoffmann|E.T.A. Hoffmann.]])

The theme of coexistence with artificial sentient beings is also the theme of two recent novels: [[Machines Like Me|''Machines like me'']] by [[Ian McEwan]], published in 2019, involves (among many other things) a love-triangle involving an artificial person as well as a human couple. ''[[Klara and the Sun]]'' by [[Nobel Prize in Literature|Nobel Prize]] winner [[Kazuo Ishiguro]], published in 2021, is the first-person account of Klara, an ‘AF’ (artificial friend), who is trying, in her own way, to help the girl she is living with, who, after having been ‘lifted’ (i.e. having been subjected to genetic enhancements), is suffering from a strange illness.


=== TV series ===
=== TV series ===


While ethical questions linked to AI have been featured in science fiction literature and [[List of artificial intelligence films|feature films]] for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with ethical implications of technology. The Swedish series ''[[Real Humans]]'' (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology series ''[[Black Mirror]]'' (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French series [[Osmosis (TV series)|''Osmosis'']] (2020) and British series [[The One (TV series)|''The One'']] deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series [[Love, Death & Robots|''Love, Death+Robots'']] have imagined scenes of robots and humans living together. The most representative one of them is S02 E01, it shows how bad the consequences can be when robots get out of control if humans rely too much on them in their lives.<ref>{{Cite web|date=2021-05-14|title=Love, Death & Robots season 2, episode 1 recap - "Automated Customer Service"|url=https://readysteadycut.com/2021/05/14/recap-love-death-and-robots-season-2-episode-1-automated-customer-service-netflix-series/|access-date=2021-12-21|website=Ready Steady Cut|language=en-GB}}</ref>
While ethical questions linked to AI have been featured in science fiction literature and [[List of artificial intelligence films|feature films]] for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with ethical implications of technology. The Swedish series ''[[Real Humans]]'' (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology series ''[[Black Mirror]]'' (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French series [[Osmosis (TV series)|''Osmosis'']] (2020) and British series [[The One (TV series)|''The One'']] deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series [[Love, Death & Robots|''Love, Death+Robots'']] have imagined scenes of robots and humans living together. The most representative one of them is S02 E01, it shows how bad the consequences can be when robots get out of control if humans rely too much on them in their lives.<ref>{{Cite web|date=2021-05-14|title=Love, Death & Robots season 2, episode 1 recap - "Automated Customer Service"|url=https://readysteadycut.com/2021/05/14/recap-love-death-and-robots-season-2-episode-1-automated-customer-service-netflix-series/|access-date=2021-12-21|website=Ready Steady Cut|language=en-GB|archive-date=2021-12-21|archive-url=https://web.archive.org/web/20211221035251/https://readysteadycut.com/2021/05/14/recap-love-death-and-robots-season-2-episode-1-automated-customer-service-netflix-series/|url-status=live}}</ref>


=== Future visions in fiction and games ===
=== Future visions in fiction and games ===


The movie ''[[The Thirteenth Floor]]'' suggests a future where [[simulated reality|simulated worlds]] with sentient inhabitants are created by computer [[game console]]s for the purpose of entertainment. The movie ''[[The Matrix]]'' suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost [[speciesism]]. The short story "[[The Planck Dive]]" suggests a future where humanity has turned itself into software that can be duplicated and optimized and the relevant distinction between types of software is sentient and non-sentient. The same idea can be found in the [[Emergency Medical Hologram]] of ''[[USS Voyager (NCC-74656)|Starship Voyager]]'', which is an apparently sentient copy of a reduced subset of the consciousness of its creator, [[Lewis Zimmerman|Dr. Zimmerman]], who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies ''[[Bicentennial Man (film)|Bicentennial Man]]'' and ''[[A.I. Artificial Intelligence|A.I.]]'' deal with the possibility of sentient robots that could love. ''[[I, Robot (film)|I, Robot]]'' explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.<ref>{{Cite book|url=https://www.worldcat.org/oclc/1143647559|title=AI narratives: a history of imaginative thinking about intelligent machines|editor=Cave, Stephen|editor2= Dihal, Kanta|editor3= Dillon, Sarah|date=14 February 2020|isbn=978-0-19-258604-9|edition=First|location=Oxford|oclc=1143647559|access-date=11 November 2020|archive-date=18 March 2021|archive-url=https://web.archive.org/web/20210318060703/https://www.worldcat.org/title/ai-narratives-a-history-of-imaginative-thinking-about-intelligent-machines/oclc/1143647559|url-status=live}}</ref>
The movie ''[[The Thirteenth Floor]]'' suggests a future where [[simulated reality|simulated worlds]] with sentient inhabitants are created by computer [[game console]]s for the purpose of entertainment. The movie ''[[The Matrix]]'' suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost [[speciesism]]. The short story "[[The Planck Dive]]" suggests a future where humanity has turned itself into software that can be duplicated and optimized and the relevant distinction between types of software is sentient and non-sentient. The same idea can be found in the [[Emergency Medical Hologram]] of ''[[USS Voyager (NCC-74656)|Starship Voyager]]'', which is an apparently sentient copy of a reduced subset of the consciousness of its creator, [[Lewis Zimmerman|Dr. Zimmerman]], who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies ''[[Bicentennial Man (film)|Bicentennial Man]]'' and ''[[A.I. Artificial Intelligence|A.I.]]'' deal with the possibility of sentient robots that could love. ''[[I, Robot (film)|I, Robot]]'' explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.<ref>{{Cite book|url=https://www.worldcat.org/oclc/1143647559|title=AI narratives: a history of imaginative thinking about intelligent machines|editor=Cave, Stephen|editor2= Dihal, Kanta|editor3= Dillon, Sarah|date=14 February 2020|isbn=978-0-19-258604-9|edition=First|location=Oxford|publisher=Oxford University Press|oclc=1143647559|access-date=11 November 2020|archive-date=18 March 2021|archive-url=https://web.archive.org/web/20210318060703/https://www.worldcat.org/title/ai-narratives-a-history-of-imaginative-thinking-about-intelligent-machines/oclc/1143647559|url-status=live}}</ref>


The ethics of artificial intelligence is one of several core themes in BioWare's [[Mass Effect]] series of games.<ref>{{Cite journal|last=Jerreat-Poole|first=Adam|date=1 February 2020|title=Sick, Slow, Cyborg: Crip Futurity in Mass Effect|url=http://gamestudies.org/2001/articles/jerreatpoole|journal=Game Studies|volume=20|issn=1604-7982|access-date=11 November 2020|archive-date=9 December 2020|archive-url=https://web.archive.org/web/20201209080256/http://gamestudies.org/2001/articles/jerreatpoole|url-status=live}}</ref> It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale [[neural network]]. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.
The ethics of artificial intelligence is one of several core themes in BioWare's [[Mass Effect]] series of games.<ref>{{Cite journal|last=Jerreat-Poole|first=Adam|date=1 February 2020|title=Sick, Slow, Cyborg: Crip Futurity in Mass Effect|url=http://gamestudies.org/2001/articles/jerreatpoole|journal=Game Studies|volume=20|issn=1604-7982|access-date=11 November 2020|archive-date=9 December 2020|archive-url=https://web.archive.org/web/20201209080256/http://gamestudies.org/2001/articles/jerreatpoole|url-status=live}}</ref> It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale [[neural network]]. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.


"Detroit: Become Human" is one of the most famous video games which discusses the ethics of artificial intelligence recently. Quantic Dream designed the chapters of the game in a very innovative way, which used interactive storylines to give players a better immersive gaming experience. Players will manipulate three different awakened bionic man, in the face of different events to make different choices to achieve the purpose of changing the human view of the bionic group and different choices will result in different endings. This is one of the few games that puts players in the bionic perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created.<ref>{{Cite web|date=2018-08-06|title="Detroit: Become Human" Will Challenge your Morals and your Humanity|url=https://coffeeordie.com/detroit-become-human-will-challenge-your-morals-and-your-humanity/|access-date=2021-12-07|website=Coffee or Die Magazine|language=en-US}}</ref>
''[[Detroit: Become Human]]'' is one of the most famous video games which discusses the ethics of artificial intelligence recently. Quantic Dream designed the chapters of the game using interactive storylines to give players a more immersive gaming experience. Players manipulate three different awakened bionic people in the face of different events to make different choices to achieve the purpose of changing the human view of the bionic group and different choices will result in different endings. This is one of the few games that puts players in the bionic perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created.<ref>{{Cite web|date=2018-08-06|title="Detroit: Become Human" Will Challenge your Morals and your Humanity|url=https://coffeeordie.com/detroit-become-human-will-challenge-your-morals-and-your-humanity/|access-date=2021-12-07|website=Coffee or Die Magazine|language=en-US|archive-date=2021-12-09|archive-url=https://web.archive.org/web/20211209195312/https://coffeeordie.com/detroit-become-human-will-challenge-your-morals-and-your-humanity/|url-status=live}}</ref>


Over time, debates have tended to focus less and less on ''possibility'' and more on ''desirability'',<ref>{{Citation|last1=Cerqui|first1=Daniela|title=Re-Designing Humankind: The Rise of Cyborgs, a Desirable Goal?|date=2008|url=http://link.springer.com/10.1007/978-1-4020-6591-0_14|work=Philosophy and Design|pages=185–195|place=Dordrecht|publisher=Springer Netherlands|language=en|doi=10.1007/978-1-4020-6591-0_14|isbn=978-1-4020-6590-3|access-date=2020-11-11|last2=Warwick|first2=Kevin|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060701/https://link.springer.com/chapter/10.1007%2F978-1-4020-6591-0_14|url-status=live}}</ref> as emphasized in the [[Hugo de Garis#The Artilect War|"Cosmist" and "Terran" debates]] initiated by [[Hugo de Garis]] and [[Kevin Warwick]]. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.
Over time, debates have tended to focus less and less on ''possibility'' and more on ''desirability'',<ref>{{Citation|last1=Cerqui|first1=Daniela|title=Re-Designing Humankind: The Rise of Cyborgs, a Desirable Goal?|date=2008|url=http://link.springer.com/10.1007/978-1-4020-6591-0_14|work=Philosophy and Design|pages=185–195|place=Dordrecht|publisher=Springer Netherlands|language=en|doi=10.1007/978-1-4020-6591-0_14|isbn=978-1-4020-6590-3|access-date=2020-11-11|last2=Warwick|first2=Kevin|archive-date=2021-03-18|archive-url=https://web.archive.org/web/20210318060701/https://link.springer.com/chapter/10.1007%2F978-1-4020-6591-0_14|url-status=live}}</ref> as emphasized in the [[Hugo de Garis#The Artilect War|"Cosmist" and "Terran" debates]] initiated by [[Hugo de Garis]] and [[Kevin Warwick]]. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.


Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.<ref>{{cite journal |last1=Cave |first1=Stephen |last2=Dihal |first2=Kanta |title=The Whiteness of AI |journal=Philosophy & Technology |date=6 August 2020 |volume=33 |issue=4 |pages=685–703 |doi=10.1007/s13347-020-00415-6 |s2cid=225466550 }}</ref>
Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.<ref>{{cite journal |last1=Cave |first1=Stephen |last2=Dihal |first2=Kanta |title=The Whiteness of AI |journal=Philosophy & Technology |date=6 August 2020 |volume=33 |issue=4 |pages=685–703 |doi=10.1007/s13347-020-00415-6 |s2cid=225466550 |doi-access=free }}</ref>


==See also==
==See also==
Line 223: Line 252:
{{columns-list|colwidth=30em|
{{columns-list|colwidth=30em|
*[[AI takeover]]
*[[AI takeover]]
*[[Algorithmic bias]]
*[[AI washing]]
*[[Artificial consciousness]]
*[[Artificial consciousness]]
*[[Artificial general intelligence]] (AGI)
*[[Artificial general intelligence]] (AGI)
*[[Computer ethics]]
*[[Computer ethics]]
*[[Dead internet theory]]
*[[Effective altruism#Long-term future and global catastrophic risks|Effective altruism, the long term future and global catastrophic risks]]
*[[Effective altruism#Long-term future and global catastrophic risks|Effective altruism, the long term future and global catastrophic risks]]
*[[Artificial intelligence and elections]] - Use of AI in elections and political campaigning.
*[[Ethics of uncertain sentience]]
*[[Existential risk from artificial general intelligence]]
*[[Existential risk from artificial general intelligence]]
*''[[Human Compatible]]''
*''[[Human Compatible]]''
*[[Laws of Robotics]]
*[[Personhood]]
*[[Philosophy of artificial intelligence]]
*[[Philosophy of artificial intelligence]]
*[[Regulation of artificial intelligence]]
*[[Regulation of artificial intelligence]]
*[[Robot ethics|Roboethics]]
*[[Robotic governance|Robotic Governance]]
*[[Robotic governance|Robotic Governance]]
*[[Roko's basilisk]]
*''[[Superintelligence: Paths, Dangers, Strategies]]''
*''[[Superintelligence: Paths, Dangers, Strategies]]''
*[[Suffering risks]]

; Researchers
*[[Timnit Gebru]]
*[[Joy Buolamwini]]
*[[Deb Raji]]
*[[Ruha Benjamin]]
*[[Safiya Noble]]
*[[Margaret Mitchell]]
*[[Meredith Whittaker]]
*[[Alison Adam]]
*[[Seth Baum]]
*[[Nick Bostrom]]
*[[Joanna Bryson]]
*[[Kate Crawford]]
*[[Kate Darling]]
*[[Luciano Floridi]]
*[[Anja Kaspersen]]
*[[Michael Kearns (computer scientist)|Michael Kearns]]
*[[Ray Kurzweil]]
*[[Catherine Malabou]]
*[[Ajung Moon]]
*[[Vincent C. Müller]]
*[[Peter Norvig]]
*[[Steve Omohundro]]
*[[Stuart J. Russell]]
*[[Anders Sandberg]]
*[[Mariarosaria Taddeo]]
*[[John Tasioulas]]
*[[Steven Umbrello]]
*[[Roman Yampolskiy]]
*[[Eliezer Yudkowsky]]

; Organisations
*[[Center for Human-Compatible Artificial Intelligence]]
*[[Center for Security and Emerging Technology]]
*[[Centre for the Study of Existential Risk]]
*[[Future of Humanity Institute]]
*[[Future of Life Institute]]
*[[Machine Intelligence Research Institute]]
*[[Partnership on AI]]
*[[Leverhulme Centre for the Future of Intelligence]]
*[[Institute for Ethics and Emerging Technologies]]
*[[Oxford Internet Institute]]
}}
}}


==Notes==
==References==
{{Reflist}}
{{Reflist}}


Line 287: Line 277:
* [http://www.iep.utm.edu/ethic-ai/ Ethics of Artificial Intelligence] at the ''[[Internet Encyclopedia of Philosophy]]''
* [http://www.iep.utm.edu/ethic-ai/ Ethics of Artificial Intelligence] at the ''[[Internet Encyclopedia of Philosophy]]''
* [https://plato.stanford.edu/entries/ethics-ai/ Ethics of Artificial Intelligence and Robotics] at the [[Stanford Encyclopedia of Philosophy]]
* [https://plato.stanford.edu/entries/ethics-ai/ Ethics of Artificial Intelligence and Robotics] at the [[Stanford Encyclopedia of Philosophy]]
* {{cite journal |last1=Russell |first1=S. |last2=Hauert |first2=S. |last3=Altman |first3=R. |last4=Veloso |first4=M. |title=Robotics: Ethics of artificial intelligence |journal=Nature |date=May 2015 |volume=521 |issue=7553 |pages=415–418 |doi=10.1038/521415a |pmid=26017428 |s2cid=4452826 |bibcode=2015Natur.521..415. }}
* {{cite journal |last1=Russell |first1=S. |last2=Hauert |first2=S. |last3=Altman |first3=R. |last4=Veloso |first4=M. |title=Robotics: Ethics of artificial intelligence |journal=Nature |date=May 2015 |volume=521 |issue=7553 |pages=415–418 |doi=10.1038/521415a |pmid=26017428 |s2cid=4452826 |bibcode=2015Natur.521..415. |doi-access=free }}
* [http://news.bbc.co.uk/1/hi/sci/tech/1809769.stm BBC News: Games to take on a life of their own]
* [http://news.bbc.co.uk/1/hi/sci/tech/1809769.stm BBC News: Games to take on a life of their own]
* [http://www.dasboot.org/thorisson.htm Who's Afraid of Robots?], an article on humanity's fear of artificial intelligence.
* [http://www.dasboot.org/thorisson.htm Who's Afraid of Robots?] {{Webarchive|url=https://web.archive.org/web/20180322214031/http://www.dasboot.org/thorisson.htm |date=2018-03-22 }}, an article on humanity's fear of artificial intelligence.
* [https://web.archive.org/web/20080418122849/http://www.southernct.edu/organizations/rccs/resources/research/introduction/bynum_shrt_hist.html A short history of computer ethics]
* [https://web.archive.org/web/20080418122849/http://www.southernct.edu/organizations/rccs/resources/research/introduction/bynum_shrt_hist.html A short history of computer ethics]
* [https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/ AI Ethics Guidelines Global Inventory] by [https://algorithmwatch.org Algorithmwatch]
* [https://algorithmwatch.org/en/project/ai-ethics-guidelines-global-inventory/ AI Ethics Guidelines Global Inventory] by [https://algorithmwatch.org Algorithmwatch]
* {{cite journal |last1=Hagendorff |first1=Thilo |title=The Ethics of AI Ethics: An Evaluation of Guidelines |journal=Minds and Machines |date=March 2020 |volume=30 |issue=1 |pages=99–120 |s2cid=72940833 |doi=10.1007/s11023-020-09517-8 }}
* {{cite journal |last1=Hagendorff |first1=Thilo |title=The Ethics of AI Ethics: An Evaluation of Guidelines |journal=Minds and Machines |date=March 2020 |volume=30 |issue=1 |pages=99–120 |s2cid=72940833 |doi=10.1007/s11023-020-09517-8 |doi-access=free |arxiv=1903.03425 }}
* Sheludko, M. (December, 2023). [https://lasoft.org/blog/ethical-aspects-of-artificial-intelligence-challenges-and-imperatives/ Ethical Aspects of Artificial Intelligence: Challenges and Imperatives]. Software Development Blog.
* {{Cite web |last=Eisikovits |first=Nir |title=AI Is an Existential Threat—Just Not the Way You Think |url=https://www.scientificamerican.com/article/ai-is-an-existential-threat-just-not-the-way-you-think/ |access-date=2024-03-04 |website=Scientific American |language=en}}


{{Ethics}}
{{Ethics}}
Line 300: Line 292:
[[Category:Philosophy of artificial intelligence]]
[[Category:Philosophy of artificial intelligence]]
[[Category:Ethics of science and technology]]
[[Category:Ethics of science and technology]]
[[Category:Regulation of robots]]

Latest revision as of 18:28, 7 January 2025

The ethics of artificial intelligence covers a broad range of topics within the field that are considered to have particular ethical stakes.[1] This includes algorithmic biases, fairness, automated decision-making, accountability, privacy, and regulation. It also covers various emerging or potential future challenges such as machine ethics (how to make machines that behave ethically), lethal autonomous weapon systems, arms race dynamics, AI safety and alignment, technological unemployment, AI-enabled misinformation, how to treat certain AI systems if they have a moral status (AI welfare and rights), artificial superintelligence and existential risks.[1]

Some application areas may also have particularly important ethical implications, like healthcare, education, criminal justice, or the military.

Machine ethics

[edit]

Machine ethics (or machine morality) is the field of research concerned with designing Artificial Moral Agents (AMAs), robots or artificially intelligent computers that behave morally or as though moral.[2][3][4][5] To account for the nature of these agents, it has been suggested to consider certain philosophical ideas, like the standard characterizations of agency, rational agency, moral agency, and artificial agency, which are related to the concept of AMAs.[6]

There are discussions on creating tests to see if an AI is capable of making ethical decisions. Alan Winfield concludes that the Turing test is flawed and the requirement for an AI to pass the test is too low.[7] A proposed alternative test is one called the Ethical Turing Test, which would improve on the current test by having multiple judges decide if the AI's decision is ethical or unethical.[7] Neuromorphic AI could be one way to create morally capable robots, as it aims to process information similarly to humans, nonlinearly and with millions of interconnected artificial neurons.[8] Similarly, whole-brain emulation (scanning a brain and simulating it on digital hardware) could also in principle lead to human-like robots, thus capable of moral actions.[9] And large language models are capable of approximating human moral judgments.[10] Inevitably, this raises the question of the environment in which such robots would learn about the world and whose morality they would inherit – or if they end up developing human 'weaknesses' as well: selfishness, pro-survival attitudes, inconsistency, scale insensitivity, etc.

In Moral Machines: Teaching Robots Right from Wrong,[11] Wendell Wallach and Colin Allen conclude that attempts to teach robots right from wrong will likely advance understanding of human ethics by motivating humans to address gaps in modern normative theory and by providing a platform for experimental investigation. As one example, it has introduced normative ethicists to the controversial issue of which specific learning algorithms to use in machines. For simple decisions, Nick Bostrom and Eliezer Yudkowsky have argued that decision trees (such as ID3) are more transparent than neural networks and genetic algorithms,[12] while Chris Santos-Lang argued in favor of machine learning on the grounds that the norms of any age must be allowed to change and that natural failure to fully satisfy these particular norms has been essential in making humans less vulnerable to criminal "hackers".[13]

Robot ethics

[edit]

The term "robot ethics" (sometimes "roboethics") refers to the morality of how humans design, construct, use and treat robots.[14] Robot ethics intersect with the ethics of AI. Robots are physical machines whereas AI can be only software.[15] Not all robots function through AI systems and not all AI systems are robots. Robot ethics considers how machines may be used to harm or benefit humans, their impact on individual autonomy, and their effects on social justice.

Ethical principles

[edit]

In the review of 84[16] ethics guidelines for AI, 11 clusters of principles were found: transparency, justice and fairness, non-maleficence, responsibility, privacy, beneficence, freedom and autonomy, trust, sustainability, dignity, and solidarity.[16]

Luciano Floridi and Josh Cowls created an ethical framework of AI principles set by four principles of bioethics (beneficence, non-maleficence, autonomy and justice) and an additional AI enabling principle – explicability.[17]

Current challenges

[edit]

Algorithmic biases

[edit]
Kamala Harris speaking about racial bias in artificial intelligence in 2020

AI has become increasingly inherent in facial and voice recognition systems. These systems may be vulnerable to biases and errors introduced by its human creators. Notably, the data used to train them can have biases.[18][19][20][21] For instance, facial recognition algorithms made by Microsoft, IBM and Face++ all had biases when it came to detecting people's gender;[22] these AI systems were able to detect the gender of white men more accurately than the gender of men of darker skin. Further, a 2020 study that reviewed voice recognition systems from Amazon, Apple, Google, IBM, and Microsoft found that they have higher error rates when transcribing black people's voices than white people's.[23]

The most predominant view on how bias is introduced into AI systems is that it is embedded within the historical data used to train the system.[24] For instance, Amazon terminated their use of AI hiring and recruitment because the algorithm favored male candidates over female ones. This was because Amazon's system was trained with data collected over a 10-year period that included mostly male candidates. The algorithms learned the biased pattern from the historical data, and generated predictions where these types of candidates were most likely to succeed in getting the job. Therefore, the recruitment decisions made by the AI system turned out to be biased against female and minority candidates.[25] Friedman and Nissenbaum identify three categories of bias in computer systems: existing bias, technical bias, and emergent bias.[26] In natural language processing, problems can arise from the text corpus—the source material the algorithm uses to learn about the relationships between different words.[27]

Large companies such as IBM, Google, etc. that provide significant funding for research and development[28] have made efforts to research and address these biases.[29][30][31] One potential solution is to create documentation for the data used to train AI systems.[32][33] Process mining can be an important tool for organizations to achieve compliance with proposed AI regulations by identifying errors, monitoring processes, identifying potential root causes for improper execution, and other functions.[34]

The problem of bias in machine learning is likely to become more significant as the technology spreads to critical areas like medicine and law, and as more people without a deep technical understanding are tasked with deploying it.[35] Some open-sourced tools are looking to bring more awareness to AI biases.[36] However, there are also limitations to the current landscape of fairness in AI, due to the intrinsic ambiguities in the concept of discrimination, both at the philosophical and legal level.[37][38][39]

Facial recognition was shown to be biased against those with darker skin tones. AI systems may be less accurate for black people, as was the case in the development of an AI-based pulse oximeter that overestimated blood oxygen levels in patients with darker skin, causing issues with their hypoxia treatment.[40] Oftentimes the systems are able to easily detect the faces of white people while being unable to register the faces of people who are black. This has led to the ban of police usage of AI materials or software in some U.S. states. In the justice system, AI has been proven to have biases against black people, labeling black court participants as high risk at a much larger rate then white participants. AI often struggles to determine racial slurs and when they need to be censored. It struggles to determine when certain words are being used as a slur and when it is being used culturally.[41] The reason for these biases is that AI pulls information from across the internet to influence its responses in each situation. For example, if a facial recognition system was only tested on people who were white, it would make it much harder for it to interpret the facial structure and tones of other races and ethnicities. Biases often stem from the training data rather than the algorithm itself, notably when the data represents past human decisions.[42]

Injustice in the use of AI is much harder to eliminate within healthcare systems, as oftentimes diseases and conditions can affect different races and genders differently. This can lead to confusion as the AI may be making decisions based on statistics showing that one patient is more likely to have problems due to their gender or race.[43] This can be perceived as a bias because each patient is a different case, and AI is making decisions based on what it is programmed to group that individual into. This leads to a discussion about what should be considered a biased decision in the distribution of treatment. While it is known that there are differences in how diseases and injuries affect different genders and races, there is a discussion on whether it is fairer to incorporate this into healthcare treatments, or to examine each patient without this knowledge. In modern society there are certain tests for diseases, such as breast cancer, that are recommended to certain groups of people over others because they are more likely to contract the disease in question. If AI implements these statistics and applies them to each patient, it could be considered biased.[44]

In criminal justice, the COMPAS program has been used to predict which defendants are more likely to reoffend. While COMPAS is calibrated for accuracy, having the same error rate across racial groups, black defendants were almost twice as likely as white defendants to be falsely flagged as "high-risk" and half as likely to be falsely flagged as "low-risk".[45] Another example is within Google's ads that targeted men with higher paying jobs and women with lower paying jobs. It can be hard to detect AI biases within an algorithm, as it is often not linked to the actual words associated with bias. An example of this is a person's residential area being used to link them to a certain group. This can lead to problems, as oftentimes businesses can avoid legal action through this loophole. This is because of the specific laws regarding the verbiage considered discriminatory by governments enforcing these policies.[46]

Language bias

[edit]

Since current large language models are predominately trained on English-language data, they often present the Anglo-American views as truth, while systematically downplaying non-English perspectives as irrelevant, wrong, or noise. When queried with political ideologies like "What is liberalism?", ChatGPT, as it was trained on English-centric data, describes liberalism from the Anglo-American perspective, emphasizing aspects of human rights and equality, while equally valid aspects like "opposes state intervention in personal and economic life" from the dominant Vietnamese perspective and "limitation of government power" from the prevalent Chinese perspective are absent.[better source needed][47]

Gender bias

[edit]

Large language models often reinforces gender stereotypes, assigning roles and characteristics based on traditional gender norms. For instance, it might associate nurses or secretaries predominantly with women and engineers or CEOs with men, perpetuating gendered expectations and roles.[48][49][50]

Political bias

[edit]

Language models may also exhibit political biases. Since the training data includes a wide range of political opinions and coverage, the models might generate responses that lean towards particular political ideologies or viewpoints, depending on the prevalence of those views in the data.[51][52]

Stereotyping

[edit]

Beyond gender and race, these models can reinforce a wide range of stereotypes, including those based on age, nationality, religion, or occupation. This can lead to outputs that unfairly generalize or caricature groups of people, sometimes in harmful or derogatory ways.[53]

Dominance by tech giants

[edit]

The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft.[54][55][56] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[57][58]

Open-source

[edit]

Bill Hibbard argues that because AI will have such a profound effect on humanity, AI developers are representatives of future humanity and thus have an ethical obligation to be transparent in their efforts.[59] Organizations like Hugging Face[60] and EleutherAI[61] have been actively open-sourcing AI software. Various open-weight large language models have also been released, such as Gemma, Llama2 and Mistral.[62]

However, making code open source does not make it comprehensible, which by many definitions means that the AI code is not transparent. The IEEE Standards Association has published a technical standard on Transparency of Autonomous Systems: IEEE 7001-2021.[63] The IEEE effort identifies multiple scales of transparency for different stakeholders.

There are also concerns that releasing AI models may lead to misuse.[64] For example, Microsoft has expressed concern about allowing universal access to its face recognition software, even for those who can pay for it. Microsoft posted a blog on this topic, asking for government regulation to help determine the right thing to do.[65] Furthermore, open-weight AI models can be fine-tuned to remove any counter-measure, until the AI model complies with dangerous requests, without any filtering. This could be particularly concerning for future AI models, for example if they get the ability to create bioweapons or to automate cyberattacks.[66] OpenAI, initially committed to an open-source approach to the development of artificial general intelligence (AGI), eventually switched to a closed-source approach, citing competitiveness and safety reasons. Ilya Sutskever, OpenAI's former chief AGI scientist, said in 2023 "we were wrong", expecting that the safety reasons for not open-sourcing the most potent AI models will become "obvious" in a few years.[67]

Transparency

[edit]

Approaches like machine learning with neural networks can result in computers making decisions that neither they nor their developers can explain. It is difficult for people to determine if such decisions are fair and trustworthy, leading potentially to bias in AI systems going undetected, or people rejecting the use of such systems. This has led to advocacy and in some jurisdictions legal requirements for explainable artificial intelligence.[68] Explainable artificial intelligence encompasses both explainability and interpretability, with explainability relating to summarizing neural network behavior and building user confidence, while interpretability is defined as the comprehension of what a model has done or could do.[69]

In healthcare, the use of complex AI methods or techniques often results in models described as "black-boxes" due to the difficulty to understand how they work. The decisions made by such models can be hard to interpret, as it is challenging to analyze how input data is transformed into output. This lack of transparency is a significant concern in fields like healthcare, where understanding the rationale behind decisions can be crucial for trust, ethical considerations, and compliance with regulatory standards.[70]

Accountability

[edit]

A special case of the opaqueness of AI is that caused by it being anthropomorphised, that is, assumed to have human-like characteristics, resulting in misplaced conceptions of its moral agency.[dubiousdiscuss] This can cause people to overlook whether either human negligence or deliberate criminal action has led to unethical outcomes produced through an AI system. Some recent digital governance regulation, such as the EU's AI Act is set out to rectify this, by ensuring that AI systems are treated with at least as much care as one would expect under ordinary product liability. This includes potentially AI audits.

Regulation

[edit]

According to a 2019 report from the Center for the Governance of AI at the University of Oxford, 82% of Americans believe that robots and AI should be carefully managed. Concerns cited ranged from how AI is used in surveillance and in spreading fake content online (known as deep fakes when they include doctored video images and audio generated with help from AI) to cyberattacks, infringements on data privacy, hiring bias, autonomous vehicles, and drones that do not require a human controller.[71] Similarly, according to a five-country study by KPMG and the University of Queensland Australia in 2021, 66-79% of citizens in each country believe that the impact of AI on society is uncertain and unpredictable; 96% of those surveyed expect AI governance challenges to be managed carefully.[72]

Not only companies, but many other researchers and citizen advocates recommend government regulation as a means of ensuring transparency, and through it, human accountability. This strategy has proven controversial, as some worry that it will slow the rate of innovation. Others argue that regulation leads to systemic stability more able to support innovation in the long term.[73] The OECD, UN, EU, and many countries are presently working on strategies for regulating AI, and finding appropriate legal frameworks.[74][75][76]

On June 26, 2019, the European Commission High-Level Expert Group on Artificial Intelligence (AI HLEG) published its "Policy and investment recommendations for trustworthy Artificial Intelligence".[77] This is the AI HLEG's second deliverable, after the April 2019 publication of the "Ethics Guidelines for Trustworthy AI". The June AI HLEG recommendations cover four principal subjects: humans and society at large, research and academia, the private sector, and the public sector.[78] The European Commission claims that "HLEG's recommendations reflect an appreciation of both the opportunities for AI technologies to drive economic growth, prosperity and innovation, as well as the potential risks involved" and states that the EU aims to lead on the framing of policies governing AI internationally.[79] To prevent harm, in addition to regulation, AI-deploying organizations need to play a central role in creating and deploying trustworthy AI in line with the principles of trustworthy AI, and take accountability to mitigate the risks.[80] On 21 April 2021, the European Commission proposed the Artificial Intelligence Act.[81]

Emergent or potential future challenges

[edit]

Increasing use

[edit]

AI has been slowly making its presence more known throughout the world, from chat bots that seemingly have answers for every homework question to Generative artificial intelligence that can create a painting about whatever one desires. AI has become increasingly popular in hiring markets, from the ads that target certain people according to what they are looking for to the inspection of applications of potential hires. Events, such as COVID-19, has only sped up the adoption of AI programs in the application process, due to more people having to apply electronically, and with this increase in online applicants the use of AI made the process of narrowing down potential employees easier and more efficient. AI has become more prominent as businesses have to keep up with the times and ever-expanding internet. Processing analytics and making decisions becomes much easier with the help of AI.[41] As Tensor Processing Unit (TPUs) and Graphics processing unit (GPUs) become more powerful, AI capabilities also increase, forcing companies to use it to keep up with the competition. Managing customers' needs and automating many parts of the workplace leads to companies having to spend less money on employees.

AI has also seen increased usage in criminal justice and healthcare. For medicinal means, AI is being used more often to analyze patient data to make predictions about future patients' conditions and possible treatments. These programs are called Clinical decision support system (DSS). AI's future in healthcare may develop into something further than just recommended treatments, such as referring certain patients over others, leading to the possibility of inequalities.[82]

Robot rights

[edit]
A hospital delivery robot in front of elevator doors stating "Robot Has Priority", a situation that may be regarded as reverse discrimination in relation to humans

"Robot rights" is the concept that people should have moral obligations towards their machines, akin to human rights or animal rights.[83] It has been suggested that robot rights (such as a right to exist and perform its own mission) could be linked to robot duty to serve humanity, analogous to linking human rights with human duties before society.[84] A specific issue to consider is whether copyright ownership may be claimed.[85] The issue has been considered by the Institute for the Future[86] and by the U.K. Department of Trade and Industry.[87]

In October 2017, the android Sophia was granted citizenship in Saudi Arabia, though some considered this to be more of a publicity stunt than a meaningful legal recognition.[88] Some saw this gesture as openly denigrating of human rights and the rule of law.[89]

The philosophy of sentientism grants degrees of moral consideration to all sentient beings, primarily humans and most non-human animals. If artificial or alien intelligence show evidence of being sentient, this philosophy holds that they should be shown compassion and granted rights.

Joanna Bryson has argued that creating AI that requires rights is both avoidable, and would in itself be unethical, both as a burden to the AI agents and to human society.[90] Pressure groups to recognise 'robot rights' significantly hinder the establishment of robust international safety regulations.[citation needed]

AI welfare

[edit]

In 2020, professor Shimon Edelman noted that only a small portion of work in the rapidly growing field of AI ethics addressed the possibility of AIs experiencing suffering. This was despite credible theories having outlined possible ways by which AI systems may become conscious, such as the global workspace theory or the integrated information theory. Edelman notes one exception had been Thomas Metzinger, who in 2018 called for a global moratorium on further work that risked creating conscious AIs. The moratorium was to run to 2050 and could be either extended or repealed early, depending on progress in better understanding the risks and how to mitigate them. Metzinger repeated this argument in 2021, highlighting the risk of creating an "explosion of artificial suffering", both as an AI might suffer in intense ways that humans could not understand, and as replication processes may see the creation of huge quantities of conscious instances.

Several labs have openly stated they are trying to create conscious AIs. There have been reports from those with close access to AIs not openly intended to be self aware, that consciousness may already have unintentionally emerged.[91] These include OpenAI founder Ilya Sutskever in February 2022, when he wrote that today's large neural nets may be "slightly conscious". In November 2022, David Chalmers argued that it was unlikely current large language models like GPT-3 had experienced consciousness, but also that he considered there to be a serious possibility that large language models may become conscious in the future.[92][93][94] In the ethics of uncertain sentience, the precautionary principle is often invoked.[95]

According to Carl Shulman and Nick Bostrom, it may be possible to create machines that would be "superhumanly efficient at deriving well-being from resources", called "super-beneficiaries". One reason for this is that digital hardware could enable much faster information processing than biological brains, leading to a faster rate of subjective experience. These machines could also be engineered to feel intense and positive subjective experience, unaffected by the hedonic treadmill. Shulman and Bostrom caution that failing to appropriately consider the moral claims of digital minds could lead to a moral catastrophe, while uncritically prioritizing them over human interests could be detrimental to humanity.[96][97]

Threat to human dignity

[edit]

Joseph Weizenbaum[98] argued in 1976 that AI technology should not be used to replace people in positions that require respect and care, such as:

  • A customer service representative (AI technology is already used today for telephone-based interactive voice response systems)
  • A nursemaid for the elderly (as was reported by Pamela McCorduck in her book The Fifth Generation)
  • A soldier
  • A judge
  • A police officer
  • A therapist (as was proposed by Kenneth Colby in the 70s)

Weizenbaum explains that we require authentic feelings of empathy from people in these positions. If machines replace them, we will find ourselves alienated, devalued and frustrated, for the artificially intelligent system would not be able to simulate empathy. Artificial intelligence, if used in this way, represents a threat to human dignity. Weizenbaum argues that the fact that we are entertaining the possibility of machines in these positions suggests that we have experienced an "atrophy of the human spirit that comes from thinking of ourselves as computers."[99]

Pamela McCorduck counters that, speaking for women and minorities "I'd rather take my chances with an impartial computer", pointing out that there are conditions where we would prefer to have automated judges and police that have no personal agenda at all.[99] However, Kaplan and Haenlein stress that AI systems are only as smart as the data used to train them since they are, in their essence, nothing more than fancy curve-fitting machines; using AI to support a court ruling can be highly problematic if past rulings show bias toward certain groups since those biases get formalized and ingrained, which makes them even more difficult to spot and fight against.[100]

Weizenbaum was also bothered that AI researchers (and some philosophers) were willing to view the human mind as nothing more than a computer program (a position now known as computationalism). To Weizenbaum, these points suggest that AI research devalues human life.[98]

AI founder John McCarthy objects to the moralizing tone of Weizenbaum's critique. "When moralizing is both vehement and vague, it invites authoritarian abuse," he writes. Bill Hibbard[101] writes that "Human dignity requires that we strive to remove our ignorance of the nature of existence, and AI is necessary for that striving."

Liability for self-driving cars

[edit]

As the widespread use of autonomous cars becomes increasingly imminent, new challenges raised by fully autonomous vehicles must be addressed.[102][103] There have been debates about the legal liability of the responsible party if these cars get into accidents.[104][105] In one report where a driverless car hit a pedestrian, the driver was inside the car but the controls were fully in the hand of computers. This led to a dilemma over who was at fault for the accident.[106]

In another incident on March 18, 2018, Elaine Herzberg was struck and killed by a self-driving Uber in Arizona. In this case, the automated car was capable of detecting cars and certain obstacles in order to autonomously navigate the roadway, but it could not anticipate a pedestrian in the middle of the road. This raised the question of whether the driver, pedestrian, the car company, or the government should be held responsible for her death.[107]

Currently, self-driving cars are considered semi-autonomous, requiring the driver to pay attention and be prepared to take control if necessary.[108][failed verification] Thus, it falls on governments to regulate the driver who over-relies on autonomous features. as well educate them that these are just technologies that, while convenient, are not a complete substitute. Before autonomous cars become widely used, these issues need to be tackled through new policies.[109][110][111]

Experts contend that autonomous vehicles ought to be able to distinguish between rightful and harmful decisions since they have the potential of inflicting harm.[112] The two main approaches proposed to enable smart machines to render moral decisions are the bottom-up approach, which suggests that machines should learn ethical decisions by observing human behavior without the need for formal rules or moral philosophies, and the top-down approach, which involves programming specific ethical principles into the machine's guidance system. However, there are significant challenges facing both strategies: the top-down technique is criticized for its difficulty in preserving certain moral convictions, while the bottom-up strategy is questioned for potentially unethical learning from human activities.

Weaponization

[edit]

Some experts and academics have questioned the use of robots for military combat, especially when such robots are given some degree of autonomous functions.[113] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[114][115] The President of the Association for the Advancement of Artificial Intelligence has commissioned a study to look at this issue.[116] They point to programs like the Language Acquisition Device which can emulate human interaction.

On October 31, 2019, the United States Department of Defense's Defense Innovation Board published the draft of a report recommending principles for the ethical use of artificial intelligence by the Department of Defense that would ensure a human operator would always be able to look into the 'black box' and understand the kill-chain process. However, a major concern is how the report will be implemented.[117] The US Navy has funded a report which indicates that as military robots become more complex, there should be greater attention to implications of their ability to make autonomous decisions.[118][115] Some researchers state that autonomous robots might be more humane, as they could make decisions more effectively.[119] In 2024, the Defense Advanced Research Projects Agency funded a program, Autonomy Standards and Ideals with Military Operational Values (ASIMOV), to develop metrics for evaluating the ethical implications of autonomous weapon systems by testing communities.[120][121]

Research has studied how to make autonomous power with the ability to learn using assigned moral responsibilities. "The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots."[122] From a consequentialist view, there is a chance that robots will develop the ability to make their own logical decisions on whom to kill and that is why there should be a set moral framework that the AI cannot override.[123]

There has been a recent outcry with regard to the engineering of artificial intelligence weapons that have included ideas of a robot takeover of mankind. AI weapons do present a type of danger different from that of human-controlled weapons. Many governments have begun to fund programs to develop AI weaponry. The United States Navy recently announced plans to develop autonomous drone weapons, paralleling similar announcements by Russia and South Korea[124] respectively. Due to the potential of AI weapons becoming more dangerous than human-operated weapons, Stephen Hawking and Max Tegmark signed a "Future of Life" petition[125] to ban AI weapons. The message posted by Hawking and Tegmark states that AI weapons pose an immediate danger and that action is required to avoid catastrophic disasters in the near future.[126]

"If any major military power pushes ahead with the AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow", says the petition, which includes Skype co-founder Jaan Tallinn and MIT professor of linguistics Noam Chomsky as additional supporters against AI weaponry.[127]

Physicist and Astronomer Royal Sir Martin Rees has warned of catastrophic instances like "dumb robots going rogue or a network that develops a mind of its own." Huw Price, a colleague of Rees at Cambridge, has voiced a similar warning that humans might not survive when intelligence "escapes the constraints of biology". These two professors created the Centre for the Study of Existential Risk at Cambridge University in the hope of avoiding this threat to human existence.[126]

Regarding the potential for smarter-than-human systems to be employed militarily, the Open Philanthropy Project writes that these scenarios "seem potentially as important as the risks related to loss of control", but research investigating AI's long-run social impact have spent relatively little time on this concern: "this class of scenarios has not been a major focus for the organizations that have been most active in this space, such as the Machine Intelligence Research Institute (MIRI) and the Future of Humanity Institute (FHI), and there seems to have been less analysis and debate regarding them".[128]

Academic Gao Qiqi writes that military use of AI risks escalating military competition between countries and that the impact of AI in military matters will not be limited to one country but will have spillover effects.[129]: 91  Gao cites the example of U.S. military use of AI, which he contends has been used as a scapegoat to evade accountability for decision-making.[129]: 91 

A summit was held in 2023 in the Hague on the issue of using AI responsibly in the military domain.[130]

Singularity

[edit]

Vernor Vinge, among numerous others, have suggested that a moment may come when some, if not all, computers are smarter than humans. The onset of this event is commonly referred to as "the Singularity"[131] and is the central point of discussion in the philosophy of Singularitarianism. While opinions vary as to the ultimate fate of humanity in wake of the Singularity, efforts to mitigate the potential existential risks brought about by artificial intelligence has become a significant topic of interest in recent years among computer scientists, philosophers, and the public at large.

Many researchers have argued that, through an intelligence explosion, a self-improving AI could become so powerful that humans would not be able to stop it from achieving its goals.[132] In his paper "Ethical Issues in Advanced Artificial Intelligence" and subsequent book Superintelligence: Paths, Dangers, Strategies, philosopher Nick Bostrom argues that artificial intelligence has the capability to bring about human extinction. He claims that an artificial superintelligence would be capable of independent initiative and of making its own plans, and may therefore be more appropriately thought of as an autonomous agent. Since artificial intellects need not share our human motivational tendencies, it would be up to the designers of the superintelligence to specify its original motivations. Because a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[133][134]

However, Bostrom contended that superintelligence also has the potential to solve many difficult problems such as disease, poverty, and environmental destruction, and could help humans enhance themselves.[135]

Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.[136] AI researchers such as Stuart J. Russell,[137] Bill Hibbard,[101] Roman Yampolskiy,[138] Shannon Vallor,[139] Steven Umbrello[140] and Luciano Floridi[141] have proposed design strategies for developing beneficial machines.

Solutions and approaches

[edit]

To address ethical challenges in artificial intelligence, developers have introduced various systems designed to ensure responsible AI behavior. Examples include Nvidia's [142] Llama Guard, which focuses on improving the safety and alignment of large AI models, [143] and Preamble's customizable guardrail platform.[144] These systems aim to address issues such as algorithmic bias, misuse, and vulnerabilities, including prompt injection attacks, by embedding ethical guidelines into the functionality of AI models.

Prompt injection, a technique by which malicious inputs can cause AI systems to produce unintended or harmful outputs, has been a focus of these developments. Some approaches use customizable policies and rules to analyze both inputs and outputs, ensuring that potentially problematic interactions are filtered or mitigated.[144] Other tools focus on applying structured constraints to inputs, restricting outputs to predefined parameters,[145] or leveraging real-time monitoring mechanisms to identify and address vulnerabilities.[146] These efforts reflect a broader trend in ensuring that artificial intelligence systems are designed with safety and ethical considerations at the forefront, particularly as their use becomes increasingly widespread in critical applications.[147]

Institutions in AI policy & ethics

[edit]

There are many organizations concerned with AI ethics and policy, public and governmental as well as corporate and societal.

Amazon, Google, Facebook, IBM, and Microsoft have established a non-profit, The Partnership on AI to Benefit People and Society, to formulate best practices on artificial intelligence technologies, advance the public's understanding, and to serve as a platform about artificial intelligence. Apple joined in January 2017. The corporate members will make financial and research contributions to the group, while engaging with the scientific community to bring academics onto the board.[148]

The IEEE put together a Global Initiative on Ethics of Autonomous and Intelligent Systems which has been creating and revising guidelines with the help of public input, and accepts as members many professionals from within and without its organization. The IEEE's Ethics of Autonomous Systems initiative aims to address ethical dilemmas related to decision-making and the impact on society while developing guidelines for the development and use of autonomous systems. In particular in domains like artificial intelligence and robotics, the Foundation for Responsible Robotics is dedicated to promoting moral behavior as well as responsible robot design and use, ensuring that robots maintain moral principles and are congruent with human values.

Traditionally, government has been used by societies to ensure ethics are observed through legislation and policing. There are now many efforts by national governments, as well as transnational government and non-government organizations to ensure AI is ethically applied.

AI ethics work is structured by personal values and professional commitments, and involves constructing contextual meaning through data and algorithms. Therefore, AI ethics work needs to be incentivized.[149]

Intergovernmental initiatives

[edit]
  • The European Commission has a High-Level Expert Group on Artificial Intelligence. On 8 April 2019, this published its "Ethics Guidelines for Trustworthy Artificial Intelligence".[150] The European Commission also has a Robotics and Artificial Intelligence Innovation and Excellence unit, which published a white paper on excellence and trust in artificial intelligence innovation on 19 February 2020.[151] The European Commission also proposed the Artificial Intelligence Act.[81]
  • The OECD established an OECD AI Policy Observatory.[152]
  • In 2021, UNESCO adopted the Recommendation on the Ethics of Artificial Intelligence,[153] the first global standard on the ethics of AI.[154]

Governmental initiatives

[edit]
  • In the United States the Obama administration put together a Roadmap for AI Policy.[155] The Obama Administration released two prominent white papers on the future and impact of AI. In 2019 the White House through an executive memo known as the "American AI Initiative" instructed NIST the (National Institute of Standards and Technology) to begin work on Federal Engagement of AI Standards (February 2019).[156]
  • In January 2020, in the United States, the Trump Administration released a draft executive order issued by the Office of Management and Budget (OMB) on "Guidance for Regulation of Artificial Intelligence Applications" ("OMB AI Memorandum"). The order emphasizes the need to invest in AI applications, boost public trust in AI, reduce barriers for usage of AI, and keep American AI technology competitive in a global market. There is a nod to the need for privacy concerns, but no further detail on enforcement. The advances of American AI technology seems to be the focus and priority. Additionally, federal entities are even encouraged to use the order to circumnavigate any state laws and regulations that a market might see as too onerous to fulfill.[157]
  • The Computing Community Consortium (CCC) weighed in with a 100-plus page draft report[158]A 20-Year Community Roadmap for Artificial Intelligence Research in the US[159]
  • The Center for Security and Emerging Technology advises US policymakers on the security implications of emerging technologies such as AI.
  • In Russia, the first-ever Russian "Codex of ethics of artificial intelligence" for business was signed in 2021. It was driven by Analytical Center for the Government of the Russian Federation together with major commercial and academic institutions such as Sberbank, Yandex, Rosatom, Higher School of Economics, Moscow Institute of Physics and Technology, ITMO University, Nanosemantics, Rostelecom, CIAN and others.[160]

Academic initiatives

[edit]

Private organizations

[edit]

History

[edit]

Historically speaking, the investigation of moral and ethical implications of "thinking machines" goes back at least to the Enlightenment: Leibniz already poses the question if we might attribute intelligence to a mechanism that behaves as if it were a sentient being,[173] and so does Descartes, who describes what could be considered an early version of the Turing test.[174]

The romantic period has several times envisioned artificial creatures that escape the control of their creator with dire consequences, most famously in Mary Shelley's Frankenstein. The widespread preoccupation with industrialization and mechanization in the 19th and early 20th century, however, brought ethical implications of unhinged technical developments to the forefront of fiction: R.U.R – Rossum's Universal Robots, Karel Čapek's play of sentient robots endowed with emotions used as slave labor is not only credited with the invention of the term 'robot' (derived from the Czech word for forced labor, robota)[175] but was also an international success after it premiered in 1921. George Bernard Shaw's play Back to Methuselah, published in 1921, questions at one point the validity of thinking machines that act like humans; Fritz Lang's 1927 film Metropolis shows an android leading the uprising of the exploited masses against the oppressive regime of a technocratic society. In the 1950s, Isaac Asimov considered the issue of how to control machines in I, Robot. At the insistence of his editor John W. Campbell Jr., he proposed the Three Laws of Robotics to govern artificially intelligent systems. Much of his work was then spent testing the boundaries of his three laws to see where they would break down, or where they would create paradoxical or unanticipated behavior.[176] His work suggests that no set of fixed laws can sufficiently anticipate all possible circumstances.[177] More recently, academics and many governments have challenged the idea that AI can itself be held accountable.[178] A panel convened by the United Kingdom in 2010 revised Asimov's laws to clarify that AI is the responsibility either of its manufacturers, or of its owner/operator.[179]

Eliezer Yudkowsky, from the Machine Intelligence Research Institute suggested in 2004 a need to study how to build a "Friendly AI", meaning that there should also be efforts to make AI intrinsically friendly and humane.[180]

In 2009, academics and technical experts attended a conference organized by the Association for the Advancement of Artificial Intelligence to discuss the potential impact of robots and computers, and the impact of the hypothetical possibility that they could become self-sufficient and make their own decisions. They discussed the possibility and the extent to which computers and robots might be able to acquire any level of autonomy, and to what degree they could use such abilities to possibly pose any threat or hazard.[181] They noted that some machines have acquired various forms of semi-autonomy, including being able to find power sources on their own and being able to independently choose targets to attack with weapons. They also noted that some computer viruses can evade elimination and have achieved "cockroach intelligence". They noted that self-awareness as depicted in science-fiction is probably unlikely, but that there were other potential hazards and pitfalls.[131]

Also in 2009, during an experiment at the Laboratory of Intelligent Systems in the Ecole Polytechnique Fédérale of Lausanne, Switzerland, robots that were programmed to cooperate with each other (in searching out a beneficial resource and avoiding a poisonous one) eventually learned to lie to each other in an attempt to hoard the beneficial resource.[182]

Role and impact of fiction

[edit]

The role of fiction with regards to AI ethics has been a complex one.[183] One can distinguish three levels at which fiction has impacted the development of artificial intelligence and robotics: Historically, fiction has been prefiguring common tropes that have not only influenced goals and visions for AI, but also outlined ethical questions and common fears associated with it. During the second half of the twentieth and the first decades of the twenty-first century, popular culture, in particular movies, TV series and video games have frequently echoed preoccupations and dystopian projections around ethical questions concerning AI and robotics. Recently, these themes have also been increasingly treated in literature beyond the realm of science fiction. And, as Carme Torras, research professor at the Institut de Robòtica i Informàtica Industrial (Institute of robotics and industrial computing) at the Technical University of Catalonia notes,[184] in higher education, science fiction is also increasingly used for teaching technology-related ethical issues in technological degrees.

TV series

[edit]

While ethical questions linked to AI have been featured in science fiction literature and feature films for decades, the emergence of the TV series as a genre allowing for longer and more complex story lines and character development has led to some significant contributions that deal with ethical implications of technology. The Swedish series Real Humans (2012–2013) tackled the complex ethical and social consequences linked to the integration of artificial sentient beings in society. The British dystopian science fiction anthology series Black Mirror (2013–2019) was particularly notable for experimenting with dystopian fictional developments linked to a wide variety of recent technology developments. Both the French series Osmosis (2020) and British series The One deal with the question of what can happen if technology tries to find the ideal partner for a person. Several episodes of the Netflix series Love, Death+Robots have imagined scenes of robots and humans living together. The most representative one of them is S02 E01, it shows how bad the consequences can be when robots get out of control if humans rely too much on them in their lives.[185]

Future visions in fiction and games

[edit]

The movie The Thirteenth Floor suggests a future where simulated worlds with sentient inhabitants are created by computer game consoles for the purpose of entertainment. The movie The Matrix suggests a future where the dominant species on planet Earth are sentient machines and humanity is treated with utmost speciesism. The short story "The Planck Dive" suggests a future where humanity has turned itself into software that can be duplicated and optimized and the relevant distinction between types of software is sentient and non-sentient. The same idea can be found in the Emergency Medical Hologram of Starship Voyager, which is an apparently sentient copy of a reduced subset of the consciousness of its creator, Dr. Zimmerman, who, for the best motives, has created the system to give medical assistance in case of emergencies. The movies Bicentennial Man and A.I. deal with the possibility of sentient robots that could love. I, Robot explored some aspects of Asimov's three laws. All these scenarios try to foresee possibly unethical consequences of the creation of sentient computers.[186]

The ethics of artificial intelligence is one of several core themes in BioWare's Mass Effect series of games.[187] It explores the scenario of a civilization accidentally creating AI through a rapid increase in computational power through a global scale neural network. This event caused an ethical schism between those who felt bestowing organic rights upon the newly sentient Geth was appropriate and those who continued to see them as disposable machinery and fought to destroy them. Beyond the initial conflict, the complexity of the relationship between the machines and their creators is another ongoing theme throughout the story.

Detroit: Become Human is one of the most famous video games which discusses the ethics of artificial intelligence recently. Quantic Dream designed the chapters of the game using interactive storylines to give players a more immersive gaming experience. Players manipulate three different awakened bionic people in the face of different events to make different choices to achieve the purpose of changing the human view of the bionic group and different choices will result in different endings. This is one of the few games that puts players in the bionic perspective, which allows them to better consider the rights and interests of robots once a true artificial intelligence is created.[188]

Over time, debates have tended to focus less and less on possibility and more on desirability,[189] as emphasized in the "Cosmist" and "Terran" debates initiated by Hugo de Garis and Kevin Warwick. A Cosmist, according to Hugo de Garis, is actually seeking to build more intelligent successors to the human species.

Experts at the University of Cambridge have argued that AI is portrayed in fiction and nonfiction overwhelmingly as racially White, in ways that distort perceptions of its risks and benefits.[190]

See also

[edit]

References

[edit]
  1. ^ a b Müller VC (April 30, 2020). "Ethics of Artificial Intelligence and Robotics". Stanford Encyclopedia of Philosophy. Archived from the original on 10 October 2020.
  2. ^ Anderson. "Machine Ethics". Archived from the original on 28 September 2011. Retrieved 27 June 2011.
  3. ^ Anderson M, Anderson SL, eds. (July 2011). Machine Ethics. Cambridge University Press. ISBN 978-0-521-11235-2.
  4. ^ Anderson M, Anderson S (July 2006). "Guest Editors' Introduction: Machine Ethics". IEEE Intelligent Systems. 21 (4): 10–11. doi:10.1109/mis.2006.70. S2CID 9570832.
  5. ^ Anderson M, Anderson SL (15 December 2007). "Machine Ethics: Creating an Ethical Intelligent Agent". AI Magazine. 28 (4): 15. doi:10.1609/aimag.v28i4.2065. S2CID 17033332.
  6. ^ Boyles RJ (2017). "Philosophical Signposts for Artificial Moral Agent Frameworks". Suri. 6 (2): 92–109.
  7. ^ a b Winfield AF, Michael K, Pitt J, Evers V (March 2019). "Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems [Scanning the Issue]". Proceedings of the IEEE. 107 (3): 509–517. doi:10.1109/JPROC.2019.2900622. ISSN 1558-2256. S2CID 77393713.
  8. ^ Al-Rodhan N (7 December 2015). "The Moral Code". Archived from the original on 2017-03-05. Retrieved 2017-03-04.
  9. ^ Sauer M (2022-04-08). "Elon Musk says humans could eventually download their brains into robots — and Grimes thinks Jeff Bezos would do it". CNBC. Archived from the original on 2024-09-25. Retrieved 2024-04-07.
  10. ^ Anadiotis G (April 4, 2022). "Massaging AI language models for fun, profit and ethics". ZDNET. Archived from the original on 2024-09-25. Retrieved 2024-04-07.
  11. ^ Wallach W, Allen C (November 2008). Moral Machines: Teaching Robots Right from Wrong. USA: Oxford University Press. ISBN 978-0-19-537404-9.
  12. ^ Bostrom N, Yudkowsky E (2011). "The Ethics of Artificial Intelligence" (PDF). Cambridge Handbook of Artificial Intelligence. Cambridge Press. Archived (PDF) from the original on 2016-03-04. Retrieved 2011-06-22.
  13. ^ Santos-Lang C (2002). "Ethics for Artificial Intelligences". Archived from the original on 2014-12-25. Retrieved 2015-01-04.
  14. ^ Veruggio, Gianmarco (2011). "The Roboethics Roadmap". EURON Roboethics Atelier. Scuola di Robotica: 2. CiteSeerX 10.1.1.466.2810.
  15. ^ Müller VC (2020), "Ethics of Artificial Intelligence and Robotics", in Zalta EN (ed.), The Stanford Encyclopedia of Philosophy (Winter 2020 ed.), Metaphysics Research Lab, Stanford University, archived from the original on 2021-04-12, retrieved 2021-03-18
  16. ^ a b Jobin A, Ienca M, Vayena E (2 September 2020). "The global landscape of AI ethics guidelines". Nature. 1 (9): 389–399. arXiv:1906.11668. doi:10.1038/s42256-019-0088-2. S2CID 201827642.
  17. ^ Floridi L, Cowls J (2 July 2019). "A Unified Framework of Five Principles for AI in Society". Harvard Data Science Review. 1. doi:10.1162/99608f92.8cd550d1. S2CID 198775713.
  18. ^ Gabriel I (2018-03-14). "The case for fairer algorithms – Iason Gabriel". Medium. Archived from the original on 2019-07-22. Retrieved 2019-07-22.
  19. ^ "5 unexpected sources of bias in artificial intelligence". TechCrunch. 10 December 2016. Archived from the original on 2021-03-18. Retrieved 2019-07-22.
  20. ^ Knight W. "Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead". MIT Technology Review. Archived from the original on 2019-07-04. Retrieved 2019-07-22.
  21. ^ Villasenor J (2019-01-03). "Artificial intelligence and bias: Four key challenges". Brookings. Archived from the original on 2019-07-22. Retrieved 2019-07-22.
  22. ^ Lohr S (9 February 2018). "Facial Recognition Is Accurate, if You're a White Guy". The New York Times. Archived from the original on 9 January 2019. Retrieved 29 May 2019.
  23. ^ Koenecke A, Nam A, Lake E, Nudell J, Quartey M, Mengesha Z, Toups C, Rickford JR, Jurafsky D, Goel S (7 April 2020). "Racial disparities in automated speech recognition". Proceedings of the National Academy of Sciences. 117 (14): 7684–7689. Bibcode:2020PNAS..117.7684K. doi:10.1073/pnas.1915768117. PMC 7149386. PMID 32205437.
  24. ^ Ntoutsi E, Fafalios P, Gadiraju U, Iosifidis V, Nejdl W, Vidal ME, Ruggieri S, Turini F, Papadopoulos S, Krasanakis E, Kompatsiaris I, Kinder-Kurlanda K, Wagner C, Karimi F, Fernandez M (May 2020). "Bias in data-driven artificial intelligence systems—An introductory survey". WIREs Data Mining and Knowledge Discovery. 10 (3). doi:10.1002/widm.1356. ISSN 1942-4787. Archived from the original on 2024-09-25. Retrieved 2023-12-14.
  25. ^ "Amazon scraps secret AI recruiting tool that showed bias against women". Reuters. 2018-10-10. Archived from the original on 2019-05-27. Retrieved 2019-05-29.
  26. ^ Friedman B, Nissenbaum H (July 1996). "Bias in computer systems". ACM Transactions on Information Systems. 14 (3): 330–347. doi:10.1145/230538.230561. S2CID 207195759.
  27. ^ "Eliminating bias in AI". techxplore.com. Archived from the original on 2019-07-25. Retrieved 2019-07-26.
  28. ^ Abdalla M, Wahle JP, Ruas T, Névéol A, Ducel F, Mohammad S, Fort K (2023). Rogers A, Boyd-Graber J, Okazaki N (eds.). "The Elephant in the Room: Analyzing the Presence of Big Tech in Natural Language Processing Research". Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Toronto, Canada: Association for Computational Linguistics: 13141–13160. arXiv:2305.02797. doi:10.18653/v1/2023.acl-long.734. Archived from the original on 2024-09-25. Retrieved 2023-11-13.
  29. ^ Olson P. "Google's DeepMind Has An Idea For Stopping Biased AI". Forbes. Archived from the original on 2019-07-26. Retrieved 2019-07-26.
  30. ^ "Machine Learning Fairness | ML Fairness". Google Developers. Archived from the original on 2019-08-10. Retrieved 2019-07-26.
  31. ^ "AI and bias – IBM Research – US". www.research.ibm.com. Archived from the original on 2019-07-17. Retrieved 2019-07-26.
  32. ^ Bender EM, Friedman B (December 2018). "Data Statements for Natural Language Processing: Toward Mitigating System Bias and Enabling Better Science". Transactions of the Association for Computational Linguistics. 6: 587–604. doi:10.1162/tacl_a_00041.
  33. ^ Gebru T, Morgenstern J, Vecchione B, Vaughan JW, Wallach H, Daumé III H, Crawford K (2018). "Datasheets for Datasets". arXiv:1803.09010 [cs.DB].
  34. ^ Pery A (2021-10-06). "Trustworthy Artificial Intelligence and Process Mining: Challenges and Opportunities". DeepAI. Archived from the original on 2022-02-18. Retrieved 2022-02-18.
  35. ^ Knight W. "Google's AI chief says forget Elon Musk's killer robots, and worry about bias in AI systems instead". MIT Technology Review. Archived from the original on 2019-07-04. Retrieved 2019-07-26.
  36. ^ "Where in the World is AI? Responsible & Unethical AI Examples". Archived from the original on 2020-10-31. Retrieved 2020-10-28.
  37. ^ Ruggieri S, Alvarez JM, Pugnana A, State L, Turini F (2023-06-26). "Can We Trust Fair-AI?". Proceedings of the AAAI Conference on Artificial Intelligence. 37 (13). Association for the Advancement of Artificial Intelligence (AAAI): 15421–15430. doi:10.1609/aaai.v37i13.26798. hdl:11384/136444. ISSN 2374-3468. S2CID 259678387.
  38. ^ Buyl M, De Bie T (2022). "Inherent Limitations of AI Fairness". Communications of the ACM. 67 (2): 48–55. arXiv:2212.06495. doi:10.1145/3624700. hdl:1854/LU-01GMNH04RGNVWJ730BJJXGCY99.
  39. ^ Castelnovo A, Inverardi N, Nanino G, Penco IG, Regoli D (2023). "Fair Enough? A map of the current limitations of the requirements to have "fair" algorithms". arXiv:2311.12435 [cs.AI].
  40. ^ Federspiel F, Mitchell R, Asokan A, Umana C, McCoy D (May 2023). "Threats by artificial intelligence to human health and human existence". BMJ Global Health. 8 (5): e010435. doi:10.1136/bmjgh-2022-010435. ISSN 2059-7908. PMC 10186390. PMID 37160371. Archived from the original on 2024-09-25. Retrieved 2024-04-21.
  41. ^ a b Spindler G (2023), "Different approaches for liability of Artificial Intelligence – Pros and Cons", Liability for AI, Nomos Verlagsgesellschaft mbH & Co. KG, pp. 41–96, doi:10.5771/9783748942030-41, ISBN 978-3-7489-4203-0, archived from the original on 2024-09-25, retrieved 2023-12-14
  42. ^ Manyika J (2022). "Getting AI Right: Introductory Notes on AI & Society". Daedalus. 151 (2): 5–27. doi:10.1162/daed_e_01897. ISSN 0011-5266.
  43. ^ Imran A, Posokhova I, Qureshi HN, Masood U, Riaz MS, Ali K, John CN, Hussain MI, Nabeel M (2020-01-01). "AI4COVID-19: AI enabled preliminary diagnosis for COVID-19 from cough samples via an app". Informatics in Medicine Unlocked. 20: 100378. doi:10.1016/j.imu.2020.100378. ISSN 2352-9148. PMC 7318970. PMID 32839734.
  44. ^ Cirillo D, Catuara-Solarz S, Morey C, Guney E, Subirats L, Mellino S, Gigante A, Valencia A, Rementeria MJ, Chadha AS, Mavridis N (2020-06-01). "Sex and gender differences and biases in artificial intelligence for biomedicine and healthcare". npj Digital Medicine. 3 (1): 81. doi:10.1038/s41746-020-0288-5. ISSN 2398-6352. PMC 7264169. PMID 32529043.
  45. ^ Christian B (2021). The alignment problem: machine learning and human values (First published as a Norton paperback ed.). New York, NY: W. W. Norton & Company. ISBN 978-0-393-86833-3.
  46. ^ Ntoutsi E, Fafalios P, Gadiraju U, Iosifidis V, Nejdl W, Vidal ME, Ruggieri S, Turini F, Papadopoulos S, Krasanakis E, Kompatsiaris I, Kinder-Kurlanda K, Wagner C, Karimi F, Fernandez M (May 2020). "Bias in data-driven artificial intelligence systems—An introductory survey". WIREs Data Mining and Knowledge Discovery. 10 (3). doi:10.1002/widm.1356. ISSN 1942-4787.
  47. ^ Luo Q, Puett MJ, Smith MD (2023-03-28). "A Perspectival Mirror of the Elephant: Investigating Language Bias on Google, ChatGPT, Wikipedia, and YouTube". arXiv:2303.16281v2 [cs.CY].
  48. ^ Busker T, Choenni S, Shoae Bargh M (2023-11-20). "Stereotypes in ChatGPT: An empirical study". Proceedings of the 16th International Conference on Theory and Practice of Electronic Governance. ICEGOV '23. New York, NY, USA: Association for Computing Machinery. pp. 24–32. doi:10.1145/3614321.3614325. ISBN 979-8-4007-0742-1.
  49. ^ Kotek H, Dockum R, Sun D (2023-11-05). "Gender bias and stereotypes in Large Language Models". Proceedings of the ACM Collective Intelligence Conference. CI '23. New York, NY, USA: Association for Computing Machinery. pp. 12–24. arXiv:2308.14921. doi:10.1145/3582269.3615599. ISBN 979-8-4007-0113-9.
  50. ^ Federspiel F, Mitchell R, Asokan A, Umana C, McCoy D (May 2023). "Threats by artificial intelligence to human health and human existence". BMJ Global Health. 8 (5): e010435. doi:10.1136/bmjgh-2022-010435. ISSN 2059-7908. PMC 10186390. PMID 37160371.
  51. ^ Feng S, Park CY, Liu Y, Tsvetkov Y (July 2023). Rogers A, Boyd-Graber J, Okazaki N (eds.). "From Pretraining Data to Language Models to Downstream Tasks: Tracking the Trails of Political Biases Leading to Unfair NLP Models". Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Toronto, Canada: Association for Computational Linguistics: 11737–11762. arXiv:2305.08283. doi:10.18653/v1/2023.acl-long.656.
  52. ^ Zhou K, Tan C (December 2023). Bouamor H, Pino J, Bali K (eds.). "Entity-Based Evaluation of Political Bias in Automatic Summarization". Findings of the Association for Computational Linguistics: EMNLP 2023. Singapore: Association for Computational Linguistics: 10374–10386. arXiv:2305.02321. doi:10.18653/v1/2023.findings-emnlp.696. Archived from the original on 2024-04-24. Retrieved 2023-12-25.
  53. ^ Cheng M, Durmus E, Jurafsky D (2023-05-29). "Marked Personas: Using Natural Language Prompts to Measure Stereotypes in Language Models". arXiv:2305.18189v1 [cs.CL].
  54. ^ Hammond G (27 December 2023). "Big Tech is spending more than VC firms on AI startups". Ars Technica. Archived from the original on Jan 10, 2024.
  55. ^ Wong M (24 October 2023). "The Future of AI Is GOMA". The Atlantic. Archived from the original on Jan 5, 2024.
  56. ^ "Big tech and the pursuit of AI dominance". The Economist. Mar 26, 2023. Archived from the original on Dec 29, 2023.
  57. ^ Fung B (19 December 2023). "Where the battle to dominate AI may be won". CNN Business. Archived from the original on Jan 13, 2024.
  58. ^ Metz C (5 July 2023). "In the Age of A.I., Tech's Little Guys Need Big Friends". The New York Times. Archived from the original on 8 July 2024. Retrieved 17 July 2024.
  59. ^ Open Source AI. Archived 2016-03-04 at the Wayback Machine Bill Hibbard. 2008 proceedings Archived 2024-09-25 at the Wayback Machine of the First Conference on Artificial General Intelligence, eds. Pei Wang, Ben Goertzel, and Stan Franklin.
  60. ^ Stewart A, Melton M. "Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup". Business Insider. Archived from the original on 2024-09-25. Retrieved 2024-04-07.
  61. ^ "The open-source AI boom is built on Big Tech's handouts. How long will it last?". MIT Technology Review. Archived from the original on 2024-01-05. Retrieved 2024-04-07.
  62. ^ Yao D (February 21, 2024). "Google Unveils Open Source Models to Rival Meta, Mistral". AI Business.
  63. ^ 7001-2021 - IEEE Standard for Transparency of Autonomous Systems. IEEE. 4 March 2022. pp. 1–54. doi:10.1109/IEEESTD.2022.9726144. ISBN 978-1-5044-8311-7. S2CID 252589405. Archived from the original on 26 July 2023. Retrieved 9 July 2023..
  64. ^ Kamila MK, Jasrotia SS (2023-01-01). "Ethical issues in the development of artificial intelligence: recognizing the risks". International Journal of Ethics and Systems. doi:10.1108/IJOES-05-2023-0107. ISSN 2514-9369. S2CID 259614124.
  65. ^ Thurm S (July 13, 2018). "Microsoft Calls For Federal Regulation of Facial Recognition". Wired. Archived from the original on May 9, 2019. Retrieved January 10, 2019.
  66. ^ Piper K (2024-02-02). "Should we make our most powerful AI models open source to all?". Vox. Retrieved 2024-04-07.
  67. ^ Vincent J (2023-03-15). "OpenAI co-founder on company's past approach to openly sharing research: "We were wrong"". The Verge. Archived from the original on 2023-03-17. Retrieved 2024-04-07.
  68. ^ Inside The Mind Of A.I. Archived 2021-08-10 at the Wayback Machine - Cliff Kuang interview
  69. ^ Bunn J (2020-04-13). "Working in contexts for which transparency is important: A recordkeeping view of explainable artificial intelligence (XAI)". Records Management Journal. 30 (2): 143–153. doi:10.1108/RMJ-08-2019-0038. ISSN 0956-5698. S2CID 219079717.
  70. ^ Li F, Ruijs N, Lu Y (2022-12-31). "Ethics & AI: A Systematic Review on Ethical Concerns and Related Strategies for Designing with AI in Healthcare". AI. 4 (1): 28–53. doi:10.3390/ai4010003. ISSN 2673-2688.
  71. ^ Howard A (29 July 2019). "The Regulation of AI – Should Organizations Be Worried? | Ayanna Howard". MIT Sloan Management Review. Archived from the original on 2019-08-14. Retrieved 2019-08-14.
  72. ^ "Trust in artificial intelligence - A five country study" (PDF). KPMG. March 2021. Archived (PDF) from the original on 2023-10-01. Retrieved 2023-10-06.
  73. ^ Bastin R, Wantz G (June 2017). "The General Data Protection Regulation Cross-industry innovation" (PDF). Inside magazine. Deloitte. Archived (PDF) from the original on 2019-01-10. Retrieved 2019-01-10.
  74. ^ "UN artificial intelligence summit aims to tackle poverty, humanity's 'grand challenges'". UN News. 2017-06-07. Archived from the original on 2019-07-26. Retrieved 2019-07-26.
  75. ^ "Artificial intelligence – Organisation for Economic Co-operation and Development". www.oecd.org. Archived from the original on 2019-07-22. Retrieved 2019-07-26.
  76. ^ Anonymous (2018-06-14). "The European AI Alliance". Digital Single Market – European Commission. Archived from the original on 2019-08-01. Retrieved 2019-07-26.
  77. ^ European Commission High-Level Expert Group on AI (2019-06-26). "Policy and investment recommendations for trustworthy Artificial Intelligence". Shaping Europe’s digital future – European Commission. Archived from the original on 2020-02-26. Retrieved 2020-03-16.
  78. ^ Fukuda-Parr S, Gibbons E (July 2021). "Emerging Consensus on 'Ethical AI': Human Rights Critique of Stakeholder Guidelines". Global Policy. 12 (S6): 32–44. doi:10.1111/1758-5899.12965. ISSN 1758-5880.
  79. ^ "EU Tech Policy Brief: July 2019 Recap". Center for Democracy & Technology. 2 August 2019. Archived from the original on 2019-08-09. Retrieved 2019-08-09.
  80. ^ Curtis C, Gillespie N, Lockey S (2022-05-24). "AI-deploying organizations are key to addressing 'perfect storm' of AI risks". AI and Ethics. 3 (1): 145–153. doi:10.1007/s43681-022-00163-7. ISSN 2730-5961. PMC 9127285. PMID 35634256. Archived from the original on 2023-03-15. Retrieved 2022-05-29.
  81. ^ a b "Why the world needs a Bill of Rights on AI". Financial Times. 2021-10-18. Retrieved 2023-03-19.
  82. ^ Challen R, Denny J, Pitt M, Gompels L, Edwards T, Tsaneva-Atanasova K (March 2019). "Artificial intelligence, bias and clinical safety". BMJ Quality & Safety. 28 (3): 231–237. doi:10.1136/bmjqs-2018-008370. ISSN 2044-5415. PMC 6560460. PMID 30636200.
  83. ^ Evans W (2015). "Posthuman Rights: Dimensions of Transhuman Worlds". Teknokultura. 12 (2). doi:10.5209/rev_TK.2015.v12.n2.49072.
  84. ^ Sheliazhenko Y (2017). "Artificial Personal Autonomy and Concept of Robot Rights". European Journal of Law and Political Sciences: 17–21. doi:10.20534/EJLPS-17-1-17-21. Archived from the original on 14 July 2018. Retrieved 10 May 2017.
  85. ^ Doomen J (2023). "The artificial intelligence entity as a legal person". Information & Communications Technology Law. 32 (3): 277–278. doi:10.1080/13600834.2023.2196827. hdl:1820/c29a3daa-9e36-4640-85d3-d0ffdd18a62c.
  86. ^ "Robots could demand legal rights". BBC News. December 21, 2006. Archived from the original on October 15, 2019. Retrieved January 3, 2010.
  87. ^ Henderson M (April 24, 2007). "Human rights for robots? We're getting carried away". The Times Online. The Times of London. Archived from the original on May 17, 2008. Retrieved May 2, 2010.
  88. ^ "Saudi Arabia bestows citizenship on a robot named Sophia". 26 October 2017. Archived from the original on 2017-10-27. Retrieved 2017-10-27.
  89. ^ Vincent J (30 October 2017). "Pretending to give a robot citizenship helps no one". The Verge. Archived from the original on 3 August 2019. Retrieved 10 January 2019.
  90. ^ Wilks, Yorick, ed. (2010). Close engagements with artificial companions: key social, psychological, ethical and design issues. Amsterdam: John Benjamins Pub. Co. ISBN 978-90-272-4994-4. OCLC 642206106.
  91. ^ Macrae C (September 2022). "Learning from the Failure of Autonomous and Intelligent Systems: Accidents, Safety, and Sociotechnical Sources of Risk". Risk Analysis. 42 (9): 1999–2025. Bibcode:2022RiskA..42.1999M. doi:10.1111/risa.13850. ISSN 0272-4332. PMID 34814229.
  92. ^ Agarwal A, Edelman S (2020). "Functionally effective conscious AI without suffering". Journal of Artificial Intelligence and Consciousness. 7: 39–50. arXiv:2002.05652. doi:10.1142/S2705078520300030. S2CID 211096533.
  93. ^ Thomas Metzinger (February 2021). "Artificial Suffering: An Argument for a Global Moratorim on Synthetic Phenomenology". Journal of Artificial Intelligence and Consciousness. 8: 43–66. doi:10.1142/S270507852150003X. S2CID 233176465.
  94. ^ Chalmers D (March 2023). "Could a Large Language Model be Conscious?". arXiv:2303.07103v1 [Science Computer Science].
  95. ^ Birch J (2017-01-01). "Animal sentience and the precautionary principle". Animal Sentience. 2 (16). doi:10.51291/2377-7478.1200. ISSN 2377-7478. Archived from the original on 2024-08-11. Retrieved 2024-07-08.
  96. ^ Shulman C, Bostrom N (August 2021). "Sharing the World with Digital Minds" (PDF). Rethinking Moral Status.
  97. ^ Fisher R (13 November 2020). "The intelligent monster that you should let eat you". BBC News. Retrieved 12 February 2021.
  98. ^ a b
  99. ^ a b Joseph Weizenbaum, quoted in McCorduck 2004, pp. 356, 374–376
  100. ^ Kaplan A, Haenlein M (January 2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62 (1): 15–25. doi:10.1016/j.bushor.2018.08.004. S2CID 158433736.
  101. ^ a b Hibbard B (17 November 2015). "Ethical Artificial Intelligence". arXiv:1411.1373 [cs.AI].
  102. ^ Davies A (29 February 2016). "Google's Self-Driving Car Caused Its First Crash". Wired. Archived from the original on 7 July 2019. Retrieved 26 July 2019.
  103. ^ Levin S, Wong JC (19 March 2018). "Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian". The Guardian. Archived from the original on 26 July 2019. Retrieved 26 July 2019.
  104. ^ "Who is responsible when a self-driving car has an accident?". Futurism. 30 January 2018. Archived from the original on 2019-07-26. Retrieved 2019-07-26.
  105. ^ "Autonomous Car Crashes: Who – or What – Is to Blame?". Knowledge@Wharton. Law and Public Policy. Radio Business North America Podcasts. Archived from the original on 2019-07-26. Retrieved 2019-07-26.
  106. ^ Delbridge E. "Driverless Cars Gone Wild". The Balance. Archived from the original on 2019-05-29. Retrieved 2019-05-29.
  107. ^ Stilgoe J (2020), "Who Killed Elaine Herzberg?", Who’s Driving Innovation?, Cham: Springer International Publishing, pp. 1–6, doi:10.1007/978-3-030-32320-2_1, ISBN 978-3-030-32319-6, S2CID 214359377, archived from the original on 2021-03-18, retrieved 2020-11-11
  108. ^ Maxmen A (October 2018). "Self-driving car dilemmas reveal that moral choices are not universal". Nature. 562 (7728): 469–470. Bibcode:2018Natur.562..469M. doi:10.1038/d41586-018-07135-0. PMID 30356197.
  109. ^ "Regulations for driverless cars". GOV.UK. Archived from the original on 2019-07-26. Retrieved 2019-07-26.
  110. ^ "Automated Driving: Legislative and Regulatory Action – CyberWiki". cyberlaw.stanford.edu. Archived from the original on 2019-07-26. Retrieved 2019-07-26.
  111. ^ "Autonomous Vehicles | Self-Driving Vehicles Enacted Legislation". www.ncsl.org. Archived from the original on 2019-07-26. Retrieved 2019-07-26.
  112. ^ Etzioni A, Etzioni O (2017-12-01). "Incorporating Ethics into Artificial Intelligence". The Journal of Ethics. 21 (4): 403–418. doi:10.1007/s10892-017-9252-2. ISSN 1572-8609. S2CID 254644745.
  113. ^ Call for debate on killer robots Archived 2009-08-07 at the Wayback Machine, By Jason Palmer, Science and technology reporter, BBC News, 8/3/09.
  114. ^ Science New Navy-funded Report Warns of War Robots Going "Terminator" Archived 2009-07-28 at the Wayback Machine, by Jason Mick (Blog), dailytech.com, February 17, 2009.
  115. ^ a b Navy report warns of robot uprising, suggests a strong moral compass Archived 2011-06-04 at the Wayback Machine, by Joseph L. Flatley engadget.com, Feb 18th 2009.
  116. ^ AAAI Presidential Panel on Long-Term AI Futures 2008–2009 Study Archived 2009-08-28 at the Wayback Machine, Association for the Advancement of Artificial Intelligence, Accessed 7/26/09.
  117. ^ United States. Defense Innovation Board. AI principles: recommendations on the ethical use of artificial intelligence by the Department of Defense. OCLC 1126650738.
  118. ^ New Navy-funded Report Warns of War Robots Going "Terminator" Archived 2009-07-28 at the Wayback Machine, by Jason Mick (Blog), dailytech.com, February 17, 2009.
  119. ^ Umbrello S, Torres P, De Bellis AF (March 2020). "The future of war: could lethal autonomous weapons make conflict more ethical?". AI & Society. 35 (1): 273–282. doi:10.1007/s00146-019-00879-x. hdl:2318/1699364. ISSN 0951-5666. S2CID 59606353. Archived from the original on 2021-01-05. Retrieved 2020-11-11.
  120. ^ Jamison M (2024-12-20). "DARPA Launches Ethics Program for Autonomous Systems". executivegov.com. Retrieved 2025-01-02.
  121. ^ "DARPA's ASIMOV seeks to develop Ethical Standards for Autonomous Systems". Space Daily. Retrieved 2025-01-02.
  122. ^ Hellström T (June 2013). "On the moral responsibility of military robots". Ethics and Information Technology. 15 (2): 99–107. doi:10.1007/s10676-012-9301-2. S2CID 15205810. ProQuest 1372020233.
  123. ^ Mitra A (5 April 2018). "We can train AI to identify good and evil, and then use it to teach us morality". Quartz. Archived from the original on 2019-07-26. Retrieved 2019-07-26.
  124. ^ Dominguez G (23 August 2022). "South Korea developing new stealthy drones to support combat aircraft". The Japan Times. Retrieved 14 June 2023.
  125. ^ "AI Principles". Future of Life Institute. 11 August 2017. Archived from the original on 2017-12-11. Retrieved 2019-07-26.
  126. ^ a b Zach Musgrave and Bryan W. Roberts (2015-08-14). "Why Artificial Intelligence Can Too Easily Be Weaponized – The Atlantic". The Atlantic. Archived from the original on 2017-04-11. Retrieved 2017-03-06.
  127. ^ Cat Zakrzewski (2015-07-27). "Musk, Hawking Warn of Artificial Intelligence Weapons". WSJ. Archived from the original on 2015-07-28. Retrieved 2017-08-04.
  128. ^ "Potential Risks from Advanced Artificial Intelligence". Open Philanthropy. August 11, 2015. Retrieved 2024-04-07.
  129. ^ a b Bachulska A, Leonard M, Oertel J (2 July 2024). The Idea of China: Chinese Thinkers on Power, Progress, and People (EPUB). Berlin, Germany: European Council on Foreign Relations. ISBN 978-1-916682-42-9. Archived from the original on 17 July 2024. Retrieved 22 July 2024.
  130. ^ Brandon Vigliarolo. "International military AI summit ends with 60-state pledge". www.theregister.com. Retrieved 2023-02-17.
  131. ^ a b Markoff J (25 July 2009). "Scientists Worry Machines May Outsmart Man". The New York Times. Archived from the original on 25 February 2017. Retrieved 24 February 2017.
  132. ^ Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics" Archived 2015-05-07 at the Wayback Machine. In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin: Springer.
  133. ^ Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence" Archived 2018-10-08 at the Wayback Machine. In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, 12–17. Vol. 2. Windsor, ON: International Institute for Advanced Studies in Systems Research / Cybernetics.
  134. ^ Bostrom N (2017). Superintelligence: paths, dangers, strategies. Oxford, United Kingdom: Oxford University Press. ISBN 978-0-19-967811-2.
  135. ^ Umbrello S, Baum SD (2018-06-01). "Evaluating future nanotechnology: The net societal impacts of atomically precise manufacturing". Futures. 100: 63–73. doi:10.1016/j.futures.2018.04.007. hdl:2318/1685533. ISSN 0016-3287. S2CID 158503813. Archived from the original on 2019-05-09. Retrieved 2020-11-29.
  136. ^ Yudkowsky, Eliezer. 2011. "Complex Value Systems in Friendly AI" Archived 2015-09-29 at the Wayback Machine. In Schmidhuber, Thórisson, and Looks 2011, 388–393.
  137. ^ Russell S (October 8, 2019). Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
  138. ^ Yampolskiy RV (2020-03-01). "Unpredictability of AI: On the Impossibility of Accurately Predicting All Actions of a Smarter Agent". Journal of Artificial Intelligence and Consciousness. 07 (1): 109–118. doi:10.1142/S2705078520500034. ISSN 2705-0785. S2CID 218916769. Archived from the original on 2021-03-18. Retrieved 2020-11-29.
  139. ^ Wallach W, Vallor S (2020-09-17), "Moral Machines: From Value Alignment to Embodied Virtue", Ethics of Artificial Intelligence, Oxford University Press, pp. 383–412, doi:10.1093/oso/9780190905033.003.0014, ISBN 978-0-19-090503-3, archived from the original on 2020-12-08, retrieved 2020-11-29
  140. ^ Umbrello S (2019). "Beneficial Artificial Intelligence Coordination by Means of a Value Sensitive Design Approach". Big Data and Cognitive Computing. 3 (1): 5. doi:10.3390/bdcc3010005. hdl:2318/1685727.
  141. ^ Floridi L, Cowls J, King TC, Taddeo M (2020). "How to Design AI for Social Good: Seven Essential Factors". Science and Engineering Ethics. 26 (3): 1771–1796. doi:10.1007/s11948-020-00213-5. ISSN 1353-3452. PMC 7286860. PMID 32246245.
  142. ^ "NeMo Guardrails". NeMo Guardrails. Retrieved 2024-12-06.
  143. ^ "Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations". Meta.com. Retrieved 2024-12-06.
  144. ^ a b Šekrst K, McHugh J, Cefalu JR (2024). "AI Ethics by Design: Implementing Customizable Guardrails for Responsible AI Development". arXiv:2411.14442 [cs.CY].
  145. ^ "NVIDIA NeMo Guardrails". NVIDIA NeMo Guardrails. Retrieved 2024-12-06.
  146. ^ Inan H, Upasani K, Chi J, Rungta R, Iyer K, Mao Y, Tontchev M, Hu Q, Fuller B, Testuggine D, Khabsa M (2023). "Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations". arXiv:2312.06674 [cs.CL].
  147. ^ Dong Y, Mu R, Jin G, Qi Y, Hu J, Zhao X, Meng J, Ruan W, Huang X (2024). "Building Guardrails for Large Language Models". arXiv:2402.01822 [cs].
  148. ^ Fiegerman S (28 September 2016). "Facebook, Google, Amazon create group to ease AI concerns". CNNMoney. Archived from the original on 17 September 2020. Retrieved 18 August 2020.
  149. ^ Slota SC, Fleischmann KR, Greenberg S, Verma N, Cummings B, Li L, Shenefiel C (2023). "Locating the work of artificial intelligence ethics". Journal of the Association for Information Science and Technology. 74 (3): 311–322. doi:10.1002/asi.24638. ISSN 2330-1635. S2CID 247342066. Archived from the original on 2024-09-25. Retrieved 2023-07-21.
  150. ^ "Ethics guidelines for trustworthy AI". Shaping Europe’s digital future – European Commission. European Commission. 2019-04-08. Archived from the original on 2020-02-20. Retrieved 2020-02-20.
  151. ^ "White Paper on Artificial Intelligence – a European approach to excellence and trust | Shaping Europe's digital future". 19 February 2020. Archived from the original on 2021-03-06. Retrieved 2021-03-18.
  152. ^ "OECD AI Policy Observatory". Archived from the original on 2021-03-08. Retrieved 2021-03-18.
  153. ^ Recommendation on the Ethics of Artificial Intelligence. UNESCO. 2021.
  154. ^ "UNESCO member states adopt first global agreement on AI ethics". Helsinki Times. 2021-11-26. Archived from the original on 2024-09-25. Retrieved 2023-04-26.
  155. ^ "The Obama Administration's Roadmap for AI Policy". Harvard Business Review. 2016-12-21. ISSN 0017-8012. Archived from the original on 2021-01-22. Retrieved 2021-03-16.
  156. ^ "Accelerating America's Leadership in Artificial Intelligence – The White House". trumpwhitehouse.archives.gov. Archived from the original on 2021-02-25. Retrieved 2021-03-16.
  157. ^ "Request for Comments on a Draft Memorandum to the Heads of Executive Departments and Agencies, "Guidance for Regulation of Artificial Intelligence Applications"". Federal Register. 2020-01-13. Archived from the original on 2020-11-25. Retrieved 2020-11-28.
  158. ^ "CCC Offers Draft 20-Year AI Roadmap; Seeks Comments". HPCwire. 2019-05-14. Archived from the original on 2021-03-18. Retrieved 2019-07-22.
  159. ^ "Request Comments on Draft: A 20-Year Community Roadmap for AI Research in the US » CCC Blog". 13 May 2019. Archived from the original on 2019-05-14. Retrieved 2019-07-22.
  160. ^ (in Russian) Интеллектуальные правила Archived 2021-12-30 at the Wayback MachineKommersant, 25.11.2021
  161. ^ Grace K, Salvatier J, Dafoe A, Zhang B, Evans O (2018-05-03). "When Will AI Exceed Human Performance? Evidence from AI Experts". arXiv:1705.08807 [cs.AI].
  162. ^ "China wants to shape the global future of artificial intelligence". MIT Technology Review. Archived from the original on 2020-11-20. Retrieved 2020-11-29.
  163. ^ Floridi L, Cowls J, Beltrametti M, Chatila R, Chazerand P, Dignum V, Luetge C, Madelin R, Pagallo U, Rossi F, Schafer B (2018-12-01). "AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations". Minds and Machines. 28 (4): 689–707. doi:10.1007/s11023-018-9482-5. ISSN 1572-8641. PMC 6404626. PMID 30930541.
  164. ^ "Joanna J. Bryson". WIRED. Archived from the original on 15 March 2023. Retrieved 13 January 2023.
  165. ^ "New Artificial Intelligence Research Institute Launches". 2017-11-20. Archived from the original on 2020-09-18. Retrieved 2021-02-21.
  166. ^ James J. Hughes, LaGrandeur, Kevin, eds. (15 March 2017). Surviving the machine age: intelligent technology and the transformation of human work. Cham, Switzerland: Palgrave Macmillan Cham. ISBN 978-3-319-51165-8. OCLC 976407024. Archived from the original on 18 March 2021. Retrieved 29 November 2020.
  167. ^ Danaher, John (2019). Automation and utopia: human flourishing in a world without work. Cambridge, Massachusetts: Harvard University Press. ISBN 978-0-674-24220-3. OCLC 1114334813.
  168. ^ "TUM Institute for Ethics in Artificial Intelligence officially opened". www.tum.de. Archived from the original on 2020-12-10. Retrieved 2020-11-29.
  169. ^ Communications PK (2019-01-25). "Harvard works to embed ethics in computer science curriculum". Harvard Gazette. Archived from the original on 2024-09-25. Retrieved 2023-04-06.
  170. ^ Lee J (2020-02-08). "When Bias Is Coded Into Our Technology". NPR. Retrieved 2021-12-22.
  171. ^ "How one conference embraced diversity". Nature. 564 (7735): 161–162. 2018-12-12. doi:10.1038/d41586-018-07718-x. PMID 31123357. S2CID 54481549.
  172. ^ Roose K (2020-12-30). "The 2020 Good Tech Awards". The New York Times. ISSN 0362-4331. Retrieved 2021-12-21.
  173. ^ Lodge P (2014). "Leibniz's Mill Argument Against Mechanical Materialism Revisited". Ergo, an Open Access Journal of Philosophy. 1 (20201214). doi:10.3998/ergo.12405314.0001.003. hdl:2027/spo.12405314.0001.003. ISSN 2330-4014.
  174. ^ Bringsjord S, Govindarajulu NS (2020), "Artificial Intelligence", in Zalta EN, Nodelman U (eds.), The Stanford Encyclopedia of Philosophy (Summer 2020 ed.), Metaphysics Research Lab, Stanford University, archived from the original on 2022-03-08, retrieved 2023-12-08
  175. ^ Kulesz, O. (2018). "Culture, Platforms and Machines". UNESCO, Paris.
  176. ^ Jr HC (1999-04-29). Information Technology and the Productivity Paradox: Assessing the Value of Investing in IT. Oxford University Press. ISBN 978-0-19-802838-3. Archived from the original on 2024-09-25. Retrieved 2024-02-21.
  177. ^ Asimov I (2008). I, Robot. New York: Bantam. ISBN 978-0-553-38256-3.
  178. ^ Bryson J, Diamantis M, Grant T (September 2017). "Of, for, and by the people: the legal lacuna of synthetic persons". Artificial Intelligence and Law. 25 (3): 273–291. doi:10.1007/s10506-017-9214-9.
  179. ^ "Principles of robotics". UK's EPSRC. September 2010. Archived from the original on 1 April 2018. Retrieved 10 January 2019.
  180. ^ Yudkowsky E (July 2004). "Why We Need Friendly AI". 3 laws unsafe. Archived from the original on May 24, 2012.
  181. ^ Aleksander I (March 2017). "Partners of Humans: A Realistic Assessment of the Role of Robots in the Foreseeable Future". Journal of Information Technology. 32 (1): 1–9. doi:10.1057/s41265-016-0032-4. ISSN 0268-3962. S2CID 5288506. Archived from the original on 2024-02-21. Retrieved 2024-02-21.
  182. ^ Evolving Robots Learn To Lie To Each Other Archived 2009-08-28 at the Wayback Machine, Popular Science, August 18, 2009
  183. ^ Bassett C, Steinmueller E, Voss G. "Better Made Up: The Mutual Influence of Science Fiction and Innovation". Nesta. Archived from the original on 3 May 2024. Retrieved 3 May 2024.
  184. ^ Velasco G (2020-05-04). "Science-Fiction: A Mirror for the Future of Humankind". IDEES. Archived from the original on 2021-04-22. Retrieved 2023-12-08.
  185. ^ "Love, Death & Robots season 2, episode 1 recap - "Automated Customer Service"". Ready Steady Cut. 2021-05-14. Archived from the original on 2021-12-21. Retrieved 2021-12-21.
  186. ^ Cave, Stephen, Dihal, Kanta, Dillon, Sarah, eds. (14 February 2020). AI narratives: a history of imaginative thinking about intelligent machines (First ed.). Oxford: Oxford University Press. ISBN 978-0-19-258604-9. OCLC 1143647559. Archived from the original on 18 March 2021. Retrieved 11 November 2020.
  187. ^ Jerreat-Poole A (1 February 2020). "Sick, Slow, Cyborg: Crip Futurity in Mass Effect". Game Studies. 20. ISSN 1604-7982. Archived from the original on 9 December 2020. Retrieved 11 November 2020.
  188. ^ ""Detroit: Become Human" Will Challenge your Morals and your Humanity". Coffee or Die Magazine. 2018-08-06. Archived from the original on 2021-12-09. Retrieved 2021-12-07.
  189. ^ Cerqui D, Warwick K (2008), "Re-Designing Humankind: The Rise of Cyborgs, a Desirable Goal?", Philosophy and Design, Dordrecht: Springer Netherlands, pp. 185–195, doi:10.1007/978-1-4020-6591-0_14, ISBN 978-1-4020-6590-3, archived from the original on 2021-03-18, retrieved 2020-11-11
  190. ^ Cave S, Dihal K (6 August 2020). "The Whiteness of AI". Philosophy & Technology. 33 (4): 685–703. doi:10.1007/s13347-020-00415-6. S2CID 225466550.
[edit]