Basics

Computer science defines AI research as the study of "intelligent agents": any device that perceives its environment and takes actions that maximize its chance of successfully achieving its goals.[a] A more elaborate definition characterizes AI as "a system's ability to correctly interpret external data, to learn from such data, and to use those learnings to achieve specific goals and tasks through flexible adaptation."[1]

A typical AI analyzes its environment and takes actions that maximize its chance of success.[a] An AI's intended utility function (or goal) can be simple ("1 if the AI wins a game of Go, 0 otherwise") or complex ("Perform actions mathematically similar to ones that succeeded in the past"). Goals can be explicitly defined or induced. If the AI is programmed for "reinforcement learning", goals can be implicitly induced by rewarding some types of behavior or punishing others.[b] Alternatively, an evolutionary system can induce goals by using a "fitness function" to mutate and preferentially replicate high-scoring AI systems, similar to how animals evolved to innately desire certain goals such as finding food.[2] Some AI systems, such as nearest-neighbor, instead reason by analogy; these systems are not generally given goals, except to the degree that goals are implicit in their training data.[3] Such systems can still be benchmarked if the non-goal system is framed as a system whose "goal" is to accomplish its narrow classification task.[4]
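To make the distinction between an explicit goal and an implicitly induced one concrete, the following minimal Python sketch places a Go-style utility function next to a reinforcement-learning-style reward signal. The function names, arguments, and reward values are illustrative assumptions of this sketch, not taken from any cited system.

    # Illustrative sketch only: two ways a goal can be specified.

    def go_utility(winner, player="black"):
        """Explicit utility: 1 if the AI wins the game of Go, 0 otherwise."""
        return 1 if winner == player else 0

    def shaped_reward(previous_distance, current_distance):
        """Implicitly induced goal, reinforcement-learning style: reward
        behaviour that moves the agent closer to a target, punish behaviour
        that moves it away, and stay neutral otherwise."""
        if current_distance < previous_distance:
            return 1.0    # reward: progress toward the goal
        if current_distance > previous_distance:
            return -1.0   # punishment: moving away from the goal
        return 0.0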

AI often revolves around the use of algorithms. An algorithm is a set of unambiguous instructions that a mechanical computer can execute.[c] A complex algorithm is often built on top of other, simpler algorithms. A simple example of an algorithm is the following recipe (optimal for the first player) for playing tic-tac-toe (a code sketch follows the list):[5]

  1. If someone has a "threat" (that is, two in a row), take the remaining square. Otherwise,
  2. if a move "forks" to create two threats at once, play that move. Otherwise,
  3. take the center square if it is free. Otherwise,
  4. if your opponent has played in a corner, take the opposite corner. Otherwise,
  5. take an empty corner if one exists. Otherwise,
  6. take any empty square.
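The code sketch referred to above is given here in Python. The board encoding (a list of nine cells holding "X", "O", or None) and the helper names are this sketch's own assumptions; the logic follows the numbered recipe directly.

    # Illustrative Python sketch of the recipe above.  The board is a list of
    # nine cells (indices 0-8, read left to right, top to bottom), each holding
    # "X", "O", or None.

    LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
             (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
             (0, 4, 8), (2, 4, 6)]              # diagonals

    def winning_move(board, player):
        """Square that completes two-in-a-row for `player`, or None (rule 1)."""
        for a, b, c in LINES:
            cells = [board[a], board[b], board[c]]
            if cells.count(player) == 2 and cells.count(None) == 1:
                return (a, b, c)[cells.index(None)]
        return None

    def forking_move(board, player):
        """Square that creates two threats at once for `player`, or None (rule 2)."""
        for square in range(9):
            if board[square] is None:
                trial = board[:]
                trial[square] = player
                threats = sum(1 for a, b, c in LINES
                              if [trial[a], trial[b], trial[c]].count(player) == 2
                              and [trial[a], trial[b], trial[c]].count(None) == 1)
                if threats >= 2:
                    return square
        return None

    def choose_move(board, player, opponent):
        # Rule 1: complete our own threat, or block the opponent's.
        for who in (player, opponent):
            square = winning_move(board, who)
            if square is not None:
                return square
        # Rule 2: create a fork (two threats at once).
        square = forking_move(board, player)
        if square is not None:
            return square
        # Rule 3: take the centre if it is free.
        if board[4] is None:
            return 4
        # Rule 4: take the corner opposite a corner the opponent has played.
        for corner, opposite in ((0, 8), (2, 6), (6, 2), (8, 0)):
            if board[corner] == opponent and board[opposite] is None:
                return opposite
        # Rules 5 and 6: any empty corner, then any empty square.
        for square in (0, 2, 6, 8):
            if board[square] is None:
                return square
        return next(s for s in range(9) if board[s] is None)

For example, choose_move(["X", None, None, None, "O", None, None, None, None], "O", "X") returns 8: there are no threats or forks, the centre is already taken, so rule 4 selects the corner opposite the opponent's corner.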

Many AI algorithms are capable of learning from data; they can enhance themselves by learning new heuristics (strategies, or "rules of thumb", that have worked well in the past), or can themselves write other algorithms. Some of the "learners" described below, including Bayesian networks, decision trees, and nearest-neighbor, could theoretically (given infinite data, time, and memory) learn to approximate any function, including which combination of mathematical functions would best describe the world.[citation needed] These learners could therefore derive all possible knowledge, by considering every possible hypothesis and matching it against the data. In practice, it is seldom possible to consider every possibility, because of the phenomenon of "combinatorial explosion", where the time needed to solve a problem grows exponentially with the size of the problem. Much of AI research involves figuring out how to identify and avoid considering a broad range of possibilities unlikely to be beneficial.[6][7] For example, when viewing a map and looking for the shortest driving route from Denver to New York in the East, one can in most cases skip looking at any path through San Francisco or other areas far to the West; thus, an AI wielding a pathfinding algorithm like A* can avoid the combinatorial explosion that would ensue if every possible route had to be considered in turn.[8]
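The kind of pruning described above can be illustrated with a compact A* search in Python. The toy road graph, the mileage figures, and the heuristic table below are invented for illustration and are not taken from the cited sources; the point is that the heuristic steers the search east, so the branch through Salt Lake City and San Francisco is never expanded.

    import heapq

    def a_star(graph, heuristic, start, goal):
        """Find a cheapest path from start to goal.

        graph: dict mapping node -> list of (neighbour, edge_cost) pairs.
        heuristic: dict mapping node -> optimistic estimate of the remaining cost.
        """
        frontier = [(heuristic[start], 0, start, [start])]  # (estimated total, cost so far, node, path)
        best_cost = {start: 0}
        while frontier:
            _, cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, cost
            for neighbour, edge_cost in graph.get(node, []):
                new_cost = cost + edge_cost
                if new_cost < best_cost.get(neighbour, float("inf")):
                    best_cost[neighbour] = new_cost
                    estimate = new_cost + heuristic[neighbour]
                    heapq.heappush(frontier, (estimate, new_cost, neighbour, path + [neighbour]))
        return None, float("inf")

    # Toy road map; the mileage figures and heuristic values are made up.
    graph = {
        "Denver": [("Kansas City", 600), ("Salt Lake City", 520)],
        "Kansas City": [("Chicago", 510)],
        "Chicago": [("New York", 790)],
        "Salt Lake City": [("San Francisco", 740)],
        "San Francisco": [],
    }
    heuristic = {"Denver": 1600, "Kansas City": 1100, "Chicago": 710,
                 "Salt Lake City": 1950, "San Francisco": 2570, "New York": 0}

    print(a_star(graph, heuristic, "Denver", "New York"))
    # (['Denver', 'Kansas City', 'Chicago', 'New York'], 1900)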

The earliest (and easiest to understand) approach to AI was symbolism (such as formal logic): "If an otherwise healthy adult has a fever, then they may have influenza". A second, more general, approach is Bayesian inference: "If the current patient has a fever, adjust the probability they have influenza in such-and-such way". The third major approach, extremely popular in routine business AI applications, is the use of analogizers such as SVM and nearest-neighbor: "After examining the records of known past patients whose temperature, symptoms, age, and other factors mostly match the current patient, X% of those patients turned out to have influenza". A fourth approach is harder to intuitively understand, but is inspired by how the brain's machinery works: the artificial neural network approach uses artificial "neurons" that can learn by comparing the network's output with the desired output and altering the strengths of the connections between its internal neurons to "reinforce" connections that seemed to be useful. These four main approaches can overlap with each other and with evolutionary systems; for example, neural nets can learn to make inferences, to generalize, and to make analogies. Some systems implicitly or explicitly use several of these approaches, alongside many other AI and non-AI algorithms; the best approach often differs depending on the problem.[9][10]
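A minimal sketch of the "analogizer" approach follows, assuming an invented set of past patient records encoded as (temperature in °C, days of symptoms, age): a k-nearest-neighbour classifier diagnoses a new patient by majority vote among the most similar past cases. The records, encoding, and function names are this sketch's own assumptions.

    import math
    from collections import Counter

    def knn_predict(records, query, k=3):
        """Classify `query` by majority vote among the k most similar past records."""
        neighbours = sorted(
            (math.dist(features, query), label) for features, label in records)
        votes = Counter(label for _, label in neighbours[:k])
        return votes.most_common(1)[0][0]

    # Invented past patients: (temperature in °C, days of symptoms, age) -> diagnosis.
    # (A real system would normalise the features so no single one dominates the distance.)
    past_patients = [
        ((39.1, 2, 34), "influenza"),
        ((38.7, 3, 41), "influenza"),
        ((36.9, 1, 29), "not influenza"),
        ((37.2, 5, 52), "not influenza"),
        ((38.9, 2, 37), "influenza"),
    ]

    print(knn_predict(past_patients, (38.8, 2, 36)))   # -> 'influenza'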

Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as "since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well". They can be nuanced, such as "X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist". Learners also work on the basis of "Occam's razor": the simplest theory that explains the data is the likeliest. Therefore, a learner must be designed so that it prefers simpler theories to complex theories, except in cases where the complex theory is proven substantially better.

The blue line could be an example of overfitting a linear function due to random noise.

Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is.[11] Besides classic overfitting, learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses.[12] A real-world example is that, unlike humans, current image classifiers often don't primarily make judgments from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies.[d][13][14]
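One common way to implement the reward-versus-penalty trade-off described above is to score each candidate theory by its fit to the data minus a term that grows with its complexity. The Python sketch below does this for polynomial models of increasing degree fitted to noisy, truly linear data; the penalty weight of 0.05 per degree is an arbitrary illustrative choice, not a value from the cited sources.

    import numpy as np

    rng = np.random.default_rng(0)
    x = np.linspace(0, 1, 20)
    y = 2.0 * x + 0.5 + rng.normal(scale=0.1, size=x.size)   # truly linear data plus noise

    def penalized_score(degree, penalty=0.05):
        """Mean squared fit error plus a penalty that grows with model complexity."""
        coefficients = np.polyfit(x, y, degree)
        residuals = y - np.polyval(coefficients, x)
        return np.mean(residuals ** 2) + penalty * degree

    for degree in (1, 3, 9):
        print(degree, round(penalized_score(degree), 4))
    # The higher-degree polynomials fit the training points slightly more closely,
    # but the complexity penalty makes the simple linear theory the preferred one.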

A self-driving car system may use a neural network to determine which parts of the picture seem to match previous training images of pedestrians, and then model those areas as slow-moving but somewhat unpredictable rectangular prisms that must be avoided.

Compared with humans, existing AI lacks several features of human "commonsense reasoning"; most notably, humans have powerful mechanisms for reasoning about "naïve physics" such as space, time, and physical interactions. This enables even young children to easily make inferences like "If I roll this pen off a table, it will fall on the floor". Humans also have a powerful mechanism of "folk psychology" that helps them to interpret natural-language sentences such as "The city councilmen refused the demonstrators a permit because they advocated violence" (a generic AI has difficulty discerning whether the ones alleged to be advocating violence are the councilmen or the demonstrators[15][16][17]). This lack of "common knowledge" means that AI often makes different mistakes than humans make, in ways that can seem incomprehensible. For example, existing self-driving cars cannot reason about the location or the intentions of pedestrians in the exact way that humans do, and instead must use non-human modes of reasoning to avoid accidents.[18][19][20]

Notes

  1. ^ Kaplan, Andreas; Haenlein, Michael (1 January 2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62 (1): 15–25. doi:10.1016/j.bushor.2018.08.004.
  2. ^ Domingos 2015, Chapter 5.
  3. ^ Domingos 2015, Chapter 7.
  4. ^ Lindenbaum, M., Markovitch, S., & Rusakov, D. (2004). Selective sampling for nearest neighbor classifiers. Machine learning, 54(2), 125–152.
  5. ^ Domingos 2015, Chapter 1.
  6. ^ Cite error: The named reference Intractability was invoked but never defined (see the help page).
  7. ^ Domingos 2015, Chapter 2, Chapter 3.
  8. ^ Hart, P. E.; Nilsson, N. J.; Raphael, B. (1972). "Correction to "A Formal Basis for the Heuristic Determination of Minimum Cost Paths"". SIGART Newsletter (37): 28–29. doi:10.1145/1056777.1056779. S2CID 6386648.
  9. ^ Domingos 2015, Chapter 2, Chapter 4, Chapter 6.
  10. ^ "Can neural network computers learn from experience, and if so, could they ever become what we would call 'smart'?". Scientific American. 2018. Archived from the original on 25 March 2018. Retrieved 24 March 2018.
  11. ^ Domingos 2015, Chapter 6, Chapter 7.
  12. ^ Domingos 2015, p. 286.
  13. ^ "Single pixel change fools AI programs". BBC News. 3 November 2017. Archived from the original on 22 March 2018. Retrieved 12 March 2018.
  14. ^ "AI Has a Hallucination Problem That's Proving Tough to Fix". WIRED. 2018. Archived from the original on 12 March 2018. Retrieved 12 March 2018.
  15. ^ "Cultivating Common Sense | DiscoverMagazine.com". Discover Magazine. 2017. Archived from the original on 25 March 2018. Retrieved 24 March 2018.
  16. ^ Davis, Ernest; Marcus, Gary (24 August 2015). "Commonsense reasoning and commonsense knowledge in artificial intelligence". Communications of the ACM. 58 (9): 92–103. doi:10.1145/2701413. S2CID 13583137. Archived from the original on 22 August 2020. Retrieved 6 April 2020.
  17. ^ Winograd, Terry (January 1972). "Understanding natural language". Cognitive Psychology. 3 (1): 1–191. doi:10.1016/0010-0285(72)90002-3.
  18. ^ "Don't worry: Autonomous cars aren't coming tomorrow (or next year)". Autoweek. 2016. Archived from the original on 25 March 2018. Retrieved 24 March 2018.
  19. ^ Knight, Will (2017). "Boston may be famous for bad drivers, but it's the testing ground for a smarter self-driving car". MIT Technology Review. Archived from the original on 22 August 2020. Retrieved 27 March 2018.
  20. ^ Prakken, Henry (31 August 2017). "On the problem of making autonomous vehicles conform to traffic law". Artificial Intelligence and Law. 25 (3): 341–363. doi:10.1007/s10506-017-9210-0.

References


Domingos, Pedro (2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0-465-06192-1.