Neats and scruffies
{{short description|Forms of artificial intelligence research}} |
In the [[history of artificial intelligence]] (AI), '''neat''' and '''scruffy''' are two contrasting approaches to AI research. The distinction was made in the 1970s, and was a subject of discussion until the mid-1980s.{{sfn|McCorduck|2004|pp=421–424, 486–489}}{{sfn|Crevier|1993|p=168}}{{sfn|Nilsson|1983|pp=10–11}} |
|||
"Neats" use algorithms based on a single formal paradigm, such as [[logic]], [[mathematical optimization]], or [[Neural network (machine learning)|neural networks]]. Neats verify their programs are correct via rigorous mathematical theory. Neat researchers and analysts tend to express the hope that this single formal paradigm can be extended and improved in order to achieve [[Artificial general intelligence|general intelligence]] and [[superintelligence]]. |
|||
"Scruffies" use any number of different algorithms and methods to achieve intelligent behavior, and rely on incremental testing to verify their programs. Scruffy programming requires large amounts of [[hand coding]] and [[knowledge engineering]]. Scruffy experts have argued that general intelligence can only be implemented by solving a large number of essentially unrelated problems, and that there is no [[silver bullet]] that will allow programs to develop general intelligence autonomously. |
|||
[[John Brockman (literary agent)|John Brockman]] compares the neat approach to [[physics]], in that it uses simple mathematical models as its foundation. The scruffy approach is more [[Biology|biological]], in that much of the work involves studying and categorizing diverse phenomena.{{efn|name="chomsky"|[[John Brockman (literary agent)|John Brockman]] writes "Chomsky has always adopted the physicist's philosophy of science, which is that you have hypotheses you check out, and that you could be wrong. This is absolutely antithetical to the AI philosophy of science, which is much more like the way a biologist looks at the world. The biologist's philosophy of science says that human beings are what they are, you find what you find, you try to understand it, categorize it, name it, and organize it. If you build a model and it doesn't work quite right, you have to fix it. It's much more of a "discovery" view of the world."{{sfn|Brockman|1996|loc=[https://www.edge.org/conversation/information-is-surprises Chapter 9: Information is Surprises]}}}}
|||
Modern AI has elements of both scruffy and neat approaches. Scruffy AI researchers in the 1990s applied mathematical rigor to their programs, as neat experts did.{{sfn|Russell|Norvig|2021|p=24}}{{sfn|McCorduck|2004|p=487}} They also express the hope that there is a single paradigm (a "master algorithm") that will cause general intelligence and superintelligence to emerge.{{sfn|Domingos|2015}} But modern AI also resembles the scruffies:{{sfn|Russell|Norvig|2021|p=26}} modern [[machine learning]] applications require a great deal of hand-tuning and incremental testing; while the general algorithm is mathematically rigorous, accomplishing the specific goals of a particular application is not. Also, in the early 2000s, the field of [[software development]] embraced [[extreme programming]], which is a modern version of the scruffy methodology: try things and test them, without wasting time looking for more elegant or general solutions. |
|||
==Origin in the 1970s== |
|||
The distinction between neat and scruffy was first drawn in the mid-1970s by [[Roger Schank]]. Schank used the terms to distinguish his work on [[natural language processing]] (which represented [[Commonsense knowledge (artificial intelligence)|commonsense knowledge]] in the form of large amorphous [[semantic networks]]) from the work of [[John McCarthy (computer scientist)|John McCarthy]], [[Allen Newell]], [[Herbert A. Simon]], [[Robert Kowalski]] and others whose work was based on logic and formal extensions of logic.{{sfn|Crevier|1993|p=168}} Schank described himself as an AI scruffy. He made the same distinction in linguistics, arguing strongly against Chomsky's view of language.{{efn|name="chomsky"}}
|||
The distinction was also partly geographical and cultural: "scruffy" attributes were exemplified by AI research at [[MIT]] under [[Marvin Minsky]] in the 1970s. The laboratory was famously "freewheeling" and researchers often developed AI programs by spending long hours fine-tuning programs until they showed the required behavior. Important and influential "scruffy" programs developed at MIT included [[Joseph Weizenbaum]]'s [[ELIZA]], which behaved as if it spoke English, without any formal knowledge at all, and [[Terry Winograd]]'s{{efn|Winograd later became a critic of early approaches to AI, arguing that intelligent machines could not be built using formal symbols exclusively, but required [[embodied cognition]].{{sfn|Winograd|Flores|1986}}}} [[SHRDLU]], which could successfully answer queries and carry out actions in a simplified world consisting of blocks and a robot arm.{{sfn|Crevier|1993|pp=84−102}}{{sfn|Russell|Norvig|2021|p=20}} SHRDLU, while successful, could not be scaled up into a useful natural language processing system because it lacked a structured design; maintaining a larger version of the program proved impossible, that is, it was too scruffy to be extended.
|||
Other AI laboratories (of which the largest were [[Stanford]], [[Carnegie Mellon University]] and the [[University of Edinburgh]]) focused on logic and formal problem solving as a basis for AI. These institutions supported the work of John McCarthy, Herbert Simon, Allen Newell, [[Donald Michie]], Robert Kowalski, and other "neats". |
|||
The contrast between [[MIT]]'s approach and other laboratories was also described as a "procedural/declarative distinction". Programs like SHRDLU were designed as agents that carried out actions. They executed "procedures". Other programs were designed as inference engines that manipulated formal statements (or "declarations") about the world and translated these manipulations into actions. |
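The procedural/declarative contrast can be illustrated with a small hypothetical sketch; the functions and facts below are invented for this example and are not taken from SHRDLU, Planner, or any other historical system.

<syntaxhighlight lang="python">
# Hypothetical sketch of the procedural/declarative contrast; not code from
# SHRDLU, Planner, or any other historical system.

# Procedural style: knowledge about the blocks world is embedded directly
# in the steps of a routine that changes the world.
def put_on(world, block, target):
    """Move `block` onto `target`, updating the world state in place."""
    if world["clear"].get(block) and world["clear"].get(target):
        world["on"][block] = target
        world["clear"][target] = False

world = {"clear": {"a": True, "b": True}, "on": {"a": "table", "b": "table"}}
put_on(world, "a", "b")

# Declarative style: the same knowledge is stored as statements about the
# world, and a generic inference step decides what follows from them.
facts = {("clear", "a"), ("clear", "b"), ("on", "a", "table")}

def derive_may_stack(facts):
    """Infer ("may_stack", x, y) for every ordered pair of clear blocks."""
    clear = [f[1] for f in facts if f[0] == "clear"]
    return {("may_stack", x, y) for x in clear for y in clear if x != y}

facts |= derive_may_stack(facts)   # the "inference engine" adds new statements
print(facts)
</syntaxhighlight>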
|||
In his 1983 presidential address to the [[Association for the Advancement of Artificial Intelligence]], [[Nils Nilsson (researcher)|Nils Nilsson]] discussed the issue, arguing that "the field needed both". He wrote: "much of the knowledge we want our programs to have can and should be represented declaratively in some kind of declarative, logic-like formalism. Ad hoc structures have their place, but most of these come from the domain itself." Alex P. Pentland and Martin Fischler of [[SRI International]] concurred about the anticipated role of deduction and logic-like formalisms in future AI research, but not to the extent that Nilsson described.<ref>Pentland and Fischler 1983, quoted in {{Harvnb|McCorduck|2004|pp=421–424}}</ref>
|||
==Scruffy projects in the 1980s== |
|||
The scruffy approach was applied to robotics by [[Rodney Brooks]] in the mid-1980s. He advocated building robots that were, as he put it, [[Fast, Cheap and Out of Control]], the title of a 1989 paper co-authored with Anita Flynn. Unlike earlier robots such as [[Shakey the robot|Shakey]] or the Stanford cart, they did not build up representations of the world by analyzing visual information with algorithms drawn from mathematical [[machine learning]] techniques, and they did not plan their actions using formalizations based on logic, such as the '[[Planner programming language|Planner]]' language. They simply reacted to their sensors in a way that tended to help them survive and move.{{sfn|McCorduck|2004|pp=454–459}} |
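A toy version of this reactive style might look like the loop below. It is an illustrative sketch rather than code from Brooks's robots; <code>read_sonar</code> and <code>set_motors</code> are invented stand-ins for a robot's sensor and motor interfaces.

<syntaxhighlight lang="python">
import random

# Toy reactive controller in the spirit of Brooks's robots; purely
# illustrative.  read_sonar() and set_motors() are invented stand-ins for
# whatever sensor and actuator interfaces a real platform would provide.

def read_sonar():
    """Pretend distance reading in metres; a real robot would query hardware."""
    return random.uniform(0.1, 3.0)

def set_motors(left, right):
    print(f"motors: left={left:+.1f} right={right:+.1f}")

def control_step():
    distance = read_sonar()
    if distance < 0.5:           # "avoid" behaviour takes priority
        set_motors(-0.5, 0.5)    # spin in place to turn away from the obstacle
    else:                        # "wander" behaviour is the default
        set_motors(1.0, 1.0)     # drive straight ahead

for _ in range(5):               # no world model, no plan: just sense and react
    control_step()
</syntaxhighlight>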
|||
[[Douglas Lenat]]'s [[Cyc]] project, initiated in 1984, was one of the earliest and most ambitious projects to capture all of human knowledge in machine-readable form, and is "a determinedly scruffy enterprise".{{sfn|McCorduck|2004|p=489}} The Cyc database contains millions of facts about all the complexities of the world, each of which must be entered one at a time, by knowledge engineers. Each of these entries is an ad hoc addition to the intelligence of the system. While there may be a "neat" solution to the problem of commonsense knowledge (such as machine learning algorithms with natural language processing that could study the text available over the internet), no such project has yet been successful.
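The labor-intensive character of this kind of knowledge engineering can be suggested with a toy sketch; the predicate and constant names below are invented for illustration and are not CycL or actual Cyc content.

<syntaxhighlight lang="python">
# Toy illustration of hand-entered commonsense assertions; the predicate and
# constant names are invented, not drawn from the actual Cyc knowledge base.
knowledge_base = set()

def assert_fact(predicate, *args):
    """Add one hand-written assertion to the knowledge base."""
    knowledge_base.add((predicate, args))

# Every everyday fact is a separate entry, typed in by a knowledge engineer.
assert_fact("isa", "Rover", "Dog")
assert_fact("isa", "Dog", "Mammal")
assert_fact("covered_by", "Mammal", "Fur")
assert_fact("can_not", "Dog", "ClimbTree")
# ...and millions more entries of the same kind are needed for broad coverage.
print(len(knowledge_base), "facts so far")
</syntaxhighlight>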
|||
== The Society of Mind == |
|||
{{Main|Society of Mind}} |
|||
In 1986, [[Marvin Minsky]] published ''The Society of Mind'', which advocated a view of [[intelligence]] and the [[mind]] as an interacting community of [[modularity|modules]] or [[intelligent agent|agents]] that each handled different aspects of cognition, where some modules were specialized for very specific tasks (e.g. [[edge detection]] in the visual cortex) and other modules were specialized to manage communication and prioritization (e.g. [[planning]] and [[attention]] in the frontal lobes). Minsky presented this paradigm both as a model of biological human intelligence and as a blueprint for future work in AI.
|||
This paradigm is explicitly "scruffy" in that it does not expect there to be a single algorithm that can be applied to all of the tasks involved in intelligent behavior.{{sfn|Crevier|1993|p=254}} Minsky wrote: |
|||
{{quote|What magical trick makes us intelligent? The trick is that there is no trick. The power of intelligence stems from our vast diversity, not from any single, perfect principle.{{sfn|Minsky|1986|p=308}}}} |
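A toy sketch of such a "society" might dispatch a task through several small, specialized agents, none of which is intelligent on its own; the agents below are invented for illustration and are not taken from Minsky's book.

<syntaxhighlight lang="python">
# Toy "society of agents": each agent handles one narrow job and a simple
# coordinator chains their results.  Purely illustrative.

def edge_agent(image_row):
    """Crude 'edge detector': brightness change between neighbouring pixels."""
    return [abs(a - b) for a, b in zip(image_row, image_row[1:])]

def attention_agent(edge_strengths):
    """Attends to the position with the sharpest brightness change."""
    return max(range(len(edge_strengths)), key=lambda i: edge_strengths[i])

def planner_agent(focus_position):
    """Turns the chosen focus into a (trivial) action."""
    return f"look at position {focus_position}"

def society(image_row):
    edges = edge_agent(image_row)      # perception agent
    focus = attention_agent(edges)     # prioritization agent
    return planner_agent(focus)        # action agent

print(society([0, 0, 50, 52, 51, 0, 0]))   # -> "look at position 4"
</syntaxhighlight>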
|||
As of 1991, Minsky was still publishing papers evaluating the relative advantages of the neat versus scruffy approaches, e.g. "Logical Versus Analogical or Symbolic Versus Connectionist or Neat Versus Scruffy".{{sfn|Lehnert|1994}}
|||
== Modern AI as both neat and scruffy == |
|||
New [[Artificial intelligence#Statistical|statistical]] and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as [[optimization (mathematics)|mathematical optimization]] and [[Artificial neural network|neural networks]]. [[Pamela McCorduck]] wrote that "As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms."{{sfn|McCorduck|2004|p=487}} This general trend towards more formal methods in AI was described as "the victory of the neats" by [[Peter Norvig]] and [[Stuart J. Russell|Stuart Russell]] in 2003.{{sfn|Russell|Norvig|2003|p=25−26}} |
|||
However, by 2021, Russell and Norvig had changed their minds.{{sfn|Russell|Norvig|2021|p=23}} Deep learning networks and machine learning in general require extensive fine-tuning: they must be iteratively tested until they begin to show the desired behavior. This is a scruffy methodology.
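The kind of incremental tuning they describe can be sketched generically as trial-and-error search over hyperparameters; <code>train_model</code> and <code>validation_score</code> below are made-up placeholders rather than a real library API.

<syntaxhighlight lang="python">
import random

# Generic illustration of hand-tuning around a formally specified learner:
# the training procedure itself is fixed, but its hyperparameters are chosen
# by repeated trial and evaluation.  train_model() and validation_score()
# are made-up placeholders, not a real library API.

def train_model(learning_rate, layers):
    return {"lr": learning_rate, "layers": layers}   # placeholder "model"

def validation_score(model):
    # Pretend evaluation; a real project would measure accuracy on held-out data.
    return random.random() - abs(model["lr"] - 0.01) - 0.01 * model["layers"]

best_model, best_score = None, float("-inf")
for lr in (0.1, 0.01, 0.001):         # tweak, test, keep whatever works best
    for layers in (2, 4, 8):
        model = train_model(lr, layers)
        score = validation_score(model)
        if score > best_score:
            best_model, best_score = model, score
print("kept configuration:", best_model)
</syntaxhighlight>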
|||
==Well-known examples== |
|||
<!-- See citations in the "history" section --> |
|||
'''Neats'''
* [[John McCarthy (computer scientist)|John McCarthy]]
* [[Allen Newell]]
* [[Herbert A. Simon]]
* [[Edward Feigenbaum]]
* [[Robert Kowalski]]
* [[Judea Pearl]]
* [[David McAllester]]
* [[Daphne Koller]]

'''Scruffies'''
* [[Rodney Brooks]]
* [[Terry Winograd]]
* [[Marvin Minsky]]
* [[Roger Schank]]
* [[Douglas Lenat|Doug Lenat]]
|||
==See also==
* [[History of artificial intelligence]]
* [[Soft computing]]
* [[Symbolic AI]]
* [[Philosophy of artificial intelligence]]
|||
==Notes== |
|||
{{notelist}} |
|||
==Citations== |
|||
{{reflist}} |
|||
==References== |
|||
* {{cite book |last1=Brockman |first1=John |title=Third Culture: Beyond the Scientific Revolution |date=7 May 1996 |publisher=Simon and Schuster |url=https://www.amazon.com/exec/obidos/ASIN/0684823446/qid=913732847/sr=1-3/002-6796862-3062667 |access-date=2 August 2021 |language=en}}
* {{Crevier 1993}}.
* {{cite book |last=Domingos |first=Pedro |author-link=Pedro Domingos |title=The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World |date=22 September 2015 |publisher=[[Basic Books]] |isbn=978-0465065707}}
* {{cite book |last=Lehnert |first=Wendy C. |editor1-last=Schank |editor1-first=Robert |editor2-last=Langer |editor2-first=Ellen |title=Beliefs, Reasoning, and Decision Making: Psycho-Logic in Honor of Bob Abelson |date=1 May 1994 |publisher=Taylor & Francis Group |location=New York, NY |page=150 |edition=First |url=https://www.taylorfrancis.com/books/edit/10.4324/9780203773574/beliefs-reasoning-decision-making-roger-schank-ellen-langer |access-date=2 August 2021 |language=en |chapter=5: Cognition, Computers, and Car Bombs: How Yale Prepared Me for the 90’s |doi=10.4324/9780203773574 |isbn=9781134781621}}
* {{cite book |last1=Minsky |first1=Marvin |title=The Society of Mind |date=1986 |publisher=Simon & Schuster |location=New York |isbn=0-671-60740-5 |url=https://archive.org/details/societyofmind00marv}}
* {{McCorduck 2004}}.
<!-- Note: this article needs both new and old editions of R&N, because it discusses how they changed their minds -->
* {{Cite book |first1=Stuart J. |last1=Russell |author1-link=Stuart J. Russell |first2=Peter |last2=Norvig |author2-link=Peter Norvig |title=Artificial Intelligence: A Modern Approach |year=2003 |edition=2nd |publisher=Prentice Hall |publication-place=Upper Saddle River, New Jersey |isbn=0-13-790395-2}}
* {{Cite book |first1=Stuart J. |last1=Russell |author1-link=Stuart J. Russell |first2=Peter |last2=Norvig |author2-link=Peter Norvig |title=[[Artificial Intelligence: A Modern Approach]] |year=2021 |edition=4th |publisher=Pearson |location=Hoboken |isbn=9780134610993 |lccn=20190474}}
* {{cite book |last1=Winograd |first1=Terry |last2=Flores |first2=Fernando |year=1986 |title=Understanding Computers and Cognition: A New Foundation for Design |publisher=Ablex Publ Corp}}
|||
==Further reading== |
|||
* {{cite journal |ref=none |last=Anderson |first=John R. |year=2005 |title=Human symbol manipulation within an integrated cognitive architecture |journal=Cognitive Science |volume=29 |issue=3 |pages=313–341 |doi=10.1207/s15516709cog0000_22 |pmid=21702777 |doi-access=free}}
* {{cite journal |ref=none |last=Brooks |first=Rodney A. |date=2001-01-18 |title=The Relationship Between Matter and Life |journal=Nature |volume=409 |issue=6818 |pages=409–411 |bibcode=2001Natur.409..409B |doi=10.1038/35053196 |pmid=11201756 |s2cid=4430614 |doi-access=free}}
|||
{{DEFAULTSORT:Neats Vs. Scruffies}} |
|||
|||
[[Category:Philosophy of artificial intelligence]] |
|||
{{FOLDOC}} |
|||
[[Category:Scientific comparison]] |
|||
[[Category:Artificial intelligence researchers]] |
|||
[[Category:Artificial intelligence]] |
- Brooks, Rodney A. (2001-01-18). "The Relationship Between Matter and Life". Nature. 409 (6818): 409–411. Bibcode:2001Natur.409..409B. doi:10.1038/35053196. PMID 11201756. S2CID 4430614.