Neats and scruffies
Neat and scruffy are labels for two different types of artificial intelligence (AI) research. Neats consider that solutions should be elegant, clear and provably correct. Scruffies believe that intelligence is too complicated (or computationally intractable) to be solved with the sorts of homogeneous systems such neat requirements usually mandate.
Much of the success in AI has come from combining neat and scruffy approaches. For example, many cognitive models matching human psychological data have been built in Soar[1] and ACT-R. Both of these systems have formal representations and execution systems, but the rules put into them to create the models are generated ad hoc.
History
The distinction was originally made by Roger Schank in the mid-1970s to characterize the difference between his work on natural language processing (which represented commonsense knowledge in the form of large amorphous semantic networks) and the work of John McCarthy, Allen Newell, Herbert A. Simon, Robert Kowalski and others whose work was based on logic and formal extensions of logic.[2] Schank has noted that he originally made this distinction in linguistics, between Chomskian and non-Chomskian approaches, but found that it applies to AI and to other fields as well.
The distinction was also partly geographical and cultural: "scruffy" was associated with AI research at MIT under Marvin Minsky in the 1960s. The laboratory was famously "freewheeling" and researchers often developed AI programs by spending long hours tweaking programs until they showed the required behavior. This practice was named "hacking" and the laboratory gave birth to the hacker culture.[3] Important and influential "scruffy" programs developed at MIT included Joseph Weizenbaum's[4] ELIZA, which behaved as if it spoke English, without any formal knowledge at all, and Terry Winograd's[5] SHRDLU, which could successfully answer queries and carry out actions in a simplified world consisting of blocks and a robot arm.[6] SHRDLU, while enormously successful, could not be scaled up into a useful natural language processing system because it had no overarching design, and maintaining a larger version of the program proved to be impossible; it was too scruffy to be extended.
Other AI laboratories (of which the largest were Stanford, Carnegie Mellon University and the University of Edinburgh) focused on logic and formal problem solving as a basis for AI. These institutions supported the work of John McCarthy, Herbert A. Simon, Allen Newell, Donald Michie, Robert Kowalski, and many other "neats".
The contrast between MIT's approach and that of other laboratories was also described as a "procedural/declarative distinction". Programs like SHRDLU were designed as agents that carried out actions; they executed "procedures". Other programs were designed as inference engines that manipulated formal statements (or "declarations") about the world and translated these manipulations into actions.
The debate reached its peak in the mid-1980s. Nils Nilsson, in his 1983 presidential address to the Association for the Advancement of Artificial Intelligence, discussed the issue, arguing that "the field needed both". He wrote: "much of the knowledge we want our programs to have can and should be represented declaratively in some kind of declarative, logic-like formalism. Ad hoc structures have their place, but most of these come from the domain itself."[7] Alex P. Pentland and Martin Fischler of MIT argued in response that "There is no question that deduction and logic-like formalisms will play an important role in AI research; however, it does not seem that they are up to the Royal role that Nils suggests. This pretender King, while not naked, appears to have a limited wardrobe."[8] Many other researchers also weighed in on one side or the other of the issue.
The scruffy approach was applied to robotics by Rodney Brooks in the mid-1980s. He advocated building robots that were, as he put it, Fast, Cheap and Out of Control (the title of a 1989 paper co-authored with Anita Flynn). Unlike earlier robots such as Shakey or the Stanford Cart, they did not build up representations of the world by analyzing visual information with algorithms drawn from mathematical machine learning techniques, and they did not plan their actions using formalizations based on logic, such as the 'Planner' language. They simply reacted to their sensors in a way that tended to help them survive and move.[9]
Doug Lenat's Cyc project, one of the oldest and most ambitious projects to capture all of human knowledge in machine-readable form, is "a determinedly scruffy enterprise" (according to Pamela McCorduck).[10] The Cyc database contains millions of facts about all the complexities of the world, each of which must be entered one at a time by knowledge engineers. Each of these entries is an ad hoc addition to the intelligence of the system. While there may be a "neat" solution to the problem of commonsense knowledge (such as machine learning algorithms with natural language processing that could study the text available over the internet), no such project has yet been successful.
New statistical and mathematical approaches to AI were developed in the 1990s, using highly developed formalisms such as Bayesian networks and mathematical optimization. This general trend towards more formal methods in AI is described as "the victory of the neats" by Peter Norvig and Stuart Russell.[11] Pamela McCorduck wrote in 2004: "As I write, AI enjoys a Neat hegemony, people who believe that machine intelligence, at least, is best expressed in logical, even mathematical terms."[12] Neat solutions have been highly successful in the 21st century and are now used throughout the technology industry. These solutions, however, have mostly been applied to specific problems with specific solutions, and the problem of general intelligence remains unsolved.
The terms "neat" and "scruffy" are rarely used by AI researchers in the 21st century, although the issue remains unresolved. "Neat" solutions to problems such as machine learning and computer vision, have become indispensable throughout the technology industry,[11] but ad-hoc and detailed solutions still dominate research into robotics and commonsense knowledge.
Typical methodologies
As might be guessed from the terms, neats use formal methods – such as logic or pure applied statistics – exclusively. Scruffies are hackers, who will cobble together a system built of anything – even logic. Neats care that their reasoning is both provably sound and complete and that their machine learning systems can be shown to converge in a known length of time. Scruffies would like their learning to converge too, but they are happier if empirical experience shows their systems working than with mere equations and proofs showing that they ought to.
To a neat, scruffy methods appear promiscuous, successful only by accident and unlikely to produce insights about how intelligence actually works. To a scruffy, neat methods appear to be hung up on formalism and to be too slow, fragile or boring to be applied to real systems.
Relation to philosophy and human intelligence
This conflict goes much deeper than computer programming practices (though it clearly has parallels in software engineering). For philosophical or possibly scientific reasons, some people believe that intelligence is fundamentally rational, and can best be represented by logical systems incorporating truth maintenance. Others believe that intelligence is best implemented as a mass of learned or evolved hacks, not necessarily having internal consistency or any unifying organizational framework.
The apparently scruffy philosophy may also turn out to be provably (under typical assumptions) optimal for many applications.[citation needed] Intelligence is often seen as a form of search,[13] and as such is not believed to be perfectly solvable in a reasonable amount of time (see also NP and Simple Heuristics,[14] commonsense reasoning, memetics, reactive planning).
It is an open question whether human intelligence is inherently scruffy or neat. Some claim that the question itself is unimportant: the famous neat John McCarthy has said publicly that he has no interest in how human intelligence works,[citation needed] while the famous scruffy Rodney Brooks is openly obsessed with creating humanoid intelligence. (See Subsumption architecture, Cog project (Brooks 2001).)
Well-known examples
Neats
Scruffies
See also
- Mathematical rigour
- Reproducibility
Notes
- ^ Newell 1990
- ^ Crevier 1993, p. 168
- ^ Crevier 1993, pp. 68–71
- ^ Weizenbaum would become a critic of AI, and would specifically single out the practice of "hacking" as "pathological". McCorduck 2004, pp. 374–376
- ^ Winograd also became a critic of early approaches to AI, arguing that intelligent machines could not be built using formal symbols exclusively, but required embodied cognition. (Winograd 1986)
- ^ McCorduck 2004, pp. 300–305; Crevier 1993, pp. 84–102; Russell & Norvig 2003, p. 19
- ^ Nils Nilsson, presidential address to AAAI in 1983, quoted in McCorduck 2004, pp. 421–422.
- ^ Pentland and Fischler 1983, quoted in McCorduck 2004, pp. 423–424
- ^ McCorduck 2004, pp. 454–459
- ^ McCorduck 2004, p. 489
- ^ a b Russell & Norvig 2003, pp. 25–26
- ^ McCorduck 2004, p. 487
- ^ Winston 1992
- ^ Gigerenzer & Todd 1999
References
- Crevier, Daniel (1993). AI: The Tumultuous History of the Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
- Newell, Allen (1990). Unified Theories of Cognition. Cambridge, Massachusetts: Harvard University Press.
- McCorduck, Pamela (2004). Machines Who Think (2nd ed.). Natick, Massachusetts: A. K. Peters. ISBN 1-56881-205-1.
- Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.
- Winston, Patrick (1992). Artificial Intelligence. Addison Wesley. ISBN 978-0-201-53377-4.
- Gigerenzer, Gerd; Todd, Peter M.; ABC Research Group (1999). Simple Heuristics That Make Us Smart. Oxford University Press. ISBN 9780199729241.
Further reading
- Anderson, John R. (2005). "Human symbol manipulation within an integrated cognitive architecture". Cognitive Science. 29 (3): 313–341. doi:10.1207/s15516709cog0000_22. PMID 21702777.
- Brooks, Rodney A. (2001-01-18). "The Relationship Between Matter and Life". Nature. 409 (6818): 409–411. doi:10.1038/35053196. PMID 11201756.
- This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.