Neats and scruffies
In artificial intelligence, the labels neats and scruffies are used to refer to one of the continuing holy wars in artificial intelligence research.
As might be guessed from the terms, neats use formal methods – such as logic or pure applied statistics – exclusively. Scruffies are hackers, who will cobble together a system built of anything – even logic. Neats care that their reasoning is provably sound and complete and that their machine learning systems can be shown to converge in a known length of time. Scruffies would like their learning to converge too, but they are happier when empirical experience shows their systems working than when mere equations and proofs say that they ought to.
This conflict goes much deeper than programming practices (though it clearly has parallels in software engineering). For philosophical or possibly scientific reasons, some people believe that human intelligence is fundamentally rational, and can best be represented by logical systems incorporating truth maintenance. Others believe that human intelligence consists of a mass of learned or evolved hacks, not necessarily having internal consistency or any unifying organizational framework. Ironically, this apparently scruffy philosophy may also turn out to be provably optimal, because intelligence is a form of search, and as such cannot generally be solved perfectly in a reasonable amount of time (see also NP, Simple Heuristics That Make Us Smart (Gigerenzer & Todd, 1999), commonsense reasoning, memetics, and reactive planning).
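As a toy illustration of that trade-off (a hypothetical example in Python, not taken from any cited system), compare an exhaustive "neat" route search, which is provably optimal but takes factorial time, with a "scruffy" nearest-neighbour heuristic, which runs quickly and usually does well but carries no guarantee:

```python
# Hypothetical toy example: exact vs. heuristic tour search.
# The "neat" solver enumerates every permutation and is provably optimal,
# but its running time grows factorially with the number of cities.
# The "scruffy" nearest-neighbour hack is fast and usually good,
# yet offers no optimality guarantee -- the trade-off described above.
from itertools import permutations
import math

cities = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (2, 3)}

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def tour_length(order):
    # Total length of the closed tour visiting the cities in the given order.
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def neat_exact_tour():
    """Exhaustive search: sound, complete, optimal -- and O(n!)."""
    return min(permutations(cities), key=tour_length)

def scruffy_greedy_tour(start="A"):
    """Nearest-neighbour heuristic: fast, but no optimality guarantee."""
    unvisited = set(cities) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

if __name__ == "__main__":
    exact = neat_exact_tour()
    greedy = scruffy_greedy_tour()
    print("exact  :", exact, round(tour_length(exact), 2))
    print("greedy :", greedy, round(tour_length(greedy), 2))
```

Even on this tiny instance the heuristic may return a longer tour than the exact search; as the number of cities grows, the exhaustive search becomes infeasible long before the heuristic does, which is exactly the pressure toward scruffy methods described above.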
To further tangle the issue, some AI practitioners do not care about human intelligence. Some writers have equated caring about human intelligence with being neat, but this is probably because those writers hold the rational view of human intelligence described earlier. The famous neat John McCarthy has said publicly that he has no interest in how human intelligence works, while the famous scruffy Rodney Brooks is openly obsessed with creating humanoid intelligence (Brooks 2001).
To a neat, scruffy methods appear promiscuous, successful only by accident and unlikely to produce insights about how intelligence actually works. To a scruffy, neat methods appear to be hung up on formalism and to be too slow, fragile or boring to be applied to real systems.
As is often the case with such wars, much success in AI has come from combining neat and scruffy approaches. For example, many cognitive models matching human psychological data have been built in Soar (Newell 1990) and ACT-R (Anderson 2005). Both of these systems have formal representations and execution systems, but the rules put into the systems to create the models are generated ad hoc.
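The flavour of that combination can be suggested with a minimal sketch of a production system, the kind of formal match-and-fire engine that underlies architectures such as Soar and ACT-R. The Python below is purely illustrative and assumes nothing about either system's actual syntax or semantics: the engine is the fixed, "neat" part, while the individual rules handed to it are ad hoc.

```python
# Illustrative sketch only: a minimal production-rule engine in the spirit of
# Soar/ACT-R-style systems (not their actual syntax or semantics).
# The engine is a fixed, well-defined match-fire loop; the rules are
# ad hoc condition/action pairs written for a toy task.

def run(working_memory, rules, max_cycles=10):
    """Repeatedly fire the first rule whose condition matches working memory."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition(working_memory):
                action(working_memory)
                break          # fire one rule per cycle
        else:
            break              # no rule matched: quiescence, stop
    return working_memory

# Ad hoc rules: notice an unmet goal, act on it, then report completion.
rules = [
    (lambda wm: "goal" in wm and wm["goal"] not in wm.get("done", []),
     lambda wm: wm.setdefault("done", []).append(wm["goal"])),
    (lambda wm: "done" in wm and "report" not in wm,
     lambda wm: wm.update(report=f"finished {wm['done']}")),
]

print(run({"goal": "greet-user"}, rules))
# {'goal': 'greet-user', 'done': ['greet-user'], 'report': "finished ['greet-user']"}
```

Here the run loop is the part that could be analysed formally (one rule fires per cycle until quiescence), while the rules themselves are whatever condition-action hacks make the model reproduce the desired behaviour.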
References
- John R. Anderson (2005), "Human symbol manipulation within an integrated cognitive architecture", Cognitive Science, 29(3), 313–341.
- Rodney A. Brooks (2001), "The Relationship Between Matter and Life", Nature, 409 (18 January 2001), pp. 409–411.
- Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group (1999), Simple Heuristics That Make Us Smart, Oxford University Press.
- Allen Newell (1990), Unified Theories of Cognition, Harvard University Press, Cambridge, Massachusetts.
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.