Neats and scruffies
In [[artificial intelligence]], the labels '''neats''' and '''scruffies''' refer to the opposing camps in one of the continuing [[holy war]]s in artificial intelligence research.
As might be guessed from the terms, ''neats'' exclusively use [[formal methods]] such as [[logic]] or pure applied [[statistics]]. ''Scruffies'' are [[hackers]]: they will cobble together a system built of anything (even [[logic]]). Neats care whether their reasoning is [[provably]] [[sound]] and [[complete]], and whether their [[machine learning]] systems can be shown to [[converge]] in a known length of time. Scruffies would like their learning to converge too, but they are happier if [[empirical]] experience shows their systems working than if equations merely show that they ought to.
This conflict goes much deeper than [[programming]] practices (though it clearly has parallels in [[software engineering]]). For [[philosophical]] or possibly [[scientific]] reasons, some people believe that [[human]] [[intelligence]] is fundamentally [[rational]], and can best be represented by [[logical]] systems incorporating [[truth maintenance]]. Others believe that human intelligence consists of a mass of [[learned]] or [[evolved]] [[hacks]], not necessarily having [[internal consistency]] or any unifying organizational framework. Ironically, this apparently scruffy philosophy may also turn out to be provably [[optimal]], because intelligence is a form of [[search]], and search in general cannot be solved in a reasonable amount of time (see also [[NP]] and ''Simple Heuristics That Make Us Smart'' (Gigerenzer & Todd, 1999)).
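The intractability point can be made concrete with a toy experiment. The sketch below is purely illustrative and not from this article; the travelling-salesman instance, the function names, and the problem size are assumptions chosen for brevity. It contrasts a "neat" exhaustive search, which is provably optimal but takes factorial time, with a "scruffy" greedy heuristic, which carries no optimality guarantee but runs in polynomial time.

<syntaxhighlight lang="python">
# Illustrative sketch (not from the article): a provably optimal but factorial-time
# exhaustive search versus a fast greedy heuristic on a toy travelling-salesman instance.
import itertools
import math
import random

def tour_length(points, order):
    """Length of the closed tour visiting the points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exhaustive_tour(points):
    """'Neat': examine every permutation; guaranteed optimal, O(n!) time."""
    return min(itertools.permutations(range(len(points))),
               key=lambda order: tour_length(points, order))

def greedy_tour(points):
    """'Scruffy': always hop to the nearest unvisited point; fast, no optimality proof."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        nearest = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

if __name__ == "__main__":
    random.seed(0)
    points = [(random.random(), random.random()) for _ in range(8)]
    print("exhaustive:", round(tour_length(points, exhaustive_tour(points)), 3))
    print("greedy:    ", round(tour_length(points, greedy_tour(points)), 3))
</syntaxhighlight>

The greedy tour finishes almost instantly even as the exhaustive search grows factorially with the number of points; whether its answer is good enough is exactly the sort of question a scruffy settles empirically rather than by proof.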
To further tangle the issue, some AI practitioners don't care about human intelligence at all. Some writers have equated caring about human intelligence with being neat, but this is probably because they subscribe to the rational view of human intelligence described earlier. In fact, the famous neat [[John McCarthy]] has publicly said he has no interest in how human intelligence works, while the famous scruffy [[Rodney Brooks]] is openly obsessed with creating [[humanoid intelligence]].
This conflict tangles together two separate issues. One is the relationship between human reasoning and AI; "neats" tend to try to build systems that "reason" in some way identifiably similar to the way humans report themselves as doing, while "scruffies" profess not to care whether an [[algorithm]] resembles human [[reasoning]] in the least, as long as it works.
More importantly, neats tend to believe that [[logic]] is king, while scruffies favour looser, more ad hoc methods driven by [[empirical]] knowledge. To a neat, scruffy methods appear promiscuous, successful only by accident and not productive of insights about how intelligence actually works; to a scruffy, neat methods appear to be hung up on [[formalism]] and irrelevant to the hard-to-capture "common sense" of living intelligences.
== References ==
* Gerd Gigerenzer, Peter M. Todd, and the ABC Research Group (1999), ''Simple Heuristics That Make Us Smart'', Oxford University Press.
----
This article is based on material taken from the Free On-line Dictionary of Computing prior to 1 November 2008 and incorporated under the "relicensing" terms of the GFDL, version 1.3 or later.