Symbol grounding problem

{{Use dmy dates|date=February 2020}}
{{COI|date=September 2014}}

The '''symbol grounding problem''' is a concept in the fields of [[artificial intelligence]], [[cognitive science]], [[philosophy of mind]], and [[semantics]]. It addresses the challenge of connecting symbols, such as words or abstract representations, to the real-world objects or concepts they refer to. In essence, it is about how symbols acquire meaning in a way that is tied to the physical world. It is concerned with how it is that [[word]]s ([[symbol]]s in general) get their [[Meaning (psychology)|meanings]],<ref>Vogt, Paul. "[http://ilk.uvt.nl/~pvogt/publications/acsChapter.pdf Language evolution and robotics: issues on symbol grounding and language acquisition]." Artificial cognition systems. IGI Global, 2007. 176–209.</ref> and hence is closely related to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of how it is that [[mental state]]s are meaningful, and hence to the [[Hard problem of consciousness|problem of consciousness]]: what is the connection between certain physical systems and the contents of subjective experiences.
==Definitions==

The symbol grounding problem is the problem of how "...symbol meaning..." is "...to be grounded in something other than just more meaningless symbols".{{sfn|Harnad|1990}}
=== The symbol grounding problem ===

In his 1990 paper, [[Stevan Harnad]] implicitly gives several further definitions of the symbol grounding problem:{{sfn|Harnad|1990}}
# The symbol grounding problem is the problem of how to make the "...semantic interpretation of a formal symbol system..." "... intrinsic to the system, rather than just parasitic on the meanings in our heads..." "...in anything but other meaningless symbols..."
# The symbol grounding problem is the problem of how "...the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes..." can be grounded "...in anything but other meaningless symbols."
# "...the symbol grounding problem is referred to as the problem of intrinsic meaning (or 'intentionality') in [[John Searle|Searle]]'s (1980) celebrated '[[Chinese Room Argument]]'"
# The symbol grounding problem is the problem of how you can "...ever get off the symbol/symbol merry-go-round..."
To answer the question of whether or not groundedness is a necessary condition for meaning, a formulation of the symbol grounding problem is required: The symbol grounding problem is the problem of how to make the "...semantic interpretation of a formal symbol system..." "... intrinsic to the system, rather than just parasitic on the meanings in our heads..." "...in anything but other meaningless symbols".{{sfn|Harnad|1990}}

=== Symbol system ===
In the same 1990 paper, Harnad defines a "symbol system", relative to the symbol grounding problem, as "...a set of arbitrary 'physical tokens' scratches on paper, holes on a tape, events in a digital computer, etc. that are ... manipulated on the basis of 'explicit rules' that are ... likewise physical tokens and strings of tokens."{{sfn|Harnad|1990}}
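The purely formal character of such a system can be illustrated with a small sketch (an illustration of the general idea, not an example taken from Harnad's paper): arbitrary tokens are manipulated by explicit rewrite rules that refer only to the tokens' shapes, so any interpretation of the tokens is supplied by an outside reader rather than by the system itself.

<syntaxhighlight lang="python">
# Minimal sketch of a formal symbol system in Harnad's sense (illustrative only):
# arbitrary tokens are rewritten purely on the basis of their shapes.
# The rules never mention what any token might stand for.
rules = {
    ("A", "B"): ["C"],       # "whenever the shapes A B occur together, replace them with C"
    ("C", "A"): ["B", "B"],  # shape-based, like a rule in a formal calculus
}

def rewrite(tokens):
    """Apply the first matching shape-based rule once, scanning left to right."""
    for i in range(len(tokens) - 1):
        pair = (tokens[i], tokens[i + 1])
        if pair in rules:
            return tokens[:i] + rules[pair] + tokens[i + 2:]
    return tokens  # no rule applies: the token string is left as it is

string = ["A", "B", "A"]
for _ in range(5):
    string = rewrite(string)
print(string)  # systematically interpretable by us, but meaningless to the system
</syntaxhighlight>

We may read the tokens as words or numerals, but nothing inside the system connects them to anything other than more tokens.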
=== Formality of symbols ===

Because Harnad holds that the symbol grounding problem is exemplified in John R. Searle's Chinese Room argument,{{sfn|Harnad|2001a}} the sense of "formal" as applied to the symbols of a formal symbol system can be taken from Searle's 1980 article "Minds, brains, and programs", in which the Chinese Room argument is described:
{{blockquote|[...] all that 'formal' means here is that I can identify the symbols entirely by their shapes.{{sfn|Searle|1980}} }}
==Assumed meaninglessness of symbols==

===Relative to the Chinese Room argument===

In his 1990 paper, Harnad assumes that symbols are intrinsically meaningless. This appears to contrast with a remark in the same paper about "...the fact that our own symbols do have intrinsic meaning," which is better read conditionally, as "if it is the fact that...", since Harnad offers no criterion by which the intrinsic meaning of any symbol could be known; if such a criterion existed, the symbol grounding problem would already be solved for at least some symbols, which it has not been. Furthermore, according to Harnad's 1990 paper, the symbol grounding problem is exemplified in Searle's Chinese Room argument:

* In the Chinese Room argument, the computer that takes Chinese-language input and produces Chinese-language responses is postulated to operate without any reference point, other than the language-processing program itself, by which to "understand" the Chinese being input or output; the symbols it manipulates are therefore ''meaningless'' to it. The same is postulated for any human who takes the place of the language-interpreting computer.
===Relative to the problem of the criterion===

According to the January 2021 article "[https://www.lesswrong.com/posts/Xs7ag4gsiA6zspmsD/the-problem-of-the-criterion The Problem of the Criterion]" by Gordon Seidoh Worley on LessWrong, the symbol grounding problem is "...essentially the same problem, generalized" as the problem of the criterion, and is related to metaphysical grounding.

Even if the interpretive language program of the computer in the Chinese Room argument could be inferred as a function or equation (perhaps by using [[Solomonoff's_theory_of_inductive_inference|Solomonoff induction]] and the scientific method as a metaphorical [[Rosetta stone]]), in an effort to develop a criterion for accurately interpreting Chinese-language input into one or more languages of interest, that criterion would itself need a foundation (perhaps a [[human-in-the-loop]]) for its interpretation to count as meaningful, unless the criterion could somehow exist independently of a grounding criterion for itself. This regress, which is presumed to be infinite, is related to the problem of the criterion.

As may be interpreted, if there is no criterion by which to know the intrinsic meaning of a symbol, then no one can know the intrinsic meaning of a symbol. Presumably, if intrinsic meaning existed as an attribute of a symbol, a criterion for knowing that meaning could be derived from the intrinsic meaning of the symbol itself, and that criterion could then be generalized to deduce the meaning of other, similar symbols. To date, however, no criterion for deducing the intrinsic meaning of a symbol has been discovered.

Furthermore, no resolution of the problem of the criterion is known to exist or to have existed. This suggests that it may not be possible to deduce the intrinsic meaning of any symbol, and that this may be because there is no intrinsic meaning to be had in the first place: no inherent grounding of the symbol may exist that could be deduced and then used as a criterion for deducing the symbol's intrinsic meaning.

Without a criterion to act as an accurate reference point, or "grounding", for the meaningful interpretation of symbols, symbols are assumed to be intrinsically meaningless. Moreover, the existence of a criterion for deriving the intrinsic meaning of a symbol is presumed to depend on the symbol having intrinsic meaning in the first place. Any assumed resolution of the problem of the criterion would, in effect, be grounded in "meaningless symbols", because its foundation would be without known justification, much as methodist and particularist responses to the problem of the criterion can be read as question-begging.
==Background==

===Referents===

A referent is the ''thing'' that a word or phrase refers to as distinguished from the word's meaning.{{sfn|Frege|1952}} This is most clearly illustrated using the proper names of concrete individuals, but it is also true of names of kinds of things and of abstract properties: (1) "Tony Blair", (2) "the prime minister of the UK during the year 2004", and (3) "Cherie Blair's husband" all have the same referent, but not the same meaning.

Some have suggested that the meaning of a (referring) word is the rule or features that one must use in order to successfully pick out its referent. In that respect, (2) and (3) come closer to wearing their meanings on their sleeves, because they explicitly state a rule for picking out their referents: "Find whoever was prime minister of the UK during the year 2004", or "Find whoever is Cherie's current husband". But that does not settle the matter, because there is still the problem of the meaning of the components of that rule ("prime minister", "UK", "during", "current", "Cherie", "husband"), and how to pick ''them'' out.

The phrase "Tony Blair" (or better still, just "Tony") does not have this recursive component problem, because it points straight to its referent, but how? If the meaning is the rule for picking out the referent, what is that rule, when we come down to non-decomposable components like proper names of individuals (or names of ''kinds'', as in "an unmarried man" is a "bachelor")?
===Referential process===

In the 19th century, philosopher [[Charles Sanders Peirce]] suggested what some{{who|date=September 2023}} think is a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, (3) an object, and is (4) the virtual product of an endless regress and progress called [[semiosis]].<ref>Peirce, Charles S. The philosophy of Peirce: selected writings. New York: AMS Press, 1978.</ref> Some {{who|date=September 2023}} have interpreted Peirce as addressing the problem of grounding, feelings, and [[intentionality]] for the understanding of semiotic processes.<ref>Semeiosis and Intentionality T. L. Short Transactions of the Charles S. Peirce Society Vol. 17, No. 3 (Summer, 1981), pp. 197–223</ref> In recent years, Peirce's theory of signs has been rediscovered by an increasing number of artificial intelligence researchers in the context of the symbol grounding problem.<ref>[http://ndl.ethernet.edu.et/bitstream/123456789/76632/1/pdf.10#page=270 C.S. Peirce and artificial intelligence: historical heritage and (new) theoretical stakes]; Pierre Steiner; SAPERE – Special Issue on Philosophy and Theory of AI 5:265–276 (2013)</ref>

Humans are able to infer the intended referents of words, such as "Tony Blair" or "bachelor", but this process need not be explicit. It is probably unreasonable to expect that we know the explicit [[Philosophical Investigations#Rules and rule-following|rule]] for picking out the intended referents.{{why|date=October 2016}}

So if we take a word's meaning to be the means of picking out its referent, then meanings are in our brains. That is meaning in the ''narrow'' sense. If we use "meaning" in a ''wider'' sense, then we may want to say that meanings include both the referents themselves and the means of picking them out. So if a word (say, "Tony-Blair") is located inside an entity (e.g., oneself) that can use the word and pick out its referent, then the word's wide meaning consists of both the means that that entity uses to pick out its referent, and the referent itself: a wide causal nexus between (1) a head, (2) a word inside it, (3) an object outside it, and (4) whatever "processing" is required in order to successfully connect the inner word to the outer object.

But what if the "entity" in which a word is located is not a head but a piece of paper (or a computer screen)? What is its meaning then? Surely all the (referring) words on this screen, for example, have meanings, just as they have referents.
=== Grounding process ===

{{expand section|date=October 2016}}

There would be no connection at all between written symbols and any intended referents if there were no minds mediating those intentions, via their own internal means of picking out those intended referents. So the meaning of a word on a page is "ungrounded." Nor would looking it up in a dictionary help: If one tried to look up the meaning of a word one did not understand in a [[dictionary]] of a language one did not already understand, one would just cycle endlessly from one meaningless definition to another. One's search for meaning would be ungrounded. In contrast, the meaning of the words in one's head—those words one ''does'' understand—are "grounded".{{citation needed|date=October 2016}} That mental grounding of the meanings of words mediates between the words on any external page one reads (and understands) and the external objects to which those words refer.<ref>This is the [[causal theory of reference|causal]], [[contextual theory of reference]] that [[C. K. Ogden|Ogden]] & [[I. A. Richards|Richards]] packed in ''[[The Meaning of Meaning]]'' (1923).</ref><ref>Cf. [[semantic externalism]] as claimed in "The Meaning of 'Meaning'" of ''Mind, Language and Reality'' (1975) by [[Hilary Putnam|Putnam]] who argues: "Meanings just ain't in the head." Now he and [[Michael Dummett|Dummett]] seem to favor [[anti-realism]] in favor of [[intuitionism]], [[psychologism]], [[Constructivist epistemology|constructivism]] and [[contextualism]].</ref>
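The dictionary regress can be pictured with a toy sketch (the miniature "dictionary" and its words are invented for illustration): every definition consists only of further words of the same unknown language, so following definitions never leads outside the symbol system.

<syntaxhighlight lang="python">
# A toy dictionary of an unknown language: every word is defined
# only in terms of other words of the same language (invented data).
dictionary = {
    "glork": ["fnarp", "blint"],
    "fnarp": ["blint", "glork"],
    "blint": ["glork", "fnarp"],
}

def look_up(word, steps=10):
    """Follow definitions from word to word; without grounding, the search never bottoms out."""
    visited = []
    current = word
    for _ in range(steps):
        visited.append(current)
        current = dictionary[current][0]  # chase the first word of each definition
    return visited

print(look_up("glork"))  # ['glork', 'fnarp', 'blint', 'glork', ...]: an endless symbol/symbol cycle
</syntaxhighlight>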
===Requirements for symbol grounding===

Another symbol system is [[natural language]].{{sfn|Fodor|1975}} On paper or in a computer, language, too, is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. But in the brain, meaningless strings of squiggles become meaningful thoughts. [[Stevan Harnad|Harnad]] has suggested two properties that might be required to make this difference:{{Citation needed|date=June 2021}}

#Capacity to pick referents
#Consciousness
====Capacity to pick out referents====

{{multiple issues|section=y|
{{essay-like|section|date=September 2023}}
{{More footnotes needed|section|date=March 2013}}
}}
One property that static paper or, usually, even a dynamic computer lack that the brain possesses is the capacity to pick out symbols' referents. This is what we were discussing earlier, and it is what the hitherto undefined term "grounding" refers to. A symbol system alone, whether static or dynamic, cannot have this capacity (any more than a book can), because picking out referents is not just a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property.

To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities—the capacity to interact autonomously with that world of objects, events, actions, properties and states that their symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.
The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. Just the symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains when we think meaningful thoughts.{{sfn|Cangelosi|Harnad|2001}}
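One schematic way to picture the contrast (a sketch of the general idea, not an implementation of any particular proposal; the category, features and threshold are invented) is a symbol that points only at other symbols versus a symbol that is also tied to a detector operating on the system's own sensory input:

<syntaxhighlight lang="python">
# Sketch: an ungrounded symbol links only to other symbols, while a grounded
# symbol is additionally tied to a procedure operating on the system's own
# (here simulated) sensory input. Category, features and threshold are illustrative.

# Purely symbolic links: one arbitrary token pointing at other tokens.
symbolic_links = {"zebra": ["horse", "stripes"]}

# Grounding: the same category name tied to a detector over sensory features
# (the feature dictionary stands in for real sensor readings).
def zebra_detector(features):
    return features["legs"] == 4 and features["contrast_bands"] > 5

sensed = {"legs": 4, "contrast_bands": 9}

print(symbolic_links["zebra"])  # ['horse', 'stripes']: only more symbols, nothing here touches the world
print(zebra_detector(sensed))   # True: the symbol "zebra" is connected to its referent via sensor data
</syntaxhighlight>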
Meaning as the ability to recognize instances (of objects) or perform actions is specifically treated in the paradigm called "Procedural Semantics", described in a number of papers including "Procedural Semantics" by [[Philip N. Johnson-Laird]]<ref>Philip N. Johnson-Laird "Procedural Semantics" (Cognition, 5 (1977) 189; see http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/procedural.pdf)</ref> and expanded by [[William Aaron Woods|William A. Woods]] in "Meaning and Links".<ref>William A. Woods. "Meaning and Links" (AI Magazine Volume 28 Number 4 (2007); see http://www.aaai.org/ojs/index.php/aimagazine/article/view/2069/2056)</ref> A brief summary in Woods' paper reads: "The idea of procedural semantics is that the semantics of natural language sentences can be characterized in a formalism whose meanings are defined by abstract procedures that a computer (or a person) can either execute or reason about. In this theory the meaning of a noun is a procedure for recognizing or generating instances, the meaning of a proposition is a procedure for determining if it is true or false, and the meaning of an action is the ability to do the action or to tell if it has been done."
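Woods' summary can be illustrated in miniature (the domain and procedures below are invented for illustration and are not taken from the cited papers): a noun is represented by a procedure that recognizes instances, a proposition by a procedure that returns true or false, and an action by a procedure that performs it and can report that it has been done.

<syntaxhighlight lang="python">
# Procedural-semantics sketch (illustrative domain, not from the cited papers).

# The "meaning" of a noun: a procedure for recognizing instances.
def is_even_number(x):
    return isinstance(x, int) and x % 2 == 0

# The "meaning" of a proposition: a procedure that determines whether it is true or false.
def every_item_is_even(items):
    return all(is_even_number(i) for i in items)

# The "meaning" of an action: the ability to do it, and to tell that it has been done.
def remove_odd_numbers(items):
    items[:] = [i for i in items if is_even_number(i)]
    return every_item_is_even(items)  # report whether the action has been carried out

data = [2, 3, 4, 7, 8]
print(every_item_is_even(data))   # False
print(remove_odd_numbers(data))   # True: the action was performed and can be verified
print(data)                       # [2, 4, 8]
</syntaxhighlight>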
====Consciousness====

{{Unbalanced section|date=December 2010}}
The necessity of groundedness, in other words, takes us from the level of the pen-pal [[Turing test]], which is purely symbolic (computational), to the robotic Turing test, which is hybrid symbolic/sensorimotor.{{sfn|Harnad|2000}}{{sfn|Harnad|2007}} Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to (see entries for [[Affordance]] and for [[Categorical perception]]). On the other hand, if the symbols (words and sentences) refer to the very bits of '0' and '1', directly connected to their electronic implementations, which a (any?) computer system can readily manipulate (thus detect, categorize, identify and act upon), then even non-robotic computer systems could be said to be "sensorimotor" and hence able to "ground" symbols in this narrow domain.

To categorize is to do the right thing with the right ''kind'' of thing. The categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the nonmembers. These feature-detectors must either be inborn or learned. The learning can be based on trial and error induction, guided by feedback from the consequences of correct and incorrect categorization; or, in our own linguistic species, the learning can also be based on verbal descriptions or definitions. The description or definition of a new category, however, can only convey the category and ground its name if the words in the definition are themselves already grounded category names.{{sfn|Blondin-Massé|2008}} According to Harnad, ultimately grounding has to be sensorimotor, to avoid infinite regress.{{sfn|Harnad|2005}}
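A minimal sketch of these two routes to grounding a category name, trial-and-error learning from corrective feedback and a verbal definition composed of already grounded names, might look as follows (the features, data and the simple error-correcting rule are illustrative assumptions, not a model taken from the cited work):

<syntaxhighlight lang="python">
# Sketch of the two routes to grounding a category name (illustrative only).

# Route 1: learn a "horse" feature detector by trial and error,
# guided by corrective feedback on labelled examples.
samples = [
    ({"four_legs": 1, "mane": 1, "stripes": 0}, True),
    ({"four_legs": 0, "mane": 0, "stripes": 0}, False),
    ({"four_legs": 1, "mane": 0, "stripes": 0}, False),
    ({"four_legs": 0, "mane": 1, "stripes": 0}, False),
]
weights = {"four_legs": 0.0, "mane": 0.0, "stripes": 0.0}
bias = 0.0

def horse(features):
    return sum(weights[f] * v for f, v in features.items()) + bias > 0

for _ in range(20):                                   # repeated trials
    for features, label in samples:
        error = int(label) - int(horse(features))     # feedback from the consequences
        for f, v in features.items():
            weights[f] += 0.1 * error * v             # simple error-correcting update
        bias += 0.1 * error

# Route 2: ground a new name by a verbal definition whose words are themselves
# already grounded ("horse" by the learned detector, "stripes" by a given one).
def stripes(features):
    return features["stripes"] == 1

def zebra(features):
    return horse(features) and stripes(features)      # "a zebra is a striped, horse-shaped animal"

print(horse({"four_legs": 1, "mane": 1, "stripes": 0}))   # True: learned from feedback
print(zebra({"four_legs": 1, "mane": 1, "stripes": 1}))   # True: grounded via already grounded words
</syntaxhighlight>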
Harnad thus points at consciousness as a second property. The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science. But the problem of explaining how consciousness could play an "independent" role in doing so is probably insoluble, except on pain of [[Psychokinesis|telekinetic]] [[Dualism (philosophy of mind)|dualism]]. Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present, but then again, perhaps not. In either case, there is no way we can hope to be any the wiser—and that is Turing's methodological point.{{sfn|Harnad|2001b}}{{sfn|Harnad|2003}}

But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a [[Philosophical zombie|p-zombie]], with no one home, feeling feelings, meaning meanings (Harnad 1995). However, it is possible that different interpreters (including different intelligent species of animals) would have different mechanisms for producing meaning in their systems, thus one cannot require that a system different from a human "experiences" meaning in the same way that a human does, and vice versa.
==Functionalism==

There is a school of thought according to which the computer is more like the brain—or rather, the brain is more like the computer. According to this view (called "[[Computational theory of mind|computationalism]]", a variety of [[Functionalism (philosophy of mind)|functionalism]]), the future theory explaining how the brain picks out its referents (the theory that cognitive neuroscience may eventually arrive at) will be a purely computational one ([[Zenon Pylyshyn|Pylyshyn]] 1984). A computational theory is a theory at the software level. It is essentially a computer algorithm: a set of rules for manipulating symbols. And the algorithm is "implementation-independent." That means that whatever it is that an algorithm is doing, it will do the same thing no matter what hardware it is executed on. The physical details of the [[dynamical system]] implementing the computation are irrelevant to the computation itself, which is purely formal; any hardware that can run the computation will do, and all physical implementations of that particular computer algorithm are equivalent, computationally.

A computer can execute any computation. Hence once computationalism finds a proper computer algorithm, one that our brain could be running when there is meaning transpiring in our heads, meaning will be transpiring in that computer too, when it implements that algorithm.

How would we know that we have a proper computer algorithm? It would have to be able to pass the [[Turing test]]. That means it would have to be capable of corresponding with any human being as a pen-pal, for a lifetime, without ever being in any way distinguishable from a real human pen-pal.
==Searle's Chinese room argument==

{{main|Chinese room}}
{{More citations needed|date=September 2019}}

[[John Searle]] formulated the "[[Chinese room]] argument" in order to disprove computationalism.{{citation needed|date=October 2016}} The Chinese room argument is based on a thought experiment: in it, Searle stated that if the Turing test were conducted in Chinese, then he himself, Searle (who does not understand Chinese), could execute a program that implements the same algorithm that the computer was using without knowing what any of the words he was manipulating meant.

At first glance, it would seem that if there's no meaning going on inside Searle's head when he is implementing that program, then there's no meaning going on inside the computer when it is the one implementing the algorithm either, computation being implementation-independent. But on a closer look, for a person to execute the same program that a computer would, at the very least it would have to have access to a memory bank similar to the one the computer has (most likely externally stored). This means that the new computational system that executes the same algorithm is no longer just Searle's original head, but that plus the memory bank (and possibly other devices).

In particular, this additional memory could store a digital representation of the intended referent of different words (like images, sounds, even video sequences), that the algorithm would use as a model of, and to derive features associated with, the intended referent. The "meaning" is then not to be sought just in Searle's original brain, but in the overall system needed to process the algorithm. Just like when Searle is reading English words, the meaning is not to be located in isolated logical processing areas of the brain, but probably in the overall brain, likely including specific long-term memory areas. Thus, Searle's not perceiving any meaning in his head alone when simulating the work of a computer does not imply a lack of meaning in the overall system, and thus in the actual computer system passing an advanced Turing test.
=== Implications ===

How does Searle know that there is no meaning going on in his head when he is executing such a Turing-test-passing program? Exactly the same way he knows whether there is or is not meaning going on inside his head under any other conditions: He ''understands'' the words of English, whereas the Chinese symbols that he is manipulating according to the algorithm's rules mean nothing whatsoever to him (and there is no one else in his head for them to mean anything to). However, the complete system that is manipulating those Chinese symbols – which is not just Searle's brain, as explained in the previous section – may have the ability to extract meaning from those symbols, in the sense of being able to use internal (memory) models of the intended referents, pick out the intended referents of those symbols, and generally identify and use their features appropriately.

Note that in pointing out that the Chinese words would be meaningless to him under those conditions, Searle has appealed to consciousness. Otherwise one could argue that there ''would'' be meaning going on in Searle's head under those conditions, but that Searle himself would simply not be conscious of it. That is called the [http://plato.stanford.edu/entries/chinese-room/#4.1 "Systems Reply"] to Searle's Chinese Room Argument, and Searle [http://cogprints.org/4023/ rejects] the Systems Reply as being merely a reiteration, in the face of negative evidence, of the very thesis (computationalism) that is on trial in his thought-experiment: "Are words in a running computation like the ungrounded words on a page, meaningless without the mediation of brains, or are they like the grounded words in brains?"

In this either/or question, the (still undefined) word "ungrounded" has implicitly relied on the difference between inert words on a page and consciously meaningful words in our heads. And Searle is asserting that under these conditions (the Chinese Turing test), the words in his head would not be consciously meaningful, hence they would still be as ungrounded as the inert words on a page.

So if Searle is right, that (1) both the words on a page and those in any running computer program (including a Turing-test-passing computer program) are meaningless in and of themselves, and hence that (2) whatever it is that the brain is doing to generate meaning can't be just implementation-independent computation, then what ''is'' the brain doing to generate meaning [http://cogprints.org/4023/ (Harnad 2001a)]?
==Brentano's notion of intentionality==

{{original research}}
{{More citations needed|date=September 2019}}

"[[Intentionality]]" has been called the "mark of the mental" because of some observations by the philosopher [[Franz Brentano|Brentano]] to the effect that mental states always have an inherent, intended (mental) object or content toward which they are "directed": One sees something, wants something, believes something, desires something, understands something, means something etc., and that object is always something that one has ''in mind''. Having a mental object is part of having anything in mind. Hence it is the mark of the mental. There are no "free-floating" mental states that do not also have a mental object. Even hallucinations and imaginings have an object, and even feeling depressed feels like something. Nor is the object the "external" physical object, when there is one. One may see a real chair, but the "intentional" object of one's "intentional state" is the mental chair one has in mind. (Yet another term for intentionality has been "aboutness" or "representationality": thoughts are always ''about'' something; they are (mental) "representations" ''of'' something; but that something is what it is that the thinker has in mind, not whatever external object may or may not correspond to it.)

If this all sounds like skating over the surface of a problem rather than a real breakthrough, then the foregoing description has had its intended effect: the problem of intentionality is not the symbol grounding problem, nor is grounding symbols the solution to the problem of intentionality. The symbols inside an autonomous dynamical symbol system that is able to pass the robotic Turing test are grounded, in that, unlike in the case of an ungrounded symbol system, they do not depend on the mediation of the mind of an external interpreter to connect them to the external objects that they are interpretable (by the interpreter) as being "about"; the connection is autonomous, direct, and unmediated. But ''grounding is not meaning''. Grounding is an input/output performance function. Grounding connects the sensory inputs from external objects to internal symbols and states occurring within an autonomous sensorimotor system, guiding the system's resulting processing and output.

Meaning, in contrast, is something mental. But to try to put a halt to the name-game of proliferating nonexplanatory synonyms for the mind/body problem without solving it (or, worse, implying that there is more than one mind/body problem), let us cite just one more thing that requires no further explication: ''feeling''. The only thing that distinguishes an internal state that merely has grounding from one that has meaning is that it ''feels like something'' to be in the meaning state, whereas it does not feel like anything to be in the merely grounded functional state. Grounding is a functional matter; feeling is a felt matter. And that is the real source of Brentano's vexed peekaboo relation between "intentionality" and its internal "intentional object". All mental states, in addition to being the functional states of an autonomous dynamical system, are also feeling states. Feelings are not merely "functed," as all other physical states are; feelings are also felt.

Hence feeling ([[sentience]]) is the real mark of the mental. But the symbol grounding problem is not the same as the mind/body problem, let alone a solution to it. The mind/body problem is actually the feeling/function problem, and symbol-grounding touches only its functional component. This does not detract from the importance of the symbol grounding problem, but just reflects that it is a keystone piece to the bigger puzzle called the mind.

The neuroscientist [[Antonio Damasio]] investigates this marking function of feelings and emotions in his [[somatic marker hypothesis]]. Damasio adds the notion of biologic [[homeostasis]] to this discussion, presenting it as an automated bodily regulation process providing intentionality to a mind via emotions. Homeostasis is the mechanism that keeps all bodily processes in healthy balance. All of our actions and perceptions will be automatically "evaluated" by our body hardware according to their contribution to homeostasis. This gives us an implicit orientation on how to survive. Such bodily or somatic evaluations can come to our mind in the form of conscious and non-conscious feelings ("gut feelings") and lead our decision-making process. The meaning of a word can be roughly conceptualized as the sum of its associations and their expected contribution to homeostasis, where associations are reconstructions of sensorimotor perceptions that appeared in contiguity with the word. Yet, the somatic marker hypothesis is still hotly debated and critics claim that it has failed to clearly demonstrate how these processes interact at a psychological and evolutionary level. The recurrent question that the somatic marker hypothesis does not address remains: how and why does homeostasis (as in any [[servomechanism]] such as a thermostat and furnace) become ''felt'' homeostasis?
==See also==
{{div col end}}

==References==
{{Reflist}}

===Works cited===
* {{cite book |editor1-first=Tony |editor1-last=Belpaeme |editor2-first=Stephen John |editor2-last=Cowley |editor3-first=Karl F. |editor3-last=MacDorman |title=Symbol Grounding |year=2009 |place=Netherlands |publisher=John Benjamins Publishing Company |isbn=978-9027222510}}
:'''Note:''' ''This article is based on an entry originally published in Nature/Macmillan Encyclopedia of Cognitive Science that has since been revised by the author and the Wikipedia community.''
* {{cite conference |last=Blondin-Massé |first=A. |display-authors=etal |arxiv=0806.3710 |title=How Is Meaning Grounded in Dictionary Definitions? |conference=TextGraphs-3 Workshop, 22nd International Conference on Computational Linguistics, Coling 2008 |place=Manchester |date=18–22 August 2008}}
* {{cite journal |last1=Cangelosi |first1=A. |last2=Harnad |first2=S. |year=2001 |url=http://cogprints.org/2036/ |title=The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories |journal=Evolution of Communication |volume=4 |number=1 |pages=117–142 |doi=10.1075/eoc.4.1.07can |hdl=10026.1/3619 |s2cid=15837328 |hdl-access=free}}
* {{cite book |author-link=Jerry Fodor |last=Fodor |first=J. A. |year=1975 |title=The Language of Thought |place=New York |publisher=Thomas Y. Crowell}}
* {{cite book |last=Frege |first=G. |year=1952 |orig-year=1892 |chapter=On sense and reference |editor1-first=P. |editor1-last=Geach |editor2-first=M. |editor2-last=Black |title=Translations of the Philosophical Writings of Gottlob Frege |place=Oxford |publisher=Blackwell}}
* {{cite journal |last=Harnad |first=S. |year=1990 |url=http://cogprints.org/3106/ |title=The Symbol Grounding Problem |journal=Physica D |volume=42 |issue=1–3 |pages=335–346 |doi=10.1016/0167-2789(90)90087-6 |arxiv=cs/9906002 |bibcode=1990PhyD...42..335H |s2cid=3204300}}
* {{cite journal |last=Harnad |first=S. |year=2000 |url=http://cogprints.org/2615/ |title=Minds, Machines and Turing: The Indistinguishability of Indistinguishables |journal=Journal of Logic, Language and Information |volume=9 |number=4 |pages=425–445 |doi=10.1023/A:1008315308862 |s2cid=1911720}} Special Issue on "Alan Turing and Artificial Intelligence"
* {{cite book |last=Harnad |first=S |year=2001a |chapter-url=http://cogprints.org/4023/ |chapter=Minds, Machines and Searle II: What's Wrong and Right About Searle's Chinese Room Argument? |editor1-first=M. |editor1-last=Bishop |editor2-first=J. |editor2-last=Preston |title=Essays on Searle's Chinese Room Argument |publisher=Oxford University Press}}
* {{cite journal |last=Harnad |first=S. |year=2001b |url=http://cogprints.org/1624/ |title=No Easy Way Out |journal=The Sciences |volume=41 |number=2 |pages=36–42 |doi=10.1002/j.2326-1951.2001.tb03561.x}}
* {{cite journal |last=Harnad |first=S. |year=2003 |url=http://eprints.ecs.soton.ac.uk/7718/ |title=Can a Machine Be Conscious? How? |journal=Journal of Consciousness Studies |volume=10 |number=4–5 |pages=69–75}}
* {{cite book |last=Harnad |first=S. |year=2005 |chapter-url=http://eprints.ecs.soton.ac.uk/11725/ |chapter=To Cognize is to Categorize: Cognition is categorization |editor1-last=Lefebvre |editor1-first=C. |editor2-last=Cohen |editor2-first=H. |title=Handbook of Categorization |publisher=Elsevier}}
* {{cite book |last=Harnad |first=S. |year=2007 |chapter-url=http://eprints.ecs.soton.ac.uk/7741/ |chapter=The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence |editor1-last=Epstein |editor1-first=Robert |editor2-last=Peters |editor2-first=Grace |title=The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer |publisher=Kluwer}}
* {{cite journal |last=Searle |first=John R. |year=1980 |url=http://www.class.uh.edu/phil/garson/MindsBrainsandPrograms.pdf |title=Minds, brains, and programs |journal=Behavioral and Brain Sciences |volume=3 |number=3 |pages=417–457 |doi=10.1017/S0140525X00005756 |s2cid=55303721 |archive-url=https://web.archive.org/web/20150923204614/http://www.class.uh.edu/phil/garson/MindsBrainsandPrograms.pdf |archive-date=23 September 2015}}
==Further reading==

* Blondin Masse, A, G. Chicoisne, Y. Gargouri, S. Harnad, O. Picard, O. Marcotte (2008) [https://arxiv.org/abs/0806.3710 How Is Meaning Grounded in Dictionary Definitions?] ''TextGraphs-3 Workshop, 22nd International Conference on Computational Linguistics, Coling 2008'', Manchester, 18–22 August 2008
* Cangelosi, A. & Harnad, S. (2001) [http://cogprints.org/2036/ The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories.] ''Evolution of Communication'' 4(1) 117–142.
* Cangelosi, A.; Greco, A.; Harnad, S. [http://cogprints.org/2132/ From robotic toil to symbolic theft: grounding transfer from entry-level to higher-level categories.] ''Connection Science'' 12(2) 143–62.
* [[Jerry Fodor|Fodor, J. A.]] (1975) ''The language of thought''. New York: Thomas Y. Crowell
* Frege, G. (1952/1892). On sense and reference. In P. Geach and M. Black, Eds., ''Translations of the Philosophical Writings of Gottlob Frege''. Oxford: Blackwell
* Harnad, S. (1990) [http://cogprints.org/3106/ The Symbol Grounding Problem.] ''Physica D'' 42: 335–346.
* Harnad, S. (1992) [http://eprints.ecs.soton.ac.uk/6464/ There Is Only One Mind/Body Problem]. Symposium on the Perception of Intentionality, XXV World Congress of Psychology, Brussels, Belgium, July 1992 ''International Journal of Psychology'' 27: 521
* Harnad, S. (1994) [http://cogprints.org/1592/ Computation Is Just Interpretable Symbol Manipulation: Cognition Isn't.] ''Minds and Machines'' 4:379–390 (Special Issue on "What Is Computation")
* Harnad, S. (1995) [http://eprints.ecs.soton.ac.uk/3347/ Why and How We Are Not Zombies.] ''Journal of Consciousness Studies'' 1: 164–167.
* Harnad, S. (2000) [http://cogprints.org/2615/ Minds, Machines and Turing: The Indistinguishability of Indistinguishables]. ''Journal of Logic, Language and Information'' 9(4): 425–445. (Special Issue on "Alan Turing and Artificial Intelligence")
* Harnad, S. (2001a) [http://cogprints.org/4023/ Minds, Machines and Searle II: What's Wrong and Right About Searle's Chinese Room Argument?] In: M. Bishop & J. Preston (eds.) ''Essays on Searle's Chinese Room Argument''. Oxford University Press.
* Harnad, S. (2001b) [http://cogprints.org/1624/ No Easy Way Out.] ''The Sciences'' 41(2) 36–42.
* Harnad, Stevan (2001a) [http://eprints.ecs.soton.ac.uk/5943/ Explaining the Mind: Problems, Problems]. ''The Sciences'' 41: 36–42.
* Harnad, Stevan (2001b) [http://cogprints.org/2130/ The Mind/Body Problem is the Feeling/Function Problem: Harnad on Dennett on Chalmers]. Technical Report. Department of Electronics and Computer Sciences. University of Southampton.
* Harnad, S. (2003) [http://eprints.ecs.soton.ac.uk/7718/ Can a Machine Be Conscious? How?.] ''Journal of Consciousness Studies'' 10(4–5): 69–75.
* Harnad, S. (2005) [http://eprints.ecs.soton.ac.uk/11725/ To Cognize is to Categorize: Cognition is categorization.] in Lefebvre, C. and Cohen, H., Eds. ''Handbook of Categorization''. Elsevier.
* Harnad, S. (2007) [http://eprints.ecs.soton.ac.uk/7741/ The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence.] In: Epstein, Robert & Peters, Grace (Eds.) ''The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer''. Kluwer
* Harnad, S. (2006) [http://eprints.ecs.soton.ac.uk/12092/ Cohabitation: Computation at 70 Cognition at 20.] In Dedrick, D., Eds. ''Essays in Honour of Zenon Pylyshyn''.
* [http://www.macdorman.com MacDorman, Karl F.] (1999). Grounding symbols through sensorimotor integration. Journal of the Robotics Society of Japan, 17(1), 20–24. [http://www.macdorman.com/kfm/writings/pubs/MacDorman1999GroundingSymbolsSMInteg.pdf Online version]
* [http://www.macdorman.com MacDorman, Karl F.] (2007). Life after the symbol system metaphor. Interaction Studies, 8(1), 143–158. [http://www.macdorman.com/kfm/writings/pubs/MacDorman2007SymbolSystemMetaphor.pdf Online version]
* {{cite book |last=Pylyshyn |first=Z. W. |year=1984 |title=Computation and Cognition |place=Cambridge MA |publisher=MIT/Bradford}}
* Searle, John. R. (1980) [https://web.archive.org/web/20150923204614/http://www.class.uh.edu/phil/garson/MindsBrainsandPrograms.pdf Minds, brains, and programs.] ''Behavioral and Brain Sciences'' 3(3): 417–457
* Taddeo, Mariarosaria & [[Luciano Floridi|Floridi, Luciano]] (2005). The symbol grounding problem: A critical review of fifteen years of research. ''[[Journal of Experimental and Theoretical Artificial Intelligence]], 17''(4), 419–445. [http://web.comlab.ox.ac.uk/oucl/research/areas/ieg/research_reports/ieg_rr210605.pdf#search=%22taddeo%20symbol%20grounding%20problem%2 Online version]
* Turing, A. M. (1950) [http://cogprints.ecs.soton.ac.uk/archive/00000499/ Computing Machinery and Intelligence.] ''Mind'' 49 433–460 [Reprinted in ''Minds and machines''. A. Anderson (ed.), Engelwood Cliffs NJ: Prentice Hall, 1964.]
{{DEFAULTSORT:Symbol Grounding}}

[[Category:Cognitive science]]
[[Category:Symbolism|Grounding problem]]
[[Category:Arguments in philosophy of mind]]
[[Category:Semantics]]
Latest revision as of 09:34, 21 August 2024
A major contributor to this article appears to have a close connection with its subject. (September 2014) |
The symbol grounding problem is a concept in the fields of artificial intelligence, cognitive science, philosophy of mind, and semantics. It addresses the challenge of connecting symbols, such as words or abstract representations, to the real-world objects or concepts they refer to. In essence, it is about how symbols acquire meaning in a way that is tied to the physical world. It is concerned with how it is that words (symbols in general) get their meanings,[1] and hence is closely related to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of how it is that mental states are meaningful, and hence to the problem of consciousness: what is the connection between certain physical systems and the contents of subjective experiences.
Definitions
[edit]The symbol grounding problem
[edit]According to his 1990 paper, Stevan Harnad implicitly expresses a few other definitions of the symbol grounding problem:[2]
- The symbol grounding problem is the problem of how to make the "...semantic interpretation of a formal symbol system..." "... intrinsic to the system, rather than just parasitic on the meanings in our heads..." "...in anything but other meaningless symbols..."
- The symbol grounding problem is the problem of how "...the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes..." can be grounded "...in anything but other meaningless symbols."
- "...the symbol grounding problem is referred to as the problem of intrinsic meaning (or 'intentionality') in Searle's (1980) celebrated 'Chinese Room Argument'"
- The symbol grounding problem is the problem of how you can "...ever get off the symbol/symbol merry-go-round..."
To answer the question of whether or not groundedness is a necessary condition for meaning, a formulation of the symbol grounding problem is required: The symbol grounding problem is the problem of how to make the "...semantic interpretation of a formal symbol system..." "... intrinsic to the system, rather than just parasitic on the meanings in our heads..." "...in anything but other meaningless symbols".[2]
Symbol system
[edit]According to his 1990 paper, Harnad lays out the definition of a "symbol system" relative to his defined symbol grounding problem. As defined by Harnad, a "symbol system" is "...a set of arbitrary 'physical tokens' scratches on paper, holes on a tape, events in a digital computer, etc. that are ... manipulated on the basis of 'explicit rules' that are ... likewise physical tokens and strings of tokens."[2]
Formality of symbols
[edit]As Harnad describes that the symbol grounding problem is exemplified in John R. Searle's Chinese Room argument,[3] the definition of "formal" in relation to formal symbols relative to a formal symbol system may be interpreted from John R. Searle's 1980 article "Minds, brains, and programs", whereby the Chinese Room argument is described in that article:
[...] all that 'formal' means here is that I can identify the symbols entirely by their shapes.[4]
Background
Referents
A referent is the thing that a word or phrase refers to as distinguished from the word's meaning.[5] This is most clearly illustrated using the proper names of concrete individuals, but it is also true of names of kinds of things and of abstract properties: (1) "Tony Blair", (2) "the prime minister of the UK during the year 2004", and (3) "Cherie Blair's husband" all have the same referent, but not the same meaning.
Referential process
In the 19th century, philosopher Charles Sanders Peirce suggested what some[who?] think is a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, (3) an object, and is (4) the virtual product of an endless regress and progress called semiosis.[6] Some[who?] have interpreted Peirce as addressing the problem of grounding, feelings, and intentionality for the understanding of semiotic processes.[7] In recent years, Peirce's theory of signs has been rediscovered by an increasing number of artificial intelligence researchers in the context of the symbol grounding problem.[8]
Grounding process
There would be no connection at all between written symbols and any intended referents if there were no minds mediating those intentions, via their own internal means of picking out those intended referents. So the meaning of a word on a page is "ungrounded." Nor would looking it up in a dictionary help: if one tried to look up the meaning of a word one did not understand in a dictionary of a language one did not already understand, one would just cycle endlessly from one meaningless definition to another. One's search for meaning would be ungrounded. In contrast, the meanings of the words in one's head (those words one does understand) are "grounded".[citation needed] That mental grounding of the meanings of words mediates between the words on any external page one reads (and understands) and the external objects to which those words refer.[9][10]
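The dictionary circularity can be pictured with a toy example. In the sketch below, the dictionary entries are invented for illustration and are not drawn from the cited sources; every word is defined only by other words in the same dictionary, so following definitions cycles forever and never reaches a referent.

```python
# A minimal sketch of the "symbol/symbol merry-go-round": a closed toy
# dictionary (invented for illustration) in which every definition consists
# only of other dictionary words, so lookup never reaches anything grounded.

TOY_DICTIONARY = {
    "zebra":   ["horse", "stripes"],
    "horse":   ["animal", "stripes"],
    "stripes": ["zebra", "animal"],
    "animal":  ["horse"],
}

def chase_definitions(word, steps=10):
    """Follow definitions word by word, returning the path taken."""
    path = []
    current = word
    for _ in range(steps):
        path.append(current)
        current = TOY_DICTIONARY[current][0]  # look up the first defining word
    return path

print(chase_definitions("zebra"))
# ['zebra', 'horse', 'animal', 'horse', 'animal', ...] -- a cycle, never a referent
```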
Requirements for symbol grounding
Another symbol system is natural language.[11] On paper or in a computer, language, too, is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. But in the brain, meaningless strings of squiggles become meaningful thoughts. Harnad has suggested two properties that might be required to make this difference:[citation needed]
- Capacity to pick out referents
- Consciousness
Capacity to pick out referents
One property that the brain possesses, and that static paper or usually even a dynamic computer lacks, is the capacity to pick out symbols' referents. This capacity, discussed above, is what the hitherto undefined term "grounding" refers to. A symbol system alone, whether static or dynamic, cannot have this capacity (any more than a book can), because picking out referents is not just a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property.
To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities: the capacity to interact autonomously with the world of objects, events, actions, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.
The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. Just the symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains when we think meaningful thoughts.[12]
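One way to picture such a hybrid symbolic/sensorimotor system is to link each symbol token to a nonsymbolic detector that operates on sensor readings, so that the system itself, rather than an external interpreter, picks out the referents. The sketch below is a hypothetical illustration under that assumption; the symbol names, feature names, and thresholds are invented and are not taken from the cited work.

```python
# A minimal sketch (hypothetical, not an implementation from the cited work)
# of a hybrid system: each symbol token is linked to a nonsymbolic detector
# over (simulated) sensor readings, so the system itself can pick out the
# referents of its symbols.

from dataclasses import dataclass
from typing import Callable, Dict, List

SensorReading = Dict[str, float]   # e.g. {"brightness": 0.9, "roundness": 0.8}

@dataclass
class GroundedSymbol:
    token: str                                 # the arbitrary symbol shape
    detector: Callable[[SensorReading], bool]  # nonsymbolic grounding

def pick_out_referents(symbol: GroundedSymbol,
                       scene: List[SensorReading]) -> List[SensorReading]:
    """Return the objects in the scene that the symbol's detector picks out."""
    return [obj for obj in scene if symbol.detector(obj)]

# Invented detectors and features, for illustration only.
sun  = GroundedSymbol("sun",  lambda r: r["brightness"] > 0.8 and r["roundness"] > 0.7)
rock = GroundedSymbol("rock", lambda r: r["brightness"] < 0.5)

scene = [
    {"brightness": 0.95, "roundness": 0.9},   # something sun-like
    {"brightness": 0.2,  "roundness": 0.3},   # something rock-like
]
print(len(pick_out_referents(sun, scene)), len(pick_out_referents(rock, scene)))
# 1 1 -- each token picks out its own referent without an external interpreter
```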
Meaning as the ability to recognize instances (of objects) or perform actions is specifically treated in the paradigm called "Procedural Semantics", described in a number of papers including "Procedural Semantics" by Philip N. Johnson-Laird[13] and expanded by William A. Woods in "Meaning and Links".[14] A brief summary in Woods' paper reads: "The idea of procedural semantics is that the semantics of natural language sentences can be characterized in a formalism whose meanings are defined by abstract procedures that a computer (or a person) can either execute or reason about. In this theory the meaning of a noun is a procedure for recognizing or generating instances, the meaning of a proposition is a procedure for determining if it is true or false, and the meaning of an action is the ability to do the action or to tell if it has been done."
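In the procedural-semantics view summarized above, meanings are executable procedures rather than further symbols. The following sketch is a hypothetical illustration, not code from Johnson-Laird or Woods; it renders the three cases in Woods' summary as small procedures: recognizing instances of a noun, evaluating a proposition, and performing and checking an action.

```python
# A minimal sketch of the procedural-semantics idea (hypothetical example):
# meanings are represented as executable procedures rather than as symbols.

# The "meaning" of the noun "even number": a procedure for recognizing instances.
def is_even_number(x):
    return isinstance(x, int) and x % 2 == 0

# The "meaning" of the proposition "4 is an even number": a procedure for
# determining whether it is true or false.
def four_is_even():
    return is_even_number(4)

# The "meaning" of the action "increment the counter": the ability to do it
# and to tell whether it has been done.
counter = {"value": 0}

def increment_counter():
    counter["value"] += 1

def counter_was_incremented(previous_value):
    return counter["value"] == previous_value + 1

previous_value = counter["value"]
increment_counter()
print(is_even_number(10), four_is_even(), counter_was_incremented(previous_value))
# True True True
```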
Consciousness
The necessity of groundedness, in other words, takes us from the level of the pen-pal Turing test, which is purely symbolic (computational), to the robotic Turing test, which is hybrid symbolic/sensorimotor.[15][16] Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to (see entries for Affordance and for Categorical perception). On the other hand, if the symbols (words and sentences) refer to the very bits of '0' and '1' that are directly connected to their electronic implementations, which any computer system can readily manipulate (and thus detect, categorize, identify and act upon), then even non-robotic computer systems could be said to be "sensorimotor" and hence able to "ground" symbols in this narrow domain.
To categorize is to do the right thing with the right kind of thing. The categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the nonmembers. These feature-detectors must either be inborn or learned. The learning can be based on trial-and-error induction, guided by feedback from the consequences of correct and incorrect categorization; or, in our own linguistic species, the learning can also be based on verbal descriptions or definitions. The description or definition of a new category, however, can only convey the category and ground its name if the words in the definition are themselves already grounded category names.[17] According to Harnad, ultimately grounding has to be sensorimotor, to avoid infinite regress.[18]
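The trial-and-error route to a learned feature detector can be sketched with a simple error-corrective learner. The example below is a hypothetical illustration (a perceptron-style rule, not an algorithm specified in the cited sources); the "sensorimotor" features and the category are invented.

```python
# A minimal sketch (hypothetical example) of learning a feature detector for a
# category by trial and error, guided by feedback on correct and incorrect
# categorizations -- an error-corrective rule over "sensorimotor" features.

def learn_category(samples, labels, epochs=50, learning_rate=0.1):
    """samples: list of feature vectors; labels: 1 (member) or 0 (non-member)."""
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in zip(samples, labels):
            activation = sum(w * f for w, f in zip(weights, features)) + bias
            prediction = 1 if activation > 0 else 0
            error = label - prediction  # feedback from the categorization outcome
            weights = [w + learning_rate * error * f for w, f in zip(weights, features)]
            bias += learning_rate * error
    return weights, bias

# Invented "sensorimotor" features: [has_stripes, has_four_legs]
samples = [[1, 1], [1, 0], [0, 1], [0, 0]]
labels  = [1,      0,      0,      0]      # membership requires both features
weights, bias = learn_category(samples, labels)
detect = lambda f: 1 if sum(w * x for w, x in zip(weights, f)) + bias > 0 else 0
print([detect(f) for f in samples])  # expected: [1, 0, 0, 0]
```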
Harnad thus points at consciousness as a second property. The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science. But the problem of explaining how consciousness could play an "independent" role in doing so is probably insoluble, except on pain of telekinetic dualism. Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present, but then again, perhaps not. In either case, there is no way we can hope to be any the wiser—and that is Turing's methodological point.[19][20]
See also
References
- ^ Vogt, Paul. "Language evolution and robotics: issues on symbol grounding and language acquisition." Artificial cognition systems. IGI Global, 2007. 176–209.
- ^ a b c Harnad 1990.
- ^ Harnad 2001a.
- ^ Searle 1980.
- ^ Frege 1952.
- ^ Peirce, Charles S. The philosophy of Peirce: selected writings. New York: AMS Press, 1978.
- ^ Semeiosis and Intentionality T. L. Short Transactions of the Charles S. Peirce Society Vol. 17, No. 3 (Summer, 1981), pp. 197–223
- ^ C.S. Peirce and artificial intelligence: historical heritage and (new) theoretical stakes; Pierre Steiner; SAPERE – Special Issue on Philosophy and Theory of AI 5:265–276 (2013)
- ^ This is the causal, contextual theory of reference that Ogden & Richards set out in The Meaning of Meaning (1923).
- ^ Cf. semantic externalism as claimed in "The Meaning of 'Meaning'" in Mind, Language and Reality (1975) by Putnam, who argues: "Meanings just ain't in the head." He and Dummett later seem to favor anti-realism, drawing on intuitionism, psychologism, constructivism and contextualism.
- ^ Fodor 1975.
- ^ Cangelosi & Harnad 2001.
- ^ Philip N. Johnson-Laird "Procedural Semantics" (Cognition, 5 (1977) 189; see http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/procedural.pdf)
- ^ William A. Woods. "Meaning and Links" (AI Magazine Volume 28 Number 4 (2007); see http://www.aaai.org/ojs/index.php/aimagazine/article/view/2069/2056)
- ^ Harnad 2000.
- ^ Harnad 2007.
- ^ Blondin-Massé 2008.
- ^ Harnad 2005.
- ^ Harnad 2001b.
- ^ Harnad 2003.
Works cited
- Belpaeme, Tony; Cowley, Stephen John; MacDorman, Karl F., eds. (2009). Symbol Grounding. Netherlands: John Benjamins Publishing Company. ISBN 978-9027222510.
- Blondin-Massé, A.; et al. (18–22 August 2008). How Is Meaning Grounded in Dictionary Definitions?. TextGraphs-3 Workshop, 22nd International Conference on Computational Linguistics, Coling 2008. Manchester. arXiv:0806.3710.
- Cangelosi, A.; Harnad, S. (2001). "The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories". Evolution of Communication. 4 (1): 117–142. doi:10.1075/eoc.4.1.07can. hdl:10026.1/3619. S2CID 15837328.
- Fodor, J. A. (1975). The Language of Thought. New York: Thomas Y. Crowell.
- Frege, G. (1952) [1892]. "On sense and reference". In Geach, P.; Black, M. (eds.). Translations of the Philosophical Writings of Gottlob Frege. Oxford: Blackwell.
- Harnad, S. (1990). "The Symbol Grounding Problem". Physica D. 42 (1–3): 335–346. arXiv:cs/9906002. Bibcode:1990PhyD...42..335H. doi:10.1016/0167-2789(90)90087-6. S2CID 3204300.
- Harnad, S. (2000). "Minds, Machines and Turing: The Indistinguishability of Indistinguishables". Journal of Logic, Language and Information. 9 (4): 425–445. doi:10.1023/A:1008315308862. S2CID 1911720. Special Issue on "Alan Turing and Artificial Intelligence"
- Harnad, S (2001a). "Minds, Machines and Searle II: What's Wrong and Right About Searle's Chinese Room Argument?". In Bishop, M.; Preston, J. (eds.). Essays on Searle's Chinese Room Argument. Oxford University Press.
- Harnad, S. (2001b). "No Easy Way Out". The Sciences. 41 (2): 36–42. doi:10.1002/j.2326-1951.2001.tb03561.x.
- Harnad, S. (2003). "Can a Machine Be Conscious? How?". Journal of Consciousness Studies. 10 (4–5): 69–75.
- Harnad, S. (2005). "To Cognize is to Categorize: Cognition is categorization". In Lefebvre, C.; Cohen, H. (eds.). Handbook of Categorization. Elsevier.
- Harnad, S. (2007). "The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence". In Epstein, Robert; Peters, Grace (eds.). The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Kluwer.
- Searle, John R. (1980). "Minds, brains, and programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756. S2CID 55303721. Archived from the original (PDF) on 23 September 2015.
Further reading
- Cangelosi, A.; Greco, A.; Harnad, S. (2000). From robotic toil to symbolic theft: grounding transfer from entry-level to higher-level categories. Connection Science, 12(2), 143–162.
- MacDorman, Karl F. (1999). Grounding symbols through sensorimotor integration. Journal of the Robotics Society of Japan, 17(1), 20–24. Online version
- MacDorman, Karl F. (2007). Life after the symbol system metaphor. Interaction Studies, 8(1), 143–158. Online version
- Pylyshyn, Z. W. (1984). Computation and Cognition. Cambridge MA: MIT/Bradford.
- Taddeo, Mariarosaria & Floridi, Luciano (2005). The symbol grounding problem: A critical review of fifteen years of research. Journal of Experimental and Theoretical Artificial Intelligence, 17(4), 419–445. Online version
- Turing, A. M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433–460. [Reprinted in Minds and machines, A. Anderson (ed.), Englewood Cliffs, NJ: Prentice Hall, 1964.]