Symbol grounding problem

From Wikipedia, the free encyclopedia

Revision as of 17:08, 11 September 2023

The symbol grounding problem is the problem of how symbol meaning is "to be grounded in something other than just more meaningless symbols" (Harnad 1990). The problem is of significant importance in philosophy, cognitive science, and the study of language.

In cognitive science and semantics, the symbol grounding problem is concerned with how it is that words (symbols in general) get their meanings,[1] and hence is closely related to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of how it is that mental states are meaningful, hence to the problem of consciousness: what is the connection between certain physical systems and the contents of subjective experiences.

The symbol grounding problem

In his 1990 paper, Harnad implicitly offers several further formulations of the symbol grounding problem:

  1. The symbol grounding problem is the problem of how to make the "semantic interpretation of a formal symbol system" "intrinsic to the system, rather than just parasitic on the meanings in our heads" (Harnad 1990).
  2. The symbol grounding problem is the problem of how "the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes" can be grounded "in anything but other meaningless symbols."
  3. "[T]he symbol grounding problem is referred to as the problem of intrinsic meaning (or 'intentionality') in Searle's (1980) celebrated 'Chinese Room Argument'" (Harnad 1990).
  4. The symbol grounding problem is the problem of how one can "ever get off the symbol/symbol merry-go-round."

Symbol system

In the same 1990 paper, Harnad defines a "symbol system", relative to the symbol grounding problem, as "a set of arbitrary 'physical tokens', scratches on paper, holes on a tape, events in a digital computer, etc. that are ... manipulated on the basis of 'explicit rules' that are ... likewise physical tokens and strings of tokens."
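Harnad's definition can be made concrete with a toy sketch: a handful of arbitrary tokens manipulated by explicit, purely shape-based rewrite rules. The tokens and rules below are invented for illustration, and Python is used only as notation for a minimal such system.

```python
# A toy formal symbol system: arbitrary tokens manipulated by explicit,
# purely shape-based rewrite rules. Nothing in the system refers to
# anything outside it; the rules match token shapes and nothing else.

RULES = [
    (("A", "B"), ("B",)),   # the substring A B may be rewritten as B
    (("B", "B"), ("A",)),   # the substring B B may be rewritten as A
]

def rewrite_once(tokens):
    """Apply the first rule whose left-hand side matches anywhere."""
    for lhs, rhs in RULES:
        n = len(lhs)
        for i in range(len(tokens) - n + 1):
            if tuple(tokens[i:i + n]) == lhs:
                return tokens[:i] + list(rhs) + tokens[i + n:]
    return tokens  # no rule applies: the string is in normal form

def normalize(tokens):
    """Rewrite until no rule applies."""
    while True:
        new = rewrite_once(tokens)
        if new == tokens:
            return tokens
        tokens = new

print(normalize(["A", "B", "B"]))  # → ['A']
```

Whether "A" and "B" mean anything is entirely up to an external interpreter; the system itself trades only in shapes, which is exactly what the grounding problem turns on.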

Formality of symbols

Since Harnad holds that the symbol grounding problem is exemplified in John R. Searle's Chinese room argument, the sense of "formal" that applies to symbols in a formal symbol system can be taken from Searle's 1980 article "Minds, brains, and programs," in which the argument is presented:

  • "...all that 'formal' means here is that I can identify the symbols entirely by their shapes."

Assumed meaninglessness of symbols

Relative to the Chinese Room argument

In his 1990 paper, Harnad assumes that symbols are intrinsically meaningless. This stands in contrast to a remark in the same paper about "...the fact that our own symbols do have intrinsic meaning," which is better read as "if it is the fact that...", since Harnad offers no criterion by which the intrinsic meaning of any symbol could be known; if he had, the symbol grounding problem would be resolved for at least some symbols, and it has not been. Furthermore, according to Harnad in the same paper, the symbol grounding problem is exemplified in Searle's Chinese Room argument:

  • In the Chinese Room argument, the computer that takes Chinese-language input and produces Chinese-language output is postulated to have no reference point (other than the interpreting program itself) with which to "understand" the Chinese it processes; the symbols are therefore meaningless to it. The same is postulated for any human who takes the place of the language-interpreting computer.

Relative to the problem of the criterion

According to Gordon Seidoh Worley's January 2021 article "The Problem of the Criterion" on LessWrong, the symbol grounding problem is "...essentially the same problem, generalized" as the problem of the criterion, and is related to metaphysical grounding.

Even if the computer's interpretive program in the Chinese Room argument could be inferred (perhaps using Solomonoff induction and the scientific method as a metaphorical Rosetta stone) as a function of an equation, in an effort to develop a criterion for accurately interpreting Chinese input into one or more languages of interest, that criterion would itself need a foundation (perhaps a human in the loop) for its interpretation to count as meaningful, unless the criterion could somehow exist independent of a grounding criterion for itself. This threatened infinite regress is what connects the symbol grounding problem to the problem of the criterion.

As may be interpreted, if there is no criterion by which to know the intrinsic meaning of a symbol, then no one can know the intrinsic meaning of a symbol. Presumably, if intrinsic meaning existed as an attribute of a symbol, a criterion for knowing it could be derived from that intrinsic meaning itself, and could then be generalized to deduce the meaning of other, similar symbols. To date, however, no such criterion for deducing the intrinsic meaning of a symbol has been discovered.

Furthermore, no resolution of the problem of the criterion is known to exist or to have existed. This invites the inference that it may not be possible to deduce the intrinsic meaning of any symbol, and further that this may be because there is no intrinsic meaning to be had: if no intrinsic meaning, and hence no inherent grounding, belongs to the symbol in the first place, there is nothing to deduce and nothing to serve as a criterion.

Without a criterion to act as an accurate reference point, or "grounding," for the meaningful interpretation of symbols, symbols are assumed to be intrinsically meaningless. Moreover, the existence of a criterion for deriving the intrinsic meaning of a symbol presupposes that the symbol has intrinsic meaning in the first place. Any assumed resolution of the problem of the criterion would likewise amount to grounding that resolution in "meaningless symbols," because its foundation would lack known justification, just as methodist and particularist resolutions of the problem of the criterion can be charged with begging the question.

Background

Referents

Gottlob Frege distinguished between a word's referent (the thing it refers to) and the word's meaning. This is most clearly illustrated using the proper names of concrete individuals, but it is also true of names of kinds of things and of abstract properties: (1) "Tony Blair", (2) "the prime minister of the UK during the year 2004", and (3) "Cherie Blair's husband" all have the same referent, but not the same meaning.[2]

Some have suggested that the meaning of a (referring) word is the rule or features that one must use in order to successfully pick out its referent. In that respect, (2) and (3) come closer to wearing their meanings on their sleeves, because they are explicitly stating a rule for picking out their referents: "Find whoever was prime minister of the UK during the year 2004", or "find whoever is Cherie's current husband". But that does not settle the matter, because there's still the problem of the meaning of the components of that rule ("prime minister", "UK", "during", "current", "Cherie", "husband"), and how to pick them out.

The phrase "Tony Blair" (or better still, just "Tony") does not have this recursive component problem, because it points straight to its referent, but how? If the meaning is the rule for picking out the referent, what is that rule, when we come down to non-decomposable components like proper names of individuals (or names of kinds, as in "an unmarried man" is a "bachelor")?

Referential process

Humans are able to infer the intended referents of words such as "Tony Blair" or "bachelor," but this process need not be explicit, and it is probably unreasonable to expect that the explicit rule for picking out intended referents can be stated[why?].

So if we take a word's meaning to be the means of picking out its referent, then meanings are in our brains. That is meaning in the narrow sense. If we use "meaning" in a wider sense, then we may want to say that meanings include both the referents themselves and the means of picking them out. So if a word (say, "Tony-Blair") is located inside an entity (e.g., oneself) that can use the word and pick out its referent, then the word's wide meaning consists of both the means that that entity uses to pick out its referent, and the referent itself: a wide causal nexus between (1) a head, (2) a word inside it, (3) an object outside it, and (4) whatever "processing" is required in order to successfully connect the inner word to the outer object.

But what if the "entity" in which a word is located is not a head but a piece of paper (or a computer screen)? What is its meaning then? Surely all the (referring) words on this screen, for example, have meanings, just as they have referents.

In the 19th century, philosopher Charles Sanders Peirce suggested what some think is a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, (3) an object, and is (4) the virtual product of an endless regress and progress called semiosis.[3] Some have interpreted Peirce as addressing the problem of grounding, feelings, and intentionality for the understanding of semiotic processes.[4] In recent years, Peirce's theory of signs has been rediscovered by an increasing number of artificial intelligence researchers in the context of the symbol grounding problem.[5]

Grounding process

There would be no connection at all between written symbols and any intended referents if there were no minds mediating those intentions, via their own internal means of picking out those intended referents. So the meaning of a word on a page is "ungrounded." Nor would looking it up in a dictionary help: If one tried to look up the meaning of a word one did not understand in a dictionary of a language one did not already understand, one would just cycle endlessly from one meaningless definition to another. One's search for meaning would be ungrounded. In contrast, the meaning of the words in one's head—those words one does understand—are "grounded".[citation needed] That mental grounding of the meanings of words mediates between the words on any external page one reads (and understands) and the external objects to which those words refer.[6][7]
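The dictionary regress can be sketched concretely: in a miniature dictionary where every word is defined only by other words, following definitions never bottoms out in anything non-symbolic, and a lookup eventually revisits a word already on its path. The words and definitions below are invented for illustration.

```python
# A miniature dictionary in which every word is defined only in terms
# of other words. Following definitions depth-first either dead-ends
# at an undefined word or cycles back to a word already visited.

DICTIONARY = {
    "bachelor":  ["unmarried", "man"],
    "unmarried": ["not", "married"],
    "married":   ["having", "spouse"],
    "spouse":    ["married", "partner"],
}

def find_cycle(word, path=()):
    """Follow definitions depth-first; return the first cycle found."""
    if word in path:
        return path[path.index(word):] + (word,)
    if word not in DICTIONARY:
        return None  # undefined word: the search simply stops, ungrounded
    for w in DICTIONARY[word]:
        cycle = find_cycle(w, path + (word,))
        if cycle:
            return cycle
    return None

print(find_cycle("bachelor"))  # → ('married', 'spouse', 'married')
```

The point of the sketch is that nothing inside the dictionary can break the loop: grounding has to come from outside the symbol system.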

Requirements for symbol grounding

Another symbol system is natural language.[8] On paper or in a computer, language, too, is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. But in the brain, meaningless strings of squiggles become meaningful thoughts. Harnad has suggested two properties that might be required to make this difference:[citation needed]

  1. Capacity to pick out referents
  2. Consciousness

Capacity to pick out referents

One property that static paper or, usually, even a dynamic computer lack that the brain possesses is the capacity to pick out symbols' referents. This is what we were discussing earlier, and it is what the hitherto undefined term "grounding" refers to. A symbol system alone, whether static or dynamic, cannot have this capacity (any more than a book can), because picking out referents is not just a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property.

To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities—the capacity to interact autonomously with that world of objects, events, actions, properties and states that its symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.

The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. Just the symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains when we think meaningful thoughts.[9]

Meaning as the ability to recognize instances (of objects) or perform actions is specifically treated in the paradigm called "Procedural Semantics", described in a number of papers including "Procedural Semantics" by Philip N. Johnson-Laird[10] and expanded by William A. Woods in "Meaning and Links".[11] A brief summary in Woods' paper reads: "The idea of procedural semantics is that the semantics of natural language sentences can be characterized in a formalism whose meanings are defined by abstract procedures that a computer (or a person) can either execute or reason about. In this theory the meaning of a noun is a procedure for recognizing or generating instances, the meaning of a proposition is a procedure for determining if it is true or false, and the meaning of an action is the ability to do the action or to tell if it has been done."
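Woods' summary can be sketched directly: a noun's meaning as a procedure for recognizing instances, and a proposition's meaning as a procedure for determining truth. The toy world, names, and predicates below are invented for illustration.

```python
# Procedural-semantics sketch: meanings as executable procedures over
# a toy world of individuals (invented data).

world = [
    {"name": "Alice", "married": False, "sex": "F"},
    {"name": "Bob",   "married": False, "sex": "M"},
    {"name": "Carol", "married": True,  "sex": "F"},
]

# Meaning of the noun "bachelor": a procedure for recognizing instances.
def bachelor(x):
    return x["sex"] == "M" and not x["married"]

# Meaning of a proposition: a procedure for deciding whether it is true.
def some_bachelor_exists(world):
    return any(bachelor(x) for x in world)

print([x["name"] for x in world if bachelor(x)])  # → ['Bob']
print(some_bachelor_exists(world))                # → True
```

Note that this only relocates the grounding question: the procedure `bachelor` is grounded in the toy world's data structures, not in sensorimotor contact with actual bachelors.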

Consciousness

The necessity of groundedness, in other words, takes us from the level of the pen-pal Turing test, which is purely symbolic (computational), to the robotic Turing test, which is hybrid symbolic/sensorimotor (Harnad 2000, 2007). Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to (see entries for Affordance and for Categorical perception). On the other hand, if the symbols (words and sentences) refer to the very bits of '0' and '1', directly connected to their electronic implementations, which a (any?) computer system can readily manipulate (thus detect, categorize, identify and act upon), then even non-robotic computer systems could be said to be "sensorimotor" and hence able to "ground" symbols in this narrow domain.

To categorize is to do the right thing with the right kind of thing. The categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the nonmembers. These feature-detectors must either be inborn or learned. The learning can be based on trial-and-error induction, guided by feedback from the consequences of correct and incorrect categorization; or, in our own linguistic species, the learning can also be based on verbal descriptions or definitions. The description or definition of a new category, however, can only convey the category and ground its name if the words in the definition are themselves already grounded category names.[12] So ultimately grounding has to be sensorimotor, to avoid infinite regress (Harnad 2005).
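Trial-and-error learning of a feature detector, guided by feedback from the consequences of correct and incorrect categorization, can be sketched with a minimal perceptron-style learner. The features and category labels below are invented toy data.

```python
# Minimal trial-and-error category learning: a perceptron-style learner
# adjusts a feature detector using corrective feedback after each trial.

samples = [  # (sensorimotor features, is a category member)
    ((1.0, 0.0), True),
    ((0.9, 0.2), True),
    ((0.1, 1.0), False),
    ((0.0, 0.8), False),
]

w = [0.0, 0.0]  # feature weights
b = 0.0         # bias
for _ in range(20):                       # repeated trials
    for (f1, f2), member in samples:
        guess = w[0] * f1 + w[1] * f2 + b > 0
        error = int(member) - int(guess)  # feedback from consequences
        w[0] += 0.1 * error * f1
        w[1] += 0.1 * error * f2
        b    += 0.1 * error

predictions = [w[0] * f1 + w[1] * f2 + b > 0 for (f1, f2), _ in samples]
print(predictions)  # → [True, True, False, False]
```

After training, the learned weights constitute a feature detector that reliably distinguishes members from nonmembers of this toy category; the category name it would ground is whatever symbol the system attaches to that detector.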

But if groundedness is a necessary condition for meaning, is it a sufficient one? Not necessarily, for it is possible that even a robot that could pass the Turing test, "living" amongst the rest of us indistinguishably for a lifetime, would fail to have in its head what Searle has in his: It could be a p-zombie, with no one home, feeling feelings, meaning meanings (Harnad 1995). However, it is possible that different interpreters (including different intelligent species of animals) would have different mechanisms for producing meaning in their systems, thus one cannot require that a system different from a human "experiences" meaning in the same way that a human does, and vice versa.

Harnad thus points at consciousness as a second property. The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science. But the problem of explaining how consciousness could play an "independent" role in doing so is probably insoluble, except on pain of telekinetic dualism. Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present, but then again, perhaps not. In either case, there is no way we can hope to be any the wiser—and that is Turing's methodological point (Harnad 2001b, 2003, 2006).

Formulation

To answer the question of whether or not groundedness is a necessary condition for meaning, a formulation of the symbol grounding problem is required: it is the problem of how to make the "semantic interpretation of a formal symbol system" "intrinsic to the system, rather than just parasitic on the meanings in our heads," grounded "in anything but other meaningless symbols" (Harnad 1990).

Functionalism

There is a school of thought according to which the computer is more like the brain—or rather, the brain is more like the computer. According to this view (called "computationalism", a variety of functionalism), the future theory explaining how the brain picks out its referents (the theory that cognitive neuroscience may eventually arrive at) will be a purely computational one (Pylyshyn 1984). A computational theory is a theory at the software level. It is essentially a computer algorithm: a set of rules for manipulating symbols. And the algorithm is "implementation-independent." That means that whatever it is that an algorithm is doing, it will do the same thing no matter what hardware it is executed on. The physical details of the dynamical system implementing the computation are irrelevant to the computation itself, which is purely formal; any hardware that can run the computation will do, and all physical implementations of that particular computer algorithm are equivalent, computationally.
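Implementation-independence can be loosely illustrated within software itself: two different realizations of the same algorithmic task (an invented toy example, computing factorials) have identical input/output behavior, so at the computational level of description they count as the same, even though their underlying mechanics differ. This is only an analogy for different hardware realizations, not a demonstration of the computationalist thesis.

```python
# Two different "implementations" with the same computational behavior.
# At the level of input/output mapping they are indistinguishable,
# even though the step-by-step mechanics (recursion vs. iteration) differ.

def factorial_recursive(n):
    return 1 if n == 0 else n * factorial_recursive(n - 1)

def factorial_iterative(n):
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# The two realizations agree on every input tested.
assert all(factorial_recursive(n) == factorial_iterative(n)
           for n in range(10))
print(factorial_iterative(5))  # → 120
```

Computationalism claims that meaning, like the factorial mapping here, depends only on this level of description and not on the physical substrate that realizes it.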

A universal computer can execute any computation. Hence once computationalism finds a proper computer algorithm, one that our brain could be running when there is meaning transpiring in our heads, meaning will be transpiring in that computer too, when it implements that algorithm.

How would we know that we have a proper computer algorithm? It would have to be able to pass the Turing test. That means it would have to be capable of corresponding with any human being as a pen-pal, for a lifetime, without ever being in any way distinguishable from a real human pen-pal.

Searle's Chinese room argument

John Searle formulated the "Chinese room argument" in order to disprove computationalism.[citation needed] The Chinese room argument is based on a thought experiment: in it, Searle stated that if the Turing test were conducted in Chinese, then he himself, Searle (who does not understand Chinese), could execute a program that implements the same algorithm that the computer was using without knowing what any of the words he was manipulating meant.

At first glance, it would seem that if there's no meaning going on inside Searle's head when he is implementing that program, then there's no meaning going on inside the computer when it is the one implementing the algorithm either, computation being implementation-independent. But on a closer look, for a person to execute the same program that a computer would, at the very least he would have to have access to a similar bank of memory that the computer has (most likely externally stored). This means that the new computational system that executes the same algorithm is no longer just Searle's original head, but that plus the memory bank (and possibly other devices). In particular, this additional memory could store a digital representation of the intended referent of different words (like images, sounds, even video sequences), that the algorithm would use as a model of, and to derive features associated with, the intended referent. The "meaning" then is not to be searched for in just Searle's original brain, but in the overall system needed to process the algorithm. Just as when Searle is reading English words, the meaning is not located in isolated logical processing areas of the brain, but probably in the overall brain, likely including specific long-term memory areas. Thus, Searle's not perceiving any meaning in his head alone when simulating the work of a computer does not imply lack of meaning in the overall system, and thus in the actual computer system passing an advanced Turing test.
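The suggestion that the system's memory bank could store representations of intended referents can be sketched as follows. The symbols, feature vectors, and matching rule are all invented for illustration; real perceptual representations would be far richer.

```python
# Sketch of the "overall system" idea: the rule-follower alone has only
# symbol shapes, but the system's memory bank also stores non-symbolic
# (toy) perceptual representations associated with each symbol, which
# the algorithm can match against new perceptual input.

memory_bank = {
    "horse-symbol": (0.9, 0.1, 0.0),   # stored toy "image" features
    "tree-symbol":  (0.1, 0.9, 0.2),
}

def closest_symbol(percept):
    """Pick the symbol whose stored representation best matches a percept."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(memory_bank, key=lambda s: dist(memory_bank[s], percept))

print(closest_symbol((0.85, 0.15, 0.05)))  # → horse-symbol
```

On the systems reply, it is this whole arrangement (rules plus stored representations plus matching machinery), not the rule-follower alone, that would be the candidate bearer of meaning.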

Implications

How does Searle know that there is no meaning going on in his head when he is executing such a Turing-test-passing program? Exactly the same way he knows whether there is or is not meaning going on inside his head under any other conditions: He understands the words of English, whereas the Chinese symbols that he is manipulating according to the algorithm's rules mean nothing whatsoever to him (and there is no one else in his head for them to mean anything to). However, the complete system that is manipulating those Chinese symbols – which is not just Searle's brain, as explained in the previous section – may have the ability to extract meaning from those symbols, in the sense of being able to use internal (memory) models of the intended referents, pick out the intended referents of those symbols, and generally identify and use their features appropriately.

Note that in pointing out that the Chinese words would be meaningless to him under those conditions, Searle has appealed to consciousness. Otherwise one could argue that there would be meaning going on in Searle's head under those conditions, but that Searle himself would simply not be conscious of it. That is called the "Systems Reply" to Searle's Chinese Room Argument, and Searle rejects the Systems Reply as being merely a reiteration, in the face of negative evidence, of the very thesis (computationalism) that is on trial in his thought-experiment. The question at issue is this: "Are words in a running computation like the ungrounded words on a page, meaningless without the mediation of brains, or are they like the grounded words in brains?"

In this either/or question, the (still undefined) word "ungrounded" has implicitly relied on the difference between inert words on a page and consciously meaningful words in our heads. And Searle is asserting that under these conditions (the Chinese Turing test), the words in his head would not be consciously meaningful, hence they would still be as ungrounded as the inert words on a page.

So if Searle is right, that (1) both the words on a page and those in any running computer program (including a Turing-test-passing computer program) are meaningless in and of themselves, and hence that (2) whatever it is that the brain is doing to generate meaning can't be just implementation-independent computation, then what is the brain doing to generate meaning (Harnad 2001a)?

Notes

  1. ^ Vogt, Paul. "Language evolution and robotics: issues on symbol grounding and language acquisition." Artificial cognition systems. IGI Global, 2007. 176–209.
  2. ^ Although this article draws in places upon Frege's view of semantics, it is very anti-Fregean in stance. Frege was a fierce critic of psychological accounts that attempt to explain meaning in terms of mental states.
  3. ^ Peirce, Charles S. The philosophy of Peirce: selected writings. New York: AMS Press, 1978.
  4. ^ T. L. Short, "Semeiosis and Intentionality", Transactions of the Charles S. Peirce Society, Vol. 17, No. 3 (Summer 1981), pp. 197–223.
  5. ^ Pierre Steiner, "C.S. Peirce and Artificial Intelligence: Historical Heritage and (New) Theoretical Stakes", SAPERE – Special Issue on Philosophy and Theory of AI 5:265–276 (2013).
  6. ^ This is the causal, contextual theory of reference that Ogden & Richards packed in The Meaning of Meaning (1923).
  7. ^ Cf. semantic externalism as claimed in "The Meaning of 'Meaning'" of Mind, Language and Reality (1975) by Putnam, who argues: "Meanings just ain't in the head." Putnam and Dummett later came to favor anti-realism, along with intuitionism, psychologism, constructivism and contextualism.
  8. ^ Fodor 1975.
  9. ^ Cangelosi & Harnad 2001.
  10. ^ Philip N. Johnson-Laird, "Procedural Semantics", Cognition 5 (1977): 189; see http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/procedural.pdf
  11. ^ William A. Woods, "Meaning and Links", AI Magazine, Volume 28, Number 4 (2007); see http://www.aaai.org/ojs/index.php/aimagazine/article/view/2069/2056
  12. ^ Blondin-Massé 2008.

Works cited

  • Blondin-Massé, A.; et al. (18–22 August 2008). How Is Meaning Grounded in Dictionary Definitions?. TextGraphs-3 Workshop, 22nd International Conference on Computational Linguistics, Coling 2008. Manchester. arXiv:0806.3710.
  • Cangelosi, A.; Harnad, S. (2001). "The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories". Evolution of Communication. 4 (1): 117–142.
  • Fodor, J. A. (1975). The Language of Thought. New York: Thomas Y. Crowell.
