Call for replies to the IEEE-CIS-AMD Dialogue Initiation on Language and Sensorimotor skill acquisition.
Submission deadline: March 30, 2014
The Fall 2013 issue of the IEEE CIS Newsletter on Autonomous Mental Development is out! This is the biannual newsletter of the computational developmental sciences and developmental robotics community, which studies mechanisms of lifelong learning and development in machines and humans. A new dialogue has been initiated by Katerina Pastra: Autonomous Acquisition of Sensorimotor Experiences: Any Role for Language?
This new dialogue formulates a bold hypothesis: language as a communication system may have evolved as a byproduct of language as a tool for (self-)organizing conceptual structures. Those of you interested in reacting to this dialogue initiation are welcome to submit a response (contact email@example.com) by March 30, 2014. Each response must be between 600 and 800 words (including references).
Dialogue initiation full text (pp. 13-14):
Self-exploration of the world starts with the very first body movements, even from within the womb. As the motor system develops, such exploration becomes more complex and more efficient. It also becomes more multisensory, as perceptual abilities develop rapidly too. However, some percepts have a special, symbolic status: speech, for example, is also present during self-exploration of the world, and infants are attentive to it and affected by it from the very first months of life (Waxman et al. 2010). Beyond the traditional role of verbal communication in expressing intention and passing on knowledge and information, does language play any other role in this context? Does it affect, facilitate, or enable this exploration of the world? If so, how? Could verbal communication be an epiphenomenon of more basic functions served by language?
Recent years have seen a growing body of experimental evidence suggesting a tight relation between language, perception, and action. Part of this evidence sheds light on the role of the (visuo)motor system in language comprehension. For example, motor circuits of the brain have been shown to contribute to the comprehension of phonemes, semantic categories, and grammar (Pulvermüller and Fadiga 2010). Motor simulation has been found to be activated during language comprehension (Glenberg 2008). At a computational level, there is a large body of research on automatic action-language association (Pastra and Wilks 2004, Pastra 2008), in both intelligent multimedia systems and robotics. This research addresses the semantic gap between low-level processes and high-level analyses; its philosophical manifestation is the symbol grounding problem and the related debate on the need for artificial agents to ground symbols in sensorimotor experience in order to 'grasp' the meaning of the language they analyse or generate (Cangelosi 2010).
However, is such mapping needed only for efficient communication with others? Is it merely a sign of truly knowing the meaning of symbols and words? Is the relation between language and the motor system merely one-directional? What does language contribute to the (visuo)motor system, if anything?
There is growing evidence that language contributes significantly to structuring sensorimotor experiences. In particular, it has been shown that in perceptual category formation, infants readily compute correlations between different modalities (Plunkett et al. 2008). For instance, they correlate the name/label of an object with its visual appearance. This dual category representation (i.e. linguistic and visual) entails that verbal categories (of concrete concepts) comprise members with perceptual similarity.
Indeed, dual category representation creates expectations when a new object is perceived or a known label is used. Familiar labels create expectations about the visual appearance of the objects they apply to, and so allow inferences on the basis of the known label; this has not been shown to be the case when a novel verbal label is used (in the latter case, inferences are based on appearance alone) (Smith et al. 2002). Furthermore, infants generalise familiar labels to object categories according to specific perceptual properties, and there is a universal tendency to do so: from the naming of single object instances to the generalisation of names of different kinds according to different perceptual properties (Smith et al. 2010). Moreover, developmental studies have indicated that when verbal labels are applied as a system (e.g. two different labels naming two different objects), they facilitate object discrimination, which is not the case with non-verbal labels such as tones, sounds, and emotions (Lupyan et al. 2007). This was shown for infants as young as 3 months old (Waxman et al. 2010). So, verbal categories (of concrete concepts) have distinctive perceptual characteristics, which allow one category to be distinct in its denotation from another.
In fact, verbal labels per se have been shown to impose distinctiveness even in cases where perceptual similarity is inconclusive, as a sole criterion, for categorising an object into a familiar category. In experiments with 10-month-old infants, the use of verbal labels was shown to affect the categorisation of animal cartoon drawings to the extent that infants overrode perceptual dissimilarities between objects and treated them as more similar to each other (Plunkett et al. 2008). In this case, language was shown to play a causal role in perceptual category formation during infancy.
So, what does naming (verbal labelling) of sensorimotor experiences enable? Is it just a communication mechanism? Is communication a byproduct of a more evolutionarily basic function of language?
Addressing such questions can shed new light on language analysis itself, as well as on the development of cognitive, artificial agents.
Cangelosi, A. (2010) 'Grounding Language in Action and Perception: From Cognitive Agents to Humanoid Robots', Physics of Life Reviews, vol. 7, no. 2, pp. 139-151.
Glenberg, A. (2008) 'Toward the Integration of Bodily States, Language and Action', in Semin, G. and Smith, E. (eds), Embodied Grounding: Social, Cognitive and Neuroscientific Approaches, ch. 2, pp. 43-70, Cambridge University Press.
Lupyan, G., Rakison, D. and McClelland, J. (2007) 'Language Is Not Just for Talking: Redundant Labels Facilitate Learning of Novel Categories', Psychological Science, vol. 18, pp. 1077-1083.
Pastra, K. (2008) 'PRAXICON: The Development of a Grounding Resource', in Proceedings of the 4th International Workshop on Human-Computer Conversation, Bellagio, Italy.
Pastra, K. and Wilks, Y. (2004) 'Vision-Language Integration in AI: A Reality Check', in Proceedings of the 16th European Conference on Artificial Intelligence (ECAI), pp. 937-941, Valencia, Spain.
Plunkett, K., Hu, J. and Cohen, L. (2008) 'Labels Can Override Perceptual Categories in Early Infancy', Cognition, vol. 106, pp. 665-681.
Pulvermüller, F. and Fadiga, L. (2010) 'Active Perception: Sensorimotor Circuits as a Cortical Basis for Language', Nature Reviews Neuroscience, vol. 11, no. 5, pp. 351-360.
Smith, L., Colunga, E. and Yoshida, H. (2002) 'Making an Ontology: Cross-Linguistic Evidence', in Early Category and Concept Development, pp. 275-302, Cambridge University Press.
Smith, L., Colunga, E. and Yoshida, H. (2010) 'Knowledge as Process: Contextually Cued Attention and Early Word Learning', Cognitive Science, pp. 1-28.
Waxman, S. (2010) 'Categorization in 3- and 4-Month-Old Infants: An Advantage of Words over Tones', Child Development, vol. 81, pp. 472-479.