


POETICON's main objective is:  

The creation of the PRAXICON, an extensible computational resource that associates symbolic representations (words/concepts) with corresponding sensorimotor representations and is enriched with information on patterns among these representations for forming conceptual structures.

The PRAXICON is envisaged as a unique computational resource that will allow artificial agents and systems to:

a) tie concepts/words of different levels of abstraction to their sensorimotor instantiations (thus catering for disambiguation), and

b) untie sensorimotor representations from their physical specificities, correlating them with conceptual structures of different levels of abstraction (thus catering for intentionality indication).

In other words, going bottom-up through the resource (from sensorimotor representations to concepts) one obtains a hierarchical composition of human behaviour, while going top-down (from concepts to sensorimotor representations) one obtains intentionally-laden interpretations of those structures.
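The two traversal directions can be sketched as follows. This is a minimal illustration in Python, not the actual PRAXICON schema: the node structure, labels, and toy hierarchy are all hypothetical assumptions introduced only to show bottom-up composition and top-down instantiation over a shared concept/sensorimotor graph.

```python
class Node:
    """A PRAXICON-like entry: a sensorimotor primitive or a concept (hypothetical)."""
    def __init__(self, label, kind):
        self.label = label    # e.g. "grasp", "make-coffee"
        self.kind = kind      # "sensorimotor" or "concept"
        self.parents = []     # more abstract interpretations
        self.children = []    # more concrete instantiations

    def link(self, child):
        self.children.append(child)
        child.parents.append(self)

def bottom_up(node):
    """Collect increasingly abstract concepts reachable from a sensorimotor node."""
    found, stack = [], list(node.parents)
    while stack:
        n = stack.pop()
        found.append(n.label)
        stack.extend(n.parents)
    return found

def top_down(node):
    """Collect the sensorimotor instantiations of a concept."""
    leaves, stack = [], list(node.children)
    while stack:
        n = stack.pop()
        if n.kind == "sensorimotor":
            leaves.append(n.label)
        stack.extend(n.children)
    return leaves

# Toy hierarchy: make-coffee -> pour-water -> {reach, tilt}
make_coffee = Node("make-coffee", "concept")
pour_water = Node("pour-water", "concept")
reach = Node("reach", "sensorimotor")
tilt = Node("tilt", "sensorimotor")
make_coffee.link(pour_water)
pour_water.link(reach)
pour_water.link(tilt)

print(bottom_up(reach))       # ['pour-water', 'make-coffee']
print(sorted(top_down(make_coffee)))  # ['reach', 'tilt']
```

Bottom-up traversal yields the hierarchical composition (a primitive movement interpreted as part of ever more abstract behaviour), while top-down traversal grounds an abstract concept in its concrete sensorimotor instantiations.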

The development of this resource will rely on the scientific findings and the technological developments that will come out of the following activities:


  • The creation of a corpus of human movements, visual objects and facial expressions, and a corpus of enacted everyday human interaction scenarios, in which sensorimotor and symbolic representations are integrated for meaning formation. The corpus will be used for the development of the computational tools (mentioned below) and the PRAXICON.
  • The computational modelling of the language of action, i.e., a structural analysis of motoric representations into primitive units and combination rules for formulating more or less complex actions automatically. Motoric representations of human body movements and facial expressions will be associated with the linguistic entries of the PRAXICON.
  • The computational modelling of the language of vision, i.e. a structural analysis of visual representations into primitive units and combination rules for formulating more or less complex visual objects and scenes. Visual representations of objects (e.g. humans, artefacts, scenes) will be associated with the corresponding linguistic entries in the PRAXICON.
  • The neurophysiological study of action and vision through experiments that will provide evidence for/against the existence of a “grammar” of motoric representations, a “grammar” of visual representations and their interrelations. These findings will inform the computational modelling of sensorimotor representations for the PRAXICON.
  • Cognitive experiments for principled association of sensorimotor representations with symbolic ones in the PRAXICON; the experiments will reveal:
    1. The association level at which concepts, motoric and visual representations are integrated in the human mind, i.e., the level at which the “parse trees” (hierarchical structures) of the motoric, visual and linguistic representations meet in forming a PRAXICON entry (concept)
    2. The patterns of association among such representations that formulate concepts at different levels of abstraction; the related experiments will reveal which and how PRAXICON entries (each of which will be represented through both symbolic and sensorimotor representations) collaborate in forming concepts that abstract away from the specifics of sensorimotor information to denote everyday human interaction scenarios.
  • Experimentation on the usefulness and extensibility of the PRAXICON, i.e., an attempt to explore the extent to which the PRAXICON could be used in audiovisual data processing for associating visual action and visual object representations with natural language, and how the resource could be expanded automatically.
  • Experimentation with an existing humanoid platform for exploring the nature/characteristics of the representations a humanoid needs for everyday interaction. It is expected that the lowest-level details of everyday actions (that depend on the precise morphology of the robot) will require robot-specific representations, while the coarsest-level structure should be shared with representations recovered from human action (such as the overall sequencing of parts of the action). By learning where the boundary lies between the two, and how to manage their interface, we will generate a reusable procedure for applying the work of this project to robotic applications.
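The "grammar" idea running through the activities above (primitive units plus combination rules that formulate more or less complex actions) can be sketched as a tiny rewriting system. The primitive names and rules below are illustrative assumptions, not project results; they show only how combination rules compose primitives into complex actions whose expansion forms a parse-tree-like hierarchy.

```python
# Hypothetical motor primitives (assumed, for illustration only).
PRIMITIVES = {"reach", "grasp", "lift", "tilt", "release"}

# Combination rules: a complex action rewrites to a sequence of
# primitives and/or other actions (a tiny context-free grammar).
RULES = {
    "pick-up":  ["reach", "grasp", "lift"],
    "pour":     ["pick-up", "tilt"],
    "put-down": ["lift", "release"],
}

def expand(action):
    """Expand a complex action into its flat sequence of motor primitives."""
    if action in PRIMITIVES:
        return [action]
    seq = []
    for part in RULES[action]:
        seq.extend(expand(part))   # recursion mirrors the action's parse tree
    return seq

print(expand("pour"))  # ['reach', 'grasp', 'lift', 'tilt']
```

The intermediate nonterminals ("pick-up", "pour") correspond to the levels of abstraction at which, per the cognitive experiments above, motoric parse trees could meet linguistic entries in a PRAXICON concept.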



European Commission • Seventh Framework Programme