POETICON++ has a concrete scientific and technological aim:
The development of a computational mechanism for generalisation and creative generation of new behaviours and perceptions in artificial agents through interaction in dynamic real-life environments.

The scientific and technological objectives of the project that will be monitored to achieve the general aim are:

  • To develop a cognitively plausible computational learning model for language-mediated behaviour generalisation and creativity in interactive cognitive systems.
  • To demonstrate, through humanoid robotic experiments, the feasibility and validity of this model in two dynamic, real-life scenarios: behaviour generation through verbal instruction, and visual scene understanding.
  • To advance our understanding of the neuroscientific, cognitive, and linguistic phenomena and mechanisms that support the generalisation and creation of new behaviour, focusing on the essential role of language and of hierarchical action-language representations.

POETICON++ will develop a number of basic technologies for cognitive artificial agents, enabling them to generalise their behaviours and cope with uncertainty and unexpected situations in real-life environments. These technologies will comprise:

  • A set of embodied, generative language processing tools that will bridge the gap between verbal communication and the sensorimotor space.
  • A set of generative visual object and action analysis tools that will engage in a cognitive dialogue with the language tools in the above-mentioned tasks.
  • A self-exploration model that will integrate motor skills, multisensory perception skills (visual and tactile), and verbal labelling of self-acquired sensorimotor experiences for artificial agents.
  • Improved grasping skills for a humanoid via learning and affordances.
  • A word-level articulation-based automatic speech recognition system.

The above technologies will be integrated and tested within two real-life scenarios/applications:

  1. Language-based human-robot interaction
  2. Visual scene understanding

European Commission • Seventh Framework Programme