IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2013
7 Nov 2013 Tokyo (Japan)

Keynotes

 

Confirmed Keynote Speakers

(In alphabetical order)
  • Prof. Angelo Cangelosi, University of Plymouth, UK (Homepage)

 

Title: Embodied Language Learning with the Humanoid Robot iCub

Abstract:

Growing theoretical and experimental research on action and language processing, and on number learning and space representation, clearly demonstrates the role of embodiment in cognition. These studies have important implications for the design of communication and linguistic capabilities in cognitive systems and robots, and have led to the new interdisciplinary approach of Cognitive Developmental Robotics. In the European FP7 project “ITALK” (www.italkproject.org) and the Marie Curie ITN “RobotDoC” (www.robotdoc.org) we follow this integrated view of action and language to develop cognitive capabilities in the humanoid robot iCub. During the talk we will present ongoing results from iCub experiments on embodiment biases in early word acquisition, word order cues for lexical development, and number and space interaction effects. The talk will also introduce the simulation software of the iCub robot, an open source tool for performing cognitive modeling experiments in simulation.

 

  • Prof. Mohamed Chetouani, Institute for Intelligent Systems and Robotics (ISIR), Paris, France (Homepage)

 

Title: Learning interpersonal synchrony

Abstract: ---

 

  • Prof. Sinan Kalkan, Middle East Technical University, Turkey (Homepage)


Title: Affordances and Word Categories

Abstract

Learning and conceptualizing word categories in language, such as verbs, nouns and adjectives, is very important for seamless communication with robots. In this talk, I will discuss how we can link the notion of affordance proposed by Gibson to (i) conceptualize verbs, nouns and adjectives, and (ii) demonstrate how a robot can use these concepts for several important tasks in robotics. For verbs, I will compare different conceptualization views proposed by psychologists over the years. Moreover, I will show that there is an important underlying distinction between adjectives and nouns, as supported by recent findings and theories in psychology, linguistics and neuroscience.

 

  • Prof. Kazuhiko Kawamura, Center for Intelligent Systems, Vanderbilt University, Nashville, USA
     (Homepage)
 
Title: Can we design a social robot that will understand others’ intention by observing their movements?

Abstract

We humans usually move our bodies driven by a prior intention. For example, on a dance floor, the body movements during the first few seconds would be enough to tell which dance (e.g. waltz vs. tango) the couple is going to perform. It has therefore been argued that motor information alone might be sufficient to recognize an actor's intention. Others are more skeptical: after all, how could we tell, by observing a person grasping an apple, whether the person is going to eat it or give it to you? In my presentation, I will argue that observing motion is not enough except in limited situations. If we want to design a social robot, we should look at how humans recognize intentions and design computational modules accordingly.

 

  • Prof. Ashutosh Saxena, Department of Computer Science, Cornell University, USA
     (Homepage)
 
Title: Object Affordances and Hidden Humans for Co-Robotic Assistance Tasks

Abstract: ---

 
