Language Acquisition and Robotics Group

at the University of Illinois at Urbana-Champaign



iCub Videos

All of our latest videos are available on our lab's YouTube account. Here is a small sampling that highlights some of our lab's activities.

Learning to solve LEGO maze

Learned Reach

Complex Action Learning Through Language Grounding

Reaching and Grasping Demo

Robot Balancing Task

Old Videos

Our older videos focus on demonstrations implemented on our previous generation of robots.

Semantics-Based Learning of Syntax:

In these videos, Illy shows its knowledge of two-word sentences. The words that Illy knows are "kitten", "puppy", "can", "stay", "move", and "gone", which it learned using its associative memory (see the "Associative Learning" video below and Kevin Squire's Ph.D. thesis). The meanings of these words should be obvious, except for "gone", which to Illy means that the object has moved far away.
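As a rough illustration of this kind of associative word learning, the sketch below accumulates co-occurrence strength between heard words and seen objects, with repetition strengthening the link. The class, the percept labels, and the update rule are hypothetical placeholders, not the model from the thesis.

# A minimal sketch of cross-situational associative word learning,
# assuming the robot sees one object while hearing one word per episode.
# Names and the update rule are illustrative, not the thesis model.
from collections import defaultdict

class AssociativeMemory:
    def __init__(self):
        # assoc[word][percept] accumulates co-occurrence strength
        self.assoc = defaultdict(lambda: defaultdict(float))

    def observe(self, word, percept, strength=1.0):
        """Strengthen the link between a heard word and a seen percept."""
        self.assoc[word][percept] += strength

    def meaning(self, word):
        """Return the percept most strongly associated with a word."""
        percepts = self.assoc[word]
        return max(percepts, key=percepts.get) if percepts else None

memory = AssociativeMemory()
for _ in range(3):                      # repetition strengthens the link
    memory.observe("kitten", "cat_toy")
memory.observe("gone", "object_far_away")
print(memory.meaning("kitten"))         # -> cat_toy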

Illy learned about syntax by hearing example two-word sentences (from an experimenter) that describe events in its environment. Illy then used its knowledge of the words to deduce the syntactic information. It is important to note that in these videos, Illy produces two-word sentences that it has never heard before: while it was trained on the single word "puppy", it did not hear the word "puppy" during the syntax training.
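The sketch below illustrates, under strong simplifying assumptions, how such generalization can work: each word carries a semantic category learned during word learning, a single word-order template is induced from the example sentences, and that template licenses combinations never heard in training. The category labels and training sentences are illustrative, not the lab's actual data or model.

# A hedged sketch of syntax generalization from two-word examples.
# Word categories are assumed known from earlier word learning.
CATEGORY = {"kitten": "object", "puppy": "object", "can": "object",
            "stay": "event", "move": "event", "gone": "event"}

def induce_order(training_sentences):
    """Infer the category-order template from observed two-word sentences."""
    orders = {tuple(CATEGORY[w] for w in s.split()) for s in training_sentences}
    assert len(orders) == 1, "training sentences disagree on word order"
    return orders.pop()

def produce(word_a, word_b, template):
    """Order two words of different categories according to the template."""
    words = {CATEGORY[word_a]: word_a, CATEGORY[word_b]: word_b}
    return " ".join(words[cat] for cat in template)

# "puppy" never appears in the syntax training sentences...
template = induce_order(["kitten gone", "can move", "kitten stay"])
# ...yet the learned template licenses a novel sentence:
print(produce("gone", "puppy", template))   # -> "puppy gone"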

In the "Syntax Demo" video, Illy searches for objects to play with. Illy also demonstrates its sound source localization ability by turning toward the experimenter when called. Once the experimenter gets Illy's attention in this manner, Illy is directed to look for a specific object when it hears the object's name. In the "Syntax Test" video, the experimenter places objects in front of Illy for it to play with.

Vision-Based Localization and Map Learning:

In this experiment, our robot Illy demonstrates its ability to acquire a mental map of its environment and use that map for localization. During the learning phase, Illy is placed in an unknown environment and memorizes its navigational experience while exploring. This experience, recorded as sequences of images collected from Illy's camera, is then consolidated into a map of the environment using our proposed Learning Nonlinear Manifolds from Time Series algorithm (Poster Presentation, 314). Once Illy has acquired the mental map, it can accurately locate itself in the environment.
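To give a feel for the general approach, here is a rough, Isomap-style sketch of embedding an image time series, in which temporal adjacency between consecutive frames augments appearance-based neighborhoods before geodesic distances are computed. This is only a hedged illustration of manifold learning from time series, not the lab's algorithm.

# Rough sketch: embed an image sequence into a low-dimensional "map"
# by combining appearance neighbors with temporal adjacency.
import numpy as np
from scipy.sparse.csgraph import shortest_path
from sklearn.neighbors import kneighbors_graph

def embed_time_series(images, k=8, dim=2):
    X = images.reshape(len(images), -1).astype(float)   # one row per frame
    # appearance-based k-nearest-neighbor graph...
    G = kneighbors_graph(X, k, mode="distance").toarray()
    # ...plus edges between temporally consecutive frames
    for t in range(len(X) - 1):
        d = np.linalg.norm(X[t] - X[t + 1])
        G[t, t + 1] = G[t + 1, t] = d
    G = np.maximum(G, G.T)                               # symmetrize
    D = shortest_path(G, directed=False)                 # geodesic distances
    # classical MDS on the geodesic distance matrix
    n = len(D)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:dim]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0))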

The video shows an episode of the localization experiment. The image on the left is Illy's visual perception, and the figure on the right shows Illy's mental map. There are nine probabilistic maps arranged in a 3x3 grid. The eight maps that form the border of the grid show the conditional probability distribution of Illy's x-y position given a particular (discretized) direction that it is facing. The map in the center is the final probability distribution of Illy's x-y position with the direction parameter marginalized out. As can be seen from the video, Illy can accurately infer its position and orientation from the visual input.
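The center map follows from the border maps by simple marginalization: P(x, y) = sum over theta of P(x, y | theta) P(theta). A small numeric sketch, with placeholder grid sizes and a random belief standing in for the real one:

# Sketch of the 3x3 display: eight heading-conditioned position maps
# plus a center map with the heading marginalized out.
import numpy as np

H, W, DIRS = 20, 20, 8                  # grid size and 8 discrete headings
rng = np.random.default_rng(0)

# joint belief P(x, y, theta), normalized over everything
belief = rng.random((DIRS, H, W))
belief /= belief.sum()

# the eight border maps: P(x, y | theta) for each heading
p_theta = belief.sum(axis=(1, 2))                   # P(theta)
conditional_maps = belief / p_theta[:, None, None]  # P(x, y | theta)

# the center map: P(x, y) = sum_theta P(x, y | theta) * P(theta)
center_map = (conditional_maps * p_theta[:, None, None]).sum(axis=0)
assert np.isclose(center_map.sum(), 1.0)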

Associative Learning:

In this demonstration, our robot Illy wanders around her pen. For some time now, we've been teaching her the names of some objects, and in this video, I call to her and tell her to play with the cat. After she approaches the cat, I repeat the word for the cat a few times to strengthen the association between that word and the object she sees in front of her. After this, she plays with the cat briefly, then looks around for her other toys.
