Toyota Research Institute (TRI) recently unveiled how it is using Generative AI to help robots learn new dexterous behaviors from demonstration. TRI said this new approach “is a step towards building ‘Large Behavior Models (LBMs)’ for robots, analogous to the Large Language Models (LLMs) that have recently revolutionized conversational AI.”
TRI said it has already taught robots more than 60 difficult, dexterous skills using the new approach, including pouring liquids, using tools and manipulating deformable objects. All of these were realized, according to TRI, without writing a single line of new code; the only change was supplying the robot with new data.
“The tasks that I’m watching these robots perform are simply amazing – even one year ago, I would not have predicted that we were close to this level of diverse dexterity,” said Russ Tedrake, vice president of robotics research at TRI and the Toyota professor of electrical engineering and computer science, aeronautics and astronautics, and mechanical engineering at MIT. “What is so exciting about this new approach is the rate and reliability with which we can add new skills. Because these skills work directly from camera images and tactile sensing, using only learned representations, they are able to perform well even on tasks that involve deformable objects, cloth, and liquids — all of which have traditionally been extremely difficult for robots.”
TRI’s robot behavior model learns from haptic demonstrations from a teacher, combined with a language description of the goal. It then uses an AI-based diffusion policy to learn the demonstrated skill. This process allows a new behavior to be deployed autonomously from dozens of demonstrations.
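As a rough illustration of how such a policy is trained (this is not TRI's code; the noise schedule, tensor shapes, and placeholder network below are all assumptions), a diffusion policy learns with a denoising objective: noise a demonstrated action chunk, then train a network to predict that noise given the observation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noise schedule (assumed, not TRI's actual hyperparameters)
T = 100
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def noise_actions(a0, t, eps):
    """Forward diffusion: a_t = sqrt(abar_t) * a_0 + sqrt(1 - abar_t) * eps."""
    return np.sqrt(alphas_bar[t]) * a0 + np.sqrt(1.0 - alphas_bar[t]) * eps

# One hypothetical training example: a demonstrated 16-step action chunk for a
# 7-DoF arm, plus an encoded observation from cameras and tactile sensors.
a0 = rng.normal(size=(16, 7))
obs = rng.normal(size=(32,))

t = int(rng.integers(T))
eps = rng.normal(size=a0.shape)
a_t = noise_actions(a0, t, eps)

def eps_theta(a_t, t, obs):
    """Stand-in for the learned noise-prediction network."""
    return np.zeros_like(a_t)

# Training would minimize the MSE between the true and predicted noise.
loss = np.mean((eps - eps_theta(a_t, t, obs)) ** 2)
```

At deployment, the trained network is run in reverse, starting from pure noise and denoising it into an action sequence for the current observation.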
TRI’s approach to robot learning is agnostic to the choice of teleoperation device, and the company said it has used a variety of low-cost interfaces, such as joysticks. For more dexterous behaviors, it taught via bimanual haptic devices with position-position coupling between the teleoperation device and the robot. Position-position coupling means the input device sends its measured pose as commands to the robot, and the robot tracks those pose commands using torque-based Operational Space Control. The robot’s pose-tracking error is then converted to a force and sent back to the input device for the teacher to feel. This lets teachers close the feedback loop with the robot through force, which TRI said has been critical for many of the most difficult skills it has taught.
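A minimal sketch of the force-reflection step described above. The stiffness value and function names are illustrative assumptions, not TRI's implementation:

```python
import numpy as np

K_FEEDBACK = 200.0  # N/m, illustrative stiffness mapping tracking error to force

def feedback_force(commanded_pos, measured_pos, k=K_FEEDBACK):
    """Force reflected to the input device, proportional to the robot's
    pose-tracking error; it pushes the device back toward the robot's
    actual pose so the teacher feels the robot resisting."""
    error = np.asarray(commanded_pos) - np.asarray(measured_pos)
    return -k * error

# If the robot lags 1 cm behind the command along x, the teacher feels ~2 N
f = feedback_force([0.50, 0.0, 0.30], [0.49, 0.0, 0.30])  # -> [-2., 0., 0.]
```

In a real bilateral teleoperation loop this runs at high rate on both arms; the sketch only shows the error-to-force mapping for a single position sample.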
When the robot holds a tool with both arms, it creates a closed kinematic chain. For any given configuration of the robot and tool, there is a large range of possible internal forces that are unobservable visually. Certain force configurations, such as pulling the grippers apart, are inherently unstable and make it likely the robot’s grasp will slip. If human demonstrators do not have access to haptic feedback, they won’t be able to sense or teach proper control of force.
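A toy numerical illustration of why these internal forces are invisible to cameras: two grippers can exert equal and opposite forces on a shared tool, producing zero net force (so nothing moves on camera) while still loading the grasp. The numbers are purely illustrative.

```python
import numpy as np

# Two grippers holding a rigid tool along the x-axis, each pulling outward.
f_left = np.array([-5.0, 0.0, 0.0])   # left gripper pulling in -x
f_right = np.array([5.0, 0.0, 0.0])   # right gripper pulling in +x

net = f_left + f_right          # zero: the tool does not accelerate
internal = (f_right - f_left) / 2.0  # 5 N of tension, invisible in images
```

The tension state here is exactly the "pulling the grippers apart" case the article describes as unstable; only force sensing (or haptic feedback to the teacher) can reveal it.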
So TRI employs its Soft-Bubble sensors on many of its platforms. These sensors consist of an internal camera observing an inflated deformable outer membrane. They go beyond measuring sparse force signals and allow the robot to perceive spatially dense information about contact patterns, geometry, slip, and force.
Making good use of the information from these sensors has historically been a challenge. But TRI said diffusion provides a natural way for robots to use the full richness these visuotactile sensors afford and apply it to arbitrary dexterous tasks.
In one test, a human teacher attempted 10 egg-beating demonstrations. With haptic force feedback, the operator succeeded every time. Without this feedback, they failed every time.
Instead of image generation conditioned on natural language, TRI uses diffusion to generate robot actions conditioned on sensor observations and, optionally, natural language. TRI said using diffusion to generate robot behavior provides three benefits over previous approaches:
1. Applicability to multi-modal demonstrations. This means human demonstrators can teach behaviors naturally, without worrying about confusing the robot.
2. Suitability for high-dimensional action spaces. This means the robot can plan forward in time, which helps it avoid myopic, inconsistent, or erratic behavior.
3. Stable and reliable training. This means robots can be trained at scale with confidence they will work, without laborious hand-tuning or hunting for golden checkpoints.
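To make the generation step concrete, here is a hedged, DDPM-style sketch of how a diffusion policy could turn noise into an action trajectory conditioned on an observation. The schedule, horizon, and the toy `eps_theta` predictor are assumptions standing in for TRI's trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative noise schedule (assumed hyperparameters)
T = 50
betas = np.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alphas_bar = np.cumprod(alphas)

def eps_theta(a_t, t, obs):
    """Stand-in for the learned noise-prediction network, which in the real
    system would be conditioned on camera images and tactile sensing."""
    return 0.1 * a_t  # toy predictor for illustration only

def sample_action_chunk(obs, horizon=16, action_dim=7):
    """Reverse diffusion: start from Gaussian noise and iteratively denoise
    it into a short trajectory of robot actions."""
    a = rng.normal(size=(horizon, action_dim))
    for t in reversed(range(T)):
        eps = eps_theta(a, t, obs)
        mean = (a - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(alphas[t])
        noise = rng.normal(size=a.shape) if t > 0 else 0.0
        a = mean + np.sqrt(betas[t]) * noise
    return a

actions = sample_action_chunk(obs=None)  # shape (16, 7): a multi-step plan
```

Note that the output is a whole action chunk rather than a single timestep, which is what lets the policy commit to consistent multi-step behavior.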
According to TRI, diffusion is well suited to high-dimensional output spaces. Generating images, for example, requires predicting hundreds of thousands of individual pixels. For robotics, this is a key advantage that allows diffusion-based behavior models to scale to complex robots with multiple limbs. It also gave TRI the ability to predict intended trajectories of actions instead of single timesteps.
TRI said this Diffusion Policy is “embarrassingly simple” to train; new behaviors can be taught without numerous costly and laborious real-world evaluations to hunt for the best-performing checkpoints and hyperparameters. Unlike computer vision or natural language applications, AI-based closed-loop systems cannot be accurately evaluated with offline metrics — they must be evaluated in a closed-loop setting, which in robotics generally requires evaluation on physical hardware.
This means any learning pipeline that requires extensive tuning or hyperparameter optimization becomes impractical due to this bottleneck in real-life evaluation. Because Diffusion Policy works out of the box so consistently, it allowed TRI to bypass this difficulty.
TRI admitted that “when we teach a robot a new skill, it is brittle.” Skills work well in circumstances similar to those used in teaching, but the robot will struggle when they differ. TRI said the most common causes of failure it observes are:
- States where no recovery has been demonstrated. This can be the result of demonstrations that are too clean.
- Significant changes in camera viewpoint or background.
- Test-time manipulands (the objects being manipulated) that were not encountered during training.
- Distractor objects, for example, significant clutter that was not present during training.
Part of TRI’s technology stack is Drake, a toolbox and simulation platform for model-based robotics design. Drake’s degree of realism allows TRI to develop both in simulation and in reality, and it could help overcome these shortcomings going forward.
TRI’s robots have learned 60 dexterous skills already, with a target of hundreds by the end of 2023 and 1,000 by the end of 2024.
“Existing Large Language Models possess the powerful ability to compose concepts in novel ways and learn from single examples,” TRI said. “In the past year, we’ve seen this enable robots to generalize semantically (for example, pick and place with novel objects). The next big milestone is the creation of equivalently powerful Large Behavior Models that fuse this semantic capability with a high level of physical intelligence and creativity. These models will be critical for general-purpose robots that are able to richly engage with the world around them and spontaneously create new dexterous behaviors when needed.”