Toyota Research Institute reveals major advance in Large Behavior Models for robotics

Toyota Research Institute (TRI) has unveiled promising results from a large-scale study on a new class of AI systems called Large Behavior Models (LBMs), which could significantly accelerate the development of general-purpose robots capable of adapting to real-world tasks.

The research, published today, shows that a single LBM can learn hundreds of manipulation tasks and apply that knowledge to new challenges using up to 80 percent less data than traditional task-specific approaches.

This marks a potential shift in how robots are trained, enabling them to generalize from diverse experiences rather than being hardcoded for specific tasks.

Unlike conventional robotics models, LBMs are trained on a vast and varied dataset that includes nearly 1,700 hours of robot interactions, both simulated and real.

The study evaluated performance across 29 tasks using more than 47,000 simulation rollouts and 1,800 real-world trials – an unusually high bar for empirical rigor in robotics.

LBMs work by translating data from cameras, sensors, and language inputs into sequences of robotic actions. TRI’s implementation uses a diffusion transformer architecture, which can process visual, proprioceptive, and textual data to make real-time decisions.

This allows the model to handle unseen objects and dynamic environments – a major hurdle in current robot deployment.
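To make the pipeline concrete, the sketch below shows the general shape of a diffusion-style action decoder: starting from random noise, an action sequence is iteratively refined, conditioned on a fused observation embedding. The network, dimensions, and schedule here are illustrative stand-ins, not TRI's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, HORIZON, STEPS = 32, 7, 16, 10

# Stand-in weights for the conditional denoiser. (Assumption: a real LBM
# replaces this linear map with a diffusion transformer trained on
# large-scale multimodal robot data.)
W_obs = rng.normal(0, 0.1, (OBS_DIM, ACT_DIM))
W_act = rng.normal(0, 0.1, (ACT_DIM, ACT_DIM))

def denoise_step(actions, obs, t):
    """One denoising step: estimate noise, subtract a fraction of it."""
    pred_noise = actions @ W_act + obs @ W_obs  # observation-conditioned estimate
    alpha = 1.0 / (STEPS - t + 1)               # simple step-size schedule
    return actions - alpha * pred_noise

def sample_actions(obs):
    """Refine Gaussian noise into a HORIZON-step action plan."""
    actions = rng.normal(size=(HORIZON, ACT_DIM))
    for t in range(STEPS):
        actions = denoise_step(actions, obs, t)
    return actions

# Fused embedding of camera, proprioceptive, and language inputs.
obs = rng.normal(size=OBS_DIM)
plan = sample_actions(obs)
print(plan.shape)  # prints (16, 7): a short trajectory of joint commands
```

The key design point the sketch illustrates is that the same conditional denoiser handles any observation embedding, so one model can emit action sequences for many different tasks.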

Gokul N A, TRI founder, says: “Our work with CyRo, our LBM prototype, is not just about picking up objects.

“It’s about building robots that can reason, adapt, and operate in unpredictable environments – the same way people do.”

The research also introduces a new statistical evaluation framework to quantify confidence in results across varying tasks and settings, including blind A/B testing in both simulation and real-world scenarios.
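As a rough illustration of this kind of framework (not TRI's actual methodology), blind A/B rollouts yield binary success outcomes per policy, which can be compared with binomial confidence intervals. The rollout counts below are invented for the example.

```python
import math

def wilson_interval(successes, trials, z=1.96):
    """95% Wilson score confidence interval for a binomial success rate."""
    p = successes / trials
    denom = 1 + z**2 / trials
    center = (p + z**2 / (2 * trials)) / denom
    half = z * math.sqrt(p * (1 - p) / trials + z**2 / (4 * trials**2)) / denom
    return center - half, center + half

# Illustrative outcome counts only (assumption): 50 blind rollouts each.
lbm_lo, lbm_hi = wilson_interval(42, 50)    # multitask LBM policy
base_lo, base_hi = wilson_interval(25, 50)  # single-task baseline

# Non-overlapping intervals suggest a genuine difference at roughly the
# 95% level, rather than noise from a handful of lucky rollouts.
print(f"LBM: [{lbm_lo:.2f}, {lbm_hi:.2f}]  baseline: [{base_lo:.2f}, {base_hi:.2f}]")
```

With only tens of trials per condition, raw success percentages can be misleading, which is why interval estimates of this sort matter for claims drawn from the study's 1,800 real-world trials.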

LBMs, as described in TRI’s work, represent an early but critical step toward what the institute calls “universal factories” – modular, flexible production systems powered by adaptive robots.

These systems could reshape how goods are manufactured, making small-scale, sustainable, and personalized production more viable.

While foundation models have already transformed AI for language and vision, TRI's study suggests that similar scaling principles can now be applied to robotics – paving the way for machines that learn like humans, rather than being programmed like tools.
