Google DeepMind to Build Intelligent Helper Robots

K. C. Sabreena Basheer 05 Jan, 2024 • 3 min read

Google DeepMind’s robotics team is making significant strides in the field of advanced robotics with the introduction of three groundbreaking AI systems—AutoRT, SARA-RT, and RT-Trajectory. These systems leverage large language models to enhance the development of versatile robots for everyday use. In this article, we delve into the capabilities of each system and their potential impact on the future of robotics.

Also Read: Google and Stanford Develop AI Housemaid

AutoRT: Scaling Robotic Learning for Real-World Applications

AutoRT is an AI training system that harnesses large foundation models to create robots with a deeper understanding of practical human goals. By collecting diverse experiential training data, it aims to scale robotic learning and prepare robots for real-world scenarios. The system pairs a Visual Language Model (VLM) and a Large Language Model (LLM) with a robot control model (RT-1 or RT-2), orchestrating up to 20 robots simultaneously across varied environments. In extensive real-world evaluations, AutoRT safely conducted 77,000 robotic trials across 6,650 unique tasks, demonstrating its potential for large-scale data collection.

Image: Google DeepMind AutoRT
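To make the pipeline concrete, here is a minimal, hypothetical sketch of an AutoRT-style data-collection loop in Python. The class names, safety check, and fleet size are illustrative stand-ins rather than DeepMind's actual APIs: a VLM-like component describes the scene, an LLM-like component proposes tasks, unsafe tasks are filtered out, and a control policy attempts the rest while every result is logged as training data.

```python
# Hypothetical sketch of an AutoRT-style data-collection cycle.
# None of these classes are DeepMind's real interfaces.
from dataclasses import dataclass
import random


@dataclass
class Task:
    description: str
    safe: bool


class SceneDescriber:            # stand-in for a Visual Language Model (VLM)
    def describe(self, image) -> str:
        return "a table with a cup, a sponge, and an apple"


class TaskProposer:              # stand-in for a Large Language Model (LLM)
    def propose(self, scene: str) -> list[Task]:
        return [
            Task("pick up the sponge and wipe the table", safe=True),
            Task("move the cup next to the apple", safe=True),
            Task("hand a knife to the person", safe=False),  # should be filtered
        ]


class RobotPolicy:               # stand-in for a control model such as RT-1/RT-2
    def execute(self, task: Task) -> bool:
        return random.random() > 0.3     # pretend some attempts fail


def collect_episode(robot_id: int, camera_image=None) -> list[dict]:
    """One cycle: describe the scene, propose tasks, filter unsafe ones,
    execute the rest, and log everything as labelled experience."""
    vlm, llm, policy = SceneDescriber(), TaskProposer(), RobotPolicy()
    scene = vlm.describe(camera_image)
    episodes = []
    for task in llm.propose(scene):
        if not task.safe:                # safety screening step
            continue
        success = policy.execute(task)
        episodes.append({"robot": robot_id, "scene": scene,
                         "task": task.description, "success": success})
    return episodes


if __name__ == "__main__":
    # Orchestrate a small fleet; AutoRT reportedly ran up to 20 robots at once.
    data = [ep for rid in range(5) for ep in collect_episode(rid)]
    print(f"collected {len(data)} labelled episodes")
```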

SARA-RT: Making Robotics Transformers Leaner and Faster

The Self-Adaptive Robust Attention for Robotics Transformers (SARA-RT) system improves the efficiency of Robotics Transformer models. Using a novel “up-training” fine-tuning method, the best SARA-RT-2 models were 10.6% more accurate and 14% faster at decision-making than the RT-2 models they were built from. DeepMind describes this as the first scalable attention mechanism to deliver computational gains without sacrificing quality. Because SARA-RT can be applied to various Transformer models, including Point Cloud Transformers, it has broad applicability across the robotics industry.

Image: Google DeepMind SARA-RT
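DeepMind describes up-training as swapping a pre-trained model's quadratic softmax attention for a faster linear-attention variant and then fine-tuning briefly to recover quality. The sketch below contrasts the two in PyTorch; the elu+1 feature map and the tensor shapes are common choices from the linear-attention literature, assumed here for illustration rather than taken from SARA-RT itself.

```python
# Sketch of the softmax-to-linear attention swap that up-training builds on.
# Details (feature map, scaling) are generic assumptions, not SARA-RT internals.
import torch
import torch.nn.functional as F


def softmax_attention(q, k, v):
    """Standard attention: cost grows quadratically with sequence length."""
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return F.softmax(scores, dim=-1) @ v


def linear_attention(q, k, v, eps=1e-6):
    """Linear attention: cost grows linearly with sequence length; the
    fidelity gap is what up-training-style fine-tuning then closes."""
    q, k = F.elu(q) + 1, F.elu(k) + 1            # positive feature map
    kv = k.transpose(-2, -1) @ v                 # (dim x dim) summary, built once
    norm = q @ k.sum(dim=-2, keepdim=True).transpose(-2, -1) + eps
    return (q @ kv) / norm


if __name__ == "__main__":
    q = torch.randn(2, 128, 64)   # (batch, tokens, dim)
    k = torch.randn(2, 128, 64)
    v = torch.randn(2, 128, 64)
    print(softmax_attention(q, k, v).shape, linear_attention(q, k, v).shape)
```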

RT-Trajectory: Enhancing Robot Motion Generalization

RT-Trajectory adds visual contours describing robot motions to training videos, helping robots generalize their skills to new tasks. By overlaying 2D trajectory sketches on training videos, it gives the model low-level visual cues as it learns robot control policies. In tests on 41 unseen tasks, an arm controlled by RT-Trajectory achieved a 63% task success rate, double that of existing RT models. The system can also generate trajectories from human demonstrations or hand-drawn sketches, making it adaptable to different robot platforms.

Image: Google DeepMind RT-Trajectory
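The core trick is simple to picture: draw the intended 2D path of the gripper directly onto the camera frame the policy trains on. Below is a small Pillow-based sketch of that overlay step; the frame, waypoints, and colour scheme are invented for illustration and are not DeepMind's actual preprocessing code.

```python
# Illustrative overlay of a 2D end-effector trajectory onto a training frame,
# in the spirit of RT-Trajectory. All inputs here are made up.
from PIL import Image, ImageDraw


def _dot(p, r=6):
    """Bounding box for a small circular marker around point p."""
    return (p[0] - r, p[1] - r, p[0] + r, p[1] + r)


def overlay_trajectory(frame: Image.Image, waypoints: list[tuple[int, int]],
                       colour=(255, 0, 0), width=4) -> Image.Image:
    """Return a copy of `frame` with the gripper path drawn on top."""
    annotated = frame.copy()
    draw = ImageDraw.Draw(annotated)
    draw.line(waypoints, fill=colour, width=width)        # the motion sketch
    draw.ellipse(_dot(waypoints[0]), fill=(0, 255, 0))    # start marker
    draw.ellipse(_dot(waypoints[-1]), fill=(0, 0, 255))   # end marker
    return annotated


if __name__ == "__main__":
    frame = Image.new("RGB", (320, 240), (200, 200, 200))  # stand-in camera frame
    path = [(40, 200), (120, 150), (200, 120), (280, 60)]  # demo- or hand-drawn path
    overlay_trajectory(frame, path).save("frame_with_trajectory.png")
```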

Shaping the Future of Advanced Robotics

Google DeepMind’s advancements in AutoRT, SARA-RT, and RT-Trajectory mark a cohesive effort toward creating more capable and versatile robots. These systems, when integrated, promise a future where robots seamlessly navigate complex environments, make faster decisions, and adapt skills to novel situations. While still in the research prototype stage, these innovations highlight DeepMind’s progress in overcoming challenges in robotics. Through them, Google is paving the way for robots to integrate seamlessly into our daily lives.

Also Read: DeepMind RoboCat: A Self-Learning Robotic AI Model

Our Say

As we witness the unveiling of Google DeepMind’s latest robotics advancements, it’s clear that we are on the brink of a transformative era in robotics. The integration of large-scale data collection, efficiency improvements, and motion generalization opens doors to a myriad of possibilities for intelligent helper robots. These innovations not only enhance current robotic capabilities but also lay the groundwork for future breakthroughs in the field. The future envisioned by Google DeepMind is one where AI-powered robots become indispensable companions, capable of understanding and executing complex tasks with precision and adaptability.
