From Clumsy Metal Arms to Slam-Dunking Robots

Robots just learned how to tie their shoes and dunk basketballs. If you think they won’t take over household chores (or entire industries), think again.
Google DeepMind’s Gemini Robotics is redefining how robots interact with the world by integrating large language models (LLMs) into robotics. Unlike past systems that needed extensive training for each task, this model lets robots understand natural language, adapt to new situations, and generalize across tasks.
Demonstrations showed robots folding eyeglasses, sorting fruit, and even slam-dunking a toy basketball, all with minimal task-specific programming. DeepMind trained these systems on a mix of simulated and real-world data, helping bridge the “sim-to-real gap” that has long hindered robotic learning.
- Robots can now interpret and execute natural-language commands via LLMs.
- Gemini Robotics improves real-world adaptability using AI-trained reasoning.
- A new AI safety benchmark helps evaluate and promote responsible robotic behavior.
Is this the dawn of truly useful AI-powered robots, or are we still years away from real-world deployment?
Read the full article on MIT Technology Review.
----
💡 We're entering a world where intelligence is synthetic, reality is augmented, and the rules are being rewritten in front of our eyes.
Staying up to date in a fast-changing world is vital. That is why I launched Futurwise: a personalized AI platform that transforms information chaos into strategic clarity. With one click, users can bookmark and summarize any article, report, or video in seconds, tailored to their tone, interests, and language. Visit Futurwise.com to get started for free!
