“The missing piece is AI,” he says.
AI has also shown promise in getting robots to respond to verbal commands and in helping them adapt to the often messy environments of the real world. For example, Google’s RT-2 system combines a vision-language-action model with a robot, allowing it to “see” and analyze the world and to act on spoken instructions. And a new system from DeepMind called AutoRT uses a similar vision-language model to help robots adapt to unseen environments, along with a large language model to generate instructions for a fleet of robots.
And now for the bad news: even the most cutting-edge robots still cannot do laundry. It’s a chore that is significantly harder for robots than for humans. Crumpled clothes form strange shapes, which makes them hard for robots to perceive and handle.