Xiaomi tested a humanoid robot on an electric vehicle assembly line, where it worked autonomously for three hours installing parts.
Explore how vision-language-action models like Helix, GR00T N1, and RT-1 are enabling robots to understand instructions and act autonomously.
Robots have long excelled at repetitive factory tasks, but they have struggled with the messy ambiguity of everyday human instructions. Microsoft’s new Rho-alpha model is an attempt to close that gap, ...
NVIDIA has unveiled the Isaac GR00T N1, the world’s first open, fully customizable foundation model for generalized humanoid reasoning and skills. This advanced system is designed to address the ...
Nvidia has released DreamDojo, an open-source world model trained on 44,000 hours of human video that lets robots learn ...
Figure founder and CEO Brett Adcock on Thursday revealed a new machine learning model for humanoid robots. The news arrives two weeks after Adcock announced the Bay Area robotics firm’s decision ...
While large language models (LLMs) have mastered text (and other modalities to some extent), they lack the physical "common sense" to operate in dynamic, real-world environments. This has limited the ...
The company’s vision language model, Cosmos Reason, is designed to help robots make better decisions by evaluating their surroundings. Nvidia has developed a generative AI (genAI) model to help robots ...
Physical AI, where robotics and foundation models come together, is a fast-growing space, with companies like Nvidia, Google and Meta releasing research and experimenting in melding large ...
Tesla plans to end production of its Model S and Model X vehicles in the spring. The company will convert the factory space to build its Optimus robot. CEO Elon Musk stated the move is part of a shift ...