NVIDIA Releases a Vision-Language-Action AI Model for Autonomous Driving

【#Tech24H】NVIDIA announced on Monday the launch of Alpamayo-R1, an open-source, reasoning-based vision-language AI model for autonomous driving research, aimed at building the core technological foundation for "Physical AI", including robots and autonomous vehicles capable of perceiving and interacting with the real world. It is the industry's first vision-language-action model focused specifically on the autonomous driving domain. Vision-language models process text and image information simultaneously, enabling a vehicle to "see" its surroundings and make decisions based on what it perceives.
Editor: Zhang Liyan