Nvidia unveils AI tool to aid autonomous driving
- December 8, 2025
- Steve Rogerson

At last week’s NeurIPS conference in California, Nvidia released an AI tool for autonomous driving.
The Drive Alpamayo-R1 is said to be the world’s first open, industry-scale reasoning vision language action (VLA) model for mobility. Nvidia unveiled it at the conference alongside other open physical AI models and tools to support research.
Alpamayo-R1 (AR1) is an open reasoning VLA model for autonomous vehicle (AV) research. It integrates chain-of-thought AI reasoning with path planning, a component critical for advancing AV safety in complex road scenarios and enabling level-four autonomy.
While previous iterations of self-driving models struggled with nuanced situations – a pedestrian-heavy intersection, an upcoming lane closure or a double-parked vehicle in a bike lane – reasoning gives autonomous vehicles the common sense to drive more like humans.
AR1 accomplishes this by breaking down a scenario and reasoning through each step. It considers all possible trajectories, then uses contextual data to choose the best route.
For example, by tapping into the chain-of-thought reasoning enabled by AR1, an AV driving in a pedestrian-heavy area next to a bike lane could take in data from its path and incorporate reasoning traces – explanations of why it took certain actions. It could then use that information to plan its future trajectory, such as moving away from the bike lane or stopping for potential jaywalkers.
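The break-down-and-score loop described above can be sketched in highly simplified form. This is purely illustrative – the function and field names below are hypothetical and bear no relation to AR1’s actual architecture or API – but it shows the basic pattern: enumerate candidate trajectories, score each against contextual cues, keep the best, and record a human-readable reasoning trace.

```python
# Toy reason-then-plan loop. Illustrative only -- not AR1's real design.

def risk(trajectory, context):
    """Penalise a candidate trajectory for each risk cue it conflicts with."""
    penalty = 0.0
    if trajectory.get("bike_lane") and context.get("cyclists_present"):
        penalty += 1.0
    if trajectory.get("speed", 0) > 20 and context.get("pedestrians_heavy"):
        penalty += 2.0
    return penalty

def plan(candidates, context):
    """Pick the lowest-risk trajectory and keep a reasoning trace."""
    best = min(candidates, key=lambda t: risk(t, context))
    trace = f"chose '{best['name']}' (risk {risk(best, context):.1f})"
    return best, trace

context = {"pedestrians_heavy": True, "cyclists_present": True}
candidates = [
    {"name": "hold_lane", "speed": 25, "bike_lane": True},
    {"name": "shift_left_slow", "speed": 15},
]
best, trace = plan(candidates, context)
print(best["name"], "|", trace)
# → shift_left_slow | chose 'shift_left_slow' (risk 0.0)
```

The trace string stands in for AR1’s reasoning traces: a record of why an action was taken that can feed back into later planning.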
AR1’s open foundation, based on Nvidia Cosmos Reason (huggingface.co/nvidia/Cosmos-Reason1-7B), lets researchers customise the model for their own non-commercial use cases, whether for benchmarking or building experimental AV applications.
For post-training AR1, reinforcement learning proved especially effective; researchers observed a significant improvement in reasoning capability in the post-trained model compared with the pretrained one.
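The article does not detail Nvidia’s post-training recipe, but the core idea of reinforcement-learning fine-tuning can be shown with a toy example (hypothetical names, pure Python, nothing AR1-specific): actions that earn reward have their sampling probabilities reinforced over repeated updates.

```python
# Toy RL post-training sketch: a tiny softmax "policy" over two
# manoeuvres is nudged toward the rewarded action via REINFORCE-style
# updates. Illustrative only -- not Nvidia's training setup.
import math
import random

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    s = sum(exps)
    return [e / s for e in exps]

def reward(action):
    # Pretend the environment (or a judge) rewards the safe manoeuvre.
    return 1.0 if action == "yield_to_pedestrian" else -1.0

actions = ["yield_to_pedestrian", "hold_speed"]
logits = [0.0, 0.0]   # start undecided between the two actions
lr = 0.5
random.seed(0)

for _ in range(200):
    probs = softmax(logits)
    i = random.choices(range(2), weights=probs)[0]
    r = reward(actions[i])
    # Policy-gradient step for a categorical policy:
    # d log pi(i) / d logit_j = 1{j == i} - p_j
    for j in range(2):
        grad = (1.0 if j == i else 0.0) - probs[j]
        logits[j] += lr * r * grad

probs = softmax(logits)
print(f"P(yield) = {probs[0]:.2f}")
```

After a couple of hundred updates the policy concentrates almost all probability on the rewarded action, which is the same mechanism – at vastly larger scale, with learned reward signals – behind sharpening a pretrained model’s reasoning.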
Alpamayo-R1 (research.nvidia.com/publication/2025-10_alpamayo-r1) is now available on GitHub and Hugging Face, and a subset of the data used to train and evaluate the model is available in the Nvidia Physical AI Open Datasets. The company has also released the open-source AlpaSim framework (github.com/NVlabs/alpasim) to evaluate AR1.
Developers can learn how to use and post-train Cosmos-based models using step-by-step recipes, quick-start inference examples and post-training workflows available in the Cosmos Cookbook. It’s a comprehensive guide for physical AI developers that covers every step in AI development, including data curation, synthetic data generation and model evaluation.
There are virtually limitless possibilities for Cosmos-based applications. The latest examples from Nvidia include LidarGen (github.com/nv-tlabs/Cosmos-Drive-Dreams/tree/main/cosmos-transfer-lidargen), a model that can generate lidar data for AV simulation, and Omniverse NuRec Fixer, a model for AV and robotics simulation that taps into Cosmos Predict to address artefacts in neurally reconstructed data, such as blurs and holes from novel views or noisy data.