Nvidia chief reveals AI vision at CES opening
- January 7, 2026
- Steve Rogerson

AI is scaling into every domain and every device, according to Nvidia CEO Jensen Huang in his opening keynote at this week’s CES in Las Vegas.
“Computing has been fundamentally reshaped as a result of accelerated computing, as a result of artificial intelligence,” Huang said. “What that means is some $10tn or so of the last decade of computing is now being modernised to this new way of doing computing.”
Huang unveiled Rubin, the firm’s first extreme-codesigned, six-chip AI platform now in full production, and introduced Alpamayo, an open reasoning model family for autonomous vehicle development, part of a push to bring AI into every domain.
Huang also emphasised the role of Nvidia’s open models across every domain, trained on supercomputers and forming a global ecosystem of intelligence on which developers and enterprises can build.
“Every single six months, a new model is emerging, and these models are getting smarter and smarter,” Huang said. “Because of that, you could see the number of downloads has exploded.”
Introducing the audience to pioneering American astronomer Vera Rubin, after whom Nvidia named its computing platform, Huang said the Rubin platform, the successor to the Blackwell architecture, is now in full production.
Nvidia’s open models trained on its supercomputers are powering breakthroughs across healthcare, climate science, robotics, embodied intelligence and autonomous driving.
“Now on top of this platform, Nvidia is a frontier AI model builder, and we build it in a very special way,” he said. “We build it completely in the open so that we can enable every company, every industry, every country, to be part of this AI revolution.”
The portfolio spans six domains – Clara for healthcare, Earth-2 for climate science, Nemotron for reasoning and multimodal AI, Cosmos for robotics and simulation, GR00T for embodied intelligence and Alpamayo for autonomous driving – creating a foundation for innovation across industries.
“These models are open to the world,” Huang said, underscoring Nvidia’s role as a frontier AI builder with world-class models topping leaderboards. “You can create the model, evaluate it, guardrail it and deploy it.”
Huang emphasised that AI’s future is not only about supercomputers; it is also personal. He showed a demo featuring a personalised AI agent running locally on the DGX Spark desktop supercomputer and embodied in a Reachy Mini robot using Hugging Face models, showing how open models, model routing and local execution turn agents into responsive, physical collaborators.
“The amazing thing is that is utterly trivial now, but yet, just a couple of years ago, that would have been impossible, absolutely unimaginable,” Huang said.
AI, he said, was now grounded in the physical world, through Nvidia’s technologies for training, inference and edge computing. These systems can be trained on synthetic data in virtual worlds long before interacting with the real world. Huang showcased Nvidia Cosmos open world foundation models trained on videos, robotics data and simulation.
Building on this, Huang announced Alpamayo, an open portfolio of reasoning vision-language-action models, simulation blueprints and datasets enabling level-four-capable autonomy.
“Not only does it take sensor input and activates steering wheel, brakes and acceleration, it also reasons about what action it is about to take,” Huang said, teeing up a video showing a vehicle smoothly navigating busy San Francisco traffic.
Huang announced that the first passenger car featuring Alpamayo, built on the Nvidia Drive full-stack autonomous vehicle platform, will be on the roads soon: the all-new Mercedes-Benz CLA, with AI-defined driving coming to the USA this year following the CLA’s recent Euro NCAP five-star safety rating.
Huang also highlighted growing momentum behind Drive Hyperion, an open, modular, level-four-ready platform adopted by automakers, suppliers and robotaxi providers worldwide.
“Our vision is that, someday, every single car, every single truck will be autonomous, and we’re working towards that future,” Huang said.
Huang was joined on stage by a pair of tiny beeping, booping, hopping robots as he explained how the full‑stack approach is fuelling a global physical AI ecosystem. He rolled a video showing how robots are trained in Nvidia Isaac Sim and Isaac Lab in photorealistic, simulated worlds, before highlighting the work of partners in physical AI across the industry, including Synopsys, Cadence, Boston Dynamics and Franka.
Huang also appeared with Siemens (www.siemens.com) CEO Roland Busch at the company’s keynote to announce an expanded partnership, supported by a montage showing how Nvidia’s full stack integrates with Siemens industrial software, enabling physical AI from design and simulation through production.
“These manufacturing plants are going to be essentially giant robots,” Huang said. “Our job is to create the entire stack so that all of you can create incredible applications for the rest of the world.”
For more on Nvidia, go to www.nvidia.com.


