Everything Announced at Nvidia’s CES Event in 12 Minutes
At CES 2025, Nvidia CEO Jensen Huang kicked off the world's largest consumer electronics show with a new RTX gaming chip, updates on its Grace Blackwell AI chip, and plans to push deeper into robotics and autonomous cars. Highlights from his keynote remarks follow.

Here it is: our brand-new GeForce RTX 50 series, built on the Blackwell architecture. The GPU is just a beast: 92 billion transistors, 4,000 AI TOPS, 4 petaflops of AI performance, three times higher than the last generation, Ada, and we need all of it to generate the pixels I showed you. It has 380 ray-tracing teraflops, so that for the pixels we do have to compute, we can compute the most beautiful image possible, and 125 shader teraflops. There are concurrent shader teraflops as well as an integer unit of equal performance: dual shaders, one for floating point and one for integer. GDDR7 memory from Micron delivers 1.8 terabytes per second, twice the performance of our last generation, and we now have the ability to intermix AI workloads with computer-graphics workloads. One of the amazing things about this generation is that the programmable shader can also process neural networks. Because the shader can carry these neural networks, we invented neural texture compression and neural material shading.

With the Blackwell family, the RTX 5070 delivers 4090 performance at $549. Impossible without artificial intelligence, impossible without the AI Tensor Cores, impossible without the GDDR7 memory. So: 5070, 4090 performance, $549. And here is the whole family, starting from the 5070 all the way up to the 5090, which has twice the performance of a 4090. We are producing at very large scale, with availability starting in January.

It is incredible, but we managed to put these gigantic-performance GPUs into a laptop. This is a 5070 laptop for $1,299, and it has 4090 performance. Even the 5090 will fit into a thin laptop; that laptop is 14.9 millimeters thick. There are also 5080, 5070 Ti, and 5070 laptops.

Then there is Grace Blackwell. What we basically have here is 72 Blackwell GPUs, or 144 dies. This one chip here is 1.4 exaflops. The world's fastest supercomputer, which fills an entire room, only recently achieved more than an exaflop; this is 1.4 exaflops of AI floating-point performance. It has 14 terabytes of memory, and here is the amazing thing: the memory bandwidth is 1.2 petabytes per second. That is basically the entire world's internet traffic right now being processed across these chips. In total there are 130 trillion transistors, 2,592 CPU cores, and a whole bunch of networking. These are the Blackwells, these are our ConnectX networking chips, these are the NVLink chips, and the NVLink spine is not something we can really show here. And these are all of the HBM memories, 14 terabytes of HBM memory. This is what we are trying to do, and this is the miracle of the Blackwell system.
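As a rough sanity check on the rack-scale figures above, here is a short back-of-the-envelope sketch in Python. It only restates the aggregate numbers quoted in the keynote; the per-GPU shares it prints are simple divisions added here for illustration and are not Nvidia-quoted specifications.

```python
# Back-of-the-envelope look at the Grace Blackwell NVL72 figures quoted in the keynote.
# Aggregate numbers are the keynote's; the per-GPU shares are derived here for illustration.

GPUS_PER_RACK = 72            # 72 Blackwell GPUs (144 dies)
AI_EXAFLOPS = 1.4             # 1.4 exaflops of AI floating-point performance
HBM_TERABYTES = 14            # roughly 14 TB of HBM memory
BANDWIDTH_PB_PER_S = 1.2      # 1.2 petabytes per second of memory bandwidth, as quoted

per_gpu_petaflops = AI_EXAFLOPS * 1000 / GPUS_PER_RACK   # exaflops -> petaflops, split per GPU
per_gpu_hbm_gb = HBM_TERABYTES * 1000 / GPUS_PER_RACK    # terabytes -> gigabytes, split per GPU

print(f"AI compute per GPU:   ~{per_gpu_petaflops:.0f} petaflops")
print(f"HBM capacity per GPU: ~{per_gpu_hbm_gb:.0f} GB")
print(f"Aggregate memory bandwidth (as quoted): {BANDWIDTH_PB_PER_S} PB/s")
```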
We fine-tune open Llama models using our expertise and our capabilities and turn them into the Llama Nemotron suite of open models. There are small ones that respond with very fast response times; the Llama Nemotron Supers are basically the mainstream versions; and the Ultra model can be used as a teacher model for a whole bunch of other models. It can be a reward-model evaluator, a judge for other models: they create answers, and it decides whether an answer is good or not and gives them feedback. It can be distilled in a lot of different ways. It is basically a teacher model, a knowledge-distillation model, very large and very capable, and all of this is now available online.

Then there is NVIDIA Cosmos, the world's first world foundation model. It is trained on 20 million hours of video, and that video focuses on physical, dynamic things: dynamic nature, humans walking, hands moving and manipulating objects, fast camera movements. It is really about teaching the AI to understand the physical world, not about generating creative content. With this physical AI there are many downstream things we can do. We can do synthetic data generation to train models. We can distill it into what is effectively the beginning of a robotics model. You can have it generate multiple physically based, physically plausible scenarios of the future, basically do a Doctor Strange. And because this model understands the physical world, it can also do captioning: it can take videos and caption them incredibly well, and that captioning together with the video can be used to train multimodal large language models. So you can use this one foundation model to train robots as well as large language models. The Cosmos platform has an autoregressive model for real-time applications, a diffusion model for very high-quality image generation, an incredible tokenizer that basically learns the vocabulary of the real world, and a data pipeline: because there is so much data involved, we have accelerated everything end to end, CUDA-accelerated and AI-accelerated, so that you can take all of this and train it on your own data. Today we are announcing that Cosmos is open-licensed and available on GitHub.

We are also announcing that our next-generation processor for the car, our next-generation computer for the car, is called Thor. I have one right here: this is Thor, and it is a robotics computer. It takes a staggering amount of sensor information, umpteen cameras, high-resolution radar, lidar, all coming into this chip; it has to process all of that sensor data, turn it into tokens, put them into a transformer, and predict the next path. This AV computer is now in full production. Thor has 20 times the processing capability of our last generation, Orin, which is really the standard for autonomous vehicles today, so this is just quite incredible. This robotics processor, by the way, also goes into a full robot: it could be an AMR, it could be a humanoid robot, it could be the brain, it could be the manipulator. This processor is basically a universal robotics computer.
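The description of Thor, which takes raw camera, radar, and lidar streams, turns them into tokens, and feeds a transformer that predicts the next path, maps onto a now-common pattern in end-to-end driving models. The sketch below is a minimal, hypothetical PyTorch illustration of that pattern; every module name, dimension, and tokenization choice here is an assumption made for illustration, not Nvidia's actual Thor or DRIVE software.

```python
# Minimal, hypothetical sketch of the "sensors -> tokens -> transformer -> next path"
# pipeline described for Thor. All sizes and module names are illustrative assumptions.
import torch
import torch.nn as nn

class SensorTokenizer(nn.Module):
    """Projects flattened per-sensor features into a shared token space."""
    def __init__(self, feature_dim: int, d_model: int = 256):
        super().__init__()
        self.proj = nn.Linear(feature_dim, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, feature_dim) -> (batch, num_tokens, d_model)
        return self.proj(x)

class TrajectoryTransformer(nn.Module):
    """Fuses sensor tokens and predicts the next few waypoints of the ego path."""
    def __init__(self, d_model: int = 256, horizon: int = 8):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, horizon * 2)   # (x, y) offset per future step
        self.horizon = horizon

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        fused = self.encoder(tokens)                  # (batch, num_tokens, d_model)
        pooled = fused.mean(dim=1)                    # simple pooling over all sensor tokens
        return self.head(pooled).view(-1, self.horizon, 2)

# Toy usage: 64 camera tokens plus 32 radar/lidar tokens per frame, each with 512 features.
camera = torch.randn(1, 64, 512)
radar_lidar = torch.randn(1, 32, 512)
tokenizer = SensorTokenizer(feature_dim=512)
model = TrajectoryTransformer()
tokens = torch.cat([tokenizer(camera), tokenizer(radar_lidar)], dim=1)
waypoints = model(tokens)                             # (1, 8, 2) predicted path offsets
print(waypoints.shape)
```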
The ChatGPT moment for general robotics is just around the corner. In fact, all of the enabling technologies I have been talking about are going to make it possible for us, over the next several years, to see very rapid, surprising breakthroughs in general robotics. The reason general robotics is so important is that whereas robots with tracks and wheels require special environments to accommodate them, there are three robots we can build that require no greenfields; brownfield adaptation is perfect. If we can build these amazing robots, we can deploy them in exactly the world we have built for ourselves. The three are: first, agentic robots and agentic AI, because they are information workers, and as long as they can accommodate the computers we have in our offices, it is going to be great; second, self-driving cars, because we have spent more than a hundred years building roads and cities; and third, humanoid robots. If we have the technology to solve these three, this will be the largest technology industry the world has ever seen.

Finally, this is Nvidia's latest AI supercomputer. For now it is called Project Digits, and if you have a good name for it, reach out to us. Here is the amazing thing: this is an AI supercomputer that runs the entire Nvidia AI stack. All of Nvidia's software runs on it; DGX Cloud runs on it. It sits somewhere, wireless or connected to your computer; it can even be a workstation if you like, and you can reach it like a cloud supercomputer, and Nvidia's AI works on it. It is based on a super-secret chip we have been working on called GB110, the smallest Grace Blackwell we make, and that chip is in production. The Grace CPU inside it was built for Nvidia in collaboration with MediaTek, the world's leading SoC company, as a CPU SoC connected chip-to-chip over NVLink to the Blackwell GPU. This little thing is in full production, and we expect this computer to be available around the May time frame.
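Project Digits is pitched as a desk-side machine that runs the entire Nvidia AI stack and can be reached like a cloud supercomputer. Purely as an illustration, and assuming only that a CUDA-enabled PyTorch build is part of that stack (nothing below is specific to the GB110 chip or to Project Digits), this is the kind of day-one sanity check a developer might run on such a box:

```python
# Generic GPU sanity check for a desk-side AI box. Assumes only a CUDA-enabled
# PyTorch install; nothing here is specific to Project Digits or the GB110 chip.
import time
import torch

if not torch.cuda.is_available():
    raise SystemExit("No CUDA device visible; check drivers and the installed stack.")

device = torch.device("cuda:0")
print("Device:", torch.cuda.get_device_name(device))

# Time a large half-precision matrix multiply as a rough throughput probe.
n = 8192
a = torch.randn(n, n, device=device, dtype=torch.float16)
b = torch.randn(n, n, device=device, dtype=torch.float16)
torch.cuda.synchronize()
start = time.time()
for _ in range(10):
    c = a @ b
torch.cuda.synchronize()
elapsed = (time.time() - start) / 10          # average seconds per matmul
tflops = 2 * n**3 / elapsed / 1e12            # 2*n^3 FLOPs per n x n matmul
print(f"Approx. FP16 matmul throughput: {tflops:.1f} TFLOPS")
```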