When Bill Dally joined Nvidia’s research laboratory in 2009, it employed only about a dozen people and focused on ray tracing, a rendering technique used in computer graphics.
That research lab now employs more than 400 people, who have helped transform Nvidia from a 1990s video game GPU startup into a $4 trillion company powering the boom in artificial intelligence.
Today, the company’s research lab is focused on developing the technology needed to power robotics and AI, and some of that work is already showing up in products. On Monday, the company unveiled a new set of world AI models, libraries, and other infrastructure for robotics developers.
Dally, now Nvidia’s chief scientist, began consulting for Nvidia in 2003 while on the faculty at Stanford. When he was ready to step down from leading Stanford’s computer science department a few years later, he planned to take a sabbatical. Nvidia had a different idea.
David Kirk, who led the research lab at the time, and Nvidia CEO Jensen Huang thought a more permanent position at the lab was a better idea. Dally told TechCrunch that the pair put on a “full-court press” about why he should join Nvidia’s research lab and ultimately convinced him.
“It ended up being kind of a perfect fit for my interests and my talents,” Dally said. “I think everybody is always looking for the place in life where they can make the biggest contribution to the world. And I think for me, it’s definitely Nvidia.”
When Dally took over the lab in 2009, expansion was the first priority. Researchers immediately began working on areas beyond ray tracing, including circuit design and VLSI, or very-large-scale integration, a process that combines millions of transistors on a single chip.
The research lab has kept growing ever since.
“We try to figure out what will make the most positive difference for the company, because we are constantly seeing exciting new areas, but for some of them, you know, people are already doing a great job, and it’s hard for us to say whether (we would have) much success,” Dally said.
For a while, that meant building better GPUs for artificial intelligence. Nvidia was early to the coming AI boom and began tinkering with the idea of AI GPUs in 2010, more than a decade before the current AI frenzy.
“We said, this is amazing, it’s going to completely change the world,” Dally said. “We have to start doubling down on this, and Jensen believed it when I told him that. We started specializing our GPUs for it and developing a lot of software to support it, and engaging with researchers around the world who were working on it, long before it was obviously relevant.”
Now, with Nvidia holding a dominant position in the AI GPU market, the tech company has started looking for new areas of demand beyond AI data centers. That search has led Nvidia to physical AI and robotics.
“I think ultimately robots are going to be a huge player in the world, and we basically want to make the brains of all the robots,” Dally said. “To do that, we need to start, you know, developing the key technologies.”
This is where Sanja Fidler, vice president of AI research at Nvidia, comes in. Fidler joined the Nvidia research lab in 2018. At the time, she was already working on simulation models for robots with a team of MIT students. When she told Huang what they were working on at a researcher reception, he was interested.
“I couldn’t resist joining,” Fidler told TechCrunch in an interview. “It’s just such, you know, it’s just such a good topic, and at the same time it was also such a great culture. You know, Jensen told me, come work with me, not with us, not for us, you know?”
She joined Nvidia and got to work building out a research lab in Toronto working on Omniverse, an Nvidia platform focused on building simulations for physical AI.
The first challenge in building these simulated worlds was finding the necessary 3D data, Fidler said. That included finding a sufficient volume of potential images to use and building the technology needed to turn those images into 3D representations that simulators could use.
“We invested in this technology called differentiable rendering, which essentially makes rendering amenable to AI, right?” Fidler said. “Rendering goes from 3D to the image or video, right? And we want it to go in the other direction.”
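In broad terms, a differentiable renderer produces pixels that are smooth functions of the underlying scene parameters, so the error between a rendered image and a real photo can be backpropagated to recover the scene, which is the “other direction” Fidler describes. The sketch below is illustrative only, not Nvidia’s method: the toy blob renderer, parameter names, and numbers are all invented to show the core mechanic with PyTorch autograd.

```python
# Toy sketch of the differentiable-rendering idea (illustrative only; not Nvidia's code).
# A differentiable "renderer" maps scene parameters to pixels, so an image loss can be
# backpropagated through the renderer to recover scene parameters from an observed image.
import torch

H, W = 64, 64
ys, xs = torch.meshgrid(torch.arange(H, dtype=torch.float32),
                        torch.arange(W, dtype=torch.float32), indexing="ij")

def render(cx, cy, sigma):
    # Render a soft Gaussian blob; every pixel is a smooth function of the parameters.
    dist2 = (xs - cx) ** 2 + (ys - cy) ** 2
    return torch.exp(-dist2 / (2 * sigma ** 2))

# "Observed" image produced by parameters we pretend not to know.
target = render(torch.tensor(40.0), torch.tensor(25.0), torch.tensor(10.0))

# Start from a guess and recover the parameters by gradient descent on pixel error.
params = torch.tensor([30.0, 30.0, 6.0], requires_grad=True)  # cx, cy, sigma
opt = torch.optim.Adam([params], lr=0.2)
for _ in range(1000):
    opt.zero_grad()
    loss = ((render(*params) - target) ** 2).mean()
    loss.backward()  # gradients flow from the pixels back to the scene parameters
    opt.step()

print(params.detach())  # approaches the true values (40, 25, 10)
```

In production systems of the kind the article describes, this inversion is typically learned by neural networks over large collections of real images and video rather than solved per image by optimization, but the underlying principle of pushing image gradients back into 3D parameters is the same.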
The Omniverse team released the first version of its model for turning images into 3D models, GANverse3D, in 2021. It then worked on figuring out the same process for video. Fidler said the team used data from robots and autonomous cars to create these 3D models and simulations through its Neural Reconstruction Engine, which the company first announced in 2022.
She added that these technologies became the backbone of the company’s Cosmos family of AI world models, which were announced at CES in January.
Now the lab is focused on making these models faster. When you play a video game or run a simulation, you want the technology to respond in real time, Fidler said; for robots, they are working to make the reaction time even faster than that.
“The robot doesn’t need to watch the world at the same rate the world runs,” Fidler said. “It can watch it, like, 100x faster. So if we can make these models much faster than they are today, they are going to be tremendously useful for robotics or physical AI applications.”
The company continues to make progress toward that goal. On Monday, at the SIGGRAPH computer graphics conference, Nvidia announced a slate of new world AI models designed to create synthetic data that can be used to train robots. It also announced new software libraries and infrastructure aimed at robotics developers.
Despite the progress, and the current media hype around robots, particularly humanoids, the Nvidia research team remains realistic.
Dally and Fidler said the industry is still at least a few years away from putting a humanoid robot in your home, with Fidler comparing the media hype and timeline to those around autonomous vehicles.
“We’re making huge progress, and I think, you know, AI has really been the catalyst here,” Dally said. “Starting with visual AI for robot perception, and then, you know, generative AI, which is hugely valuable for planning tasks and manipulation and movement. As we solve each of these little individual problems, and as the amount of data we have to train our networks grows, these robots are going to develop.”