Everyone has witnessed the Internet's explosive growth over recent decades and is familiar with its benefits and shortcomings. According to Nvidia, the Internet is about to change again, morphing into more of a virtual reality that will further revolutionize how we do business and get entertained.
“The next big evolution of the Internet will be fueled by advances in computer graphics, the metaverse, and robotics, all linked by AI,” said Jensen Huang, CEO of Nvidia, during a company-sponsored presentation at the recent SIGGRAPH conference in Vancouver, British Columbia, Canada. The Internet will see 2-D web pages become 3-D pages.
Huang expects future web content to be created not with HTML, but with neural graphics, in which AI and graphics are intertwined to create objects and patterns that learn from data. This combination is expected to yield more realistic metaverse graphic elements.
“Neural graphics will provide new opportunities for graphics artists and creators,” added Sanja Fidler, Vice President of AI Research at Nvidia. She envisioned scenarios where characters can apply reinforcement learning to improve motion sequences, making for richer, more complete animation. Fidler also expects neural graphics to make content creation less time-consuming.
Just as the Internet bases its content on HTML, the language used to create neural graphics will revolve around USD, or Universal Scene Description, said Rev Lebaredian, Vice President of Omniverse and Simulation Technology for Nvidia. “USD is evolving along a path similar to HTML,” he said. Lebaredian noted that Nvidia has made enhancements to its algorithms to make USD easier for developers to work with. “We will build Omniverse (Nvidia’s collaboration and simulation platform) as a USD engine and an open toolkit to build USD pipelines.”
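For readers unfamiliar with USD, it is a human-readable (and binary) scene-description format originally developed by Pixar. A minimal scene in its text (.usda) form might look like the sketch below; the prim names and values here are illustrative, not drawn from any Nvidia example:

```
#usda 1.0
(
    doc = "A minimal illustrative USD scene"
)

def Xform "Scene"
{
    def Sphere "Ball"
    {
        double radius = 0.5
        color3f[] primvars:displayColor = [(1.0, 0.0, 0.0)]
    }
}
```

Much as HTML nests elements inside a page, USD nests "prims" (such as the `Xform` and `Sphere` above) into a hierarchy, and layers can be composed and overridden, which is what makes it suited to collaborative pipelines like Omniverse.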
Nvidia has enhanced some of its tools to produce more realistic graphics and animation sequences. The company has launched the Avatar Cloud Engine (ACE), which will make it easier to build and customize virtual assistants and digital humans. The company has also revised its Omniverse Audio2Face app to simplify animation of a 3D character by matching virtually any voice-over track.
“Creating digital humans is complex,” said Simon Yuen, Nvidia’s Senior Director of Avatar Technology. “Everything must update in a few seconds. The Audio2Face app has added more features to add emotion to an avatar. We can now create more realistic muscle and emotion sequences.”
Spencer Chin is a Senior Editor for Design News covering the electronics beat. He has many years of experience covering developments in components, semiconductors, subsystems, power, and other aspects of electronics from both a business/supply-chain and technology perspective. He can be reached at [email protected]