Jun 22, 2022

Nvidia’s New AI Model can Convert still Images to 3D Graphics

Posted in categories: materials, robotics/AI


Nvidia has made another attempt to add depth to flat graphics. After converting 2D images into 3D scenes, models, and videos, the company has turned its focus to editing. The GPU giant today unveiled a new AI method that transforms still photos into 3D objects creators can modify with ease. Nvidia researchers have developed a new inverse rendering pipeline, Nvidia 3D MoMa, that lets users reconstruct a series of still photos into a 3D computer model of an object, or even a scene. The key benefit of this workflow over more traditional photogrammetry methods is that it outputs clean 3D models that 3D gaming and visual engines can import and edit out of the box.
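Inverse rendering works by optimizing scene parameters (shape, materials, lighting) until a renderer's output matches the input photographs. As an illustration of that idea only, not Nvidia's actual method, here is a toy sketch that recovers a single unknown material parameter (a scalar albedo) by gradient descent against "observed" pixel values; the trivial forward model `pixel = albedo * light` and all names are assumptions standing in for a real differentiable renderer.

```python
import numpy as np

# Toy inverse-rendering sketch (illustrative only, not Nvidia 3D MoMa):
# fit an unknown albedo so a trivial forward model reproduces observations.
rng = np.random.default_rng(0)
lights = rng.uniform(0.2, 1.0, size=64)   # known per-pixel lighting
true_albedo = 0.7                          # the value we want to recover
observed = true_albedo * lights            # the "photographs"

albedo = 0.1                               # initial guess
lr = 0.5
for _ in range(200):
    rendered = albedo * lights
    # gradient of the mean squared error with respect to albedo
    grad = 2.0 * np.mean((rendered - observed) * lights)
    albedo -= lr * grad

print(round(albedo, 4))  # converges to the true albedo, 0.7
```

Real pipelines optimize millions of parameters (mesh vertices, texture maps, environment lighting) the same way, which is why differentiable rendering is the core of this class of technique.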

According to reports, while other photogrammetry programs can turn 2D images into 3D models, Nvidia's 3D MoMa technology takes it a step further by producing mesh, material, and lighting information for the subject and outputting it in a format compatible with existing 3D graphics engines and modeling tools. It all happens in a relatively short timeframe, too: Nvidia says 3D MoMa can generate triangle mesh models within an hour on a single Nvidia Tensor Core GPU.
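The "compatible format" point is what makes the output usable out of the box: triangle meshes are commonly exchanged in standard files such as Wavefront OBJ, which virtually every engine and modeling tool imports. As a hedged illustration of that interchange format, and not of 3D MoMa's actual output, here is a minimal parse of a made-up inline OBJ mesh (a tetrahedron):

```python
# Minimal sketch of reading a triangle mesh in Wavefront OBJ form, a
# standard interchange format that game engines and modeling tools import.
# The tiny inline mesh is a fabricated example, not 3D MoMa output.
obj_text = """\
v 0.0 0.0 0.0
v 1.0 0.0 0.0
v 0.0 1.0 0.0
v 0.0 0.0 1.0
f 1 2 3
f 1 3 4
f 1 4 2
f 2 4 3
"""

vertices, faces = [], []
for line in obj_text.splitlines():
    parts = line.split()
    if not parts:
        continue
    if parts[0] == "v":                     # vertex position: x y z
        vertices.append(tuple(float(x) for x in parts[1:4]))
    elif parts[0] == "f":                   # triangle face, 1-based indices
        faces.append(tuple(int(i) - 1 for i in parts[1:4]))

print(len(vertices), len(faces))            # a tetrahedron: 4 vertices, 4 faces
```

A full pipeline would also carry material (`.mtl`) and lighting data alongside the geometry, which is exactly the extra information 3D MoMa is reported to emit.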

David Luebke, Nvidia’s VP of graphics research, described the technique to India Today as “a holy grail unifying computer vision and computer graphics.”
