Will Knight for the MIT Technology Review:
Developed by chipmaker Nvidia, the software won’t just make life easier for software developers. It could also be used to auto-generate virtual environments for virtual reality or for teaching self-driving cars and robots about the world.
“We can create new sketches that have never been seen before and render those,” says Bryan Catanzaro, vice president of applied deep learning at Nvidia. “We’re actually teaching the model how to draw based on real video.”
Nvidia’s researchers used a standard machine learning approach to identify different objects in a video scene: cars, trees, buildings, and so forth. The team then used what’s known as a generative adversarial network, or GAN, to train a computer to fill the outline of objects with realistic 3D imagery.
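To make the adversarial part of that pipeline concrete, here is a toy sketch of the idea in NumPy: a "segmentation map" of object labels conditions a generator that must fill each location with pixel values, while a discriminator tries to tell generated fills from real ones. Everything here — the single-linear-layer networks, the fixed gray levels standing in for "realistic imagery" — is an illustrative assumption of mine; Nvidia's actual system uses deep convolutional networks on video, not this miniature.

```python
import numpy as np

rng = np.random.default_rng(0)

H, W, CLASSES = 8, 8, 3
D_IN = H * W * CLASSES          # one-hot segmentation map, flattened
NOISE = 16                      # random noise fed to the generator
D_OUT = H * W                   # one grayscale "pixel" per location

def one_hot(seg):
    return np.eye(CLASSES)[seg].reshape(-1)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Generator parameters: image = Wg @ [seg, noise] + bg
Wg = rng.normal(0, 0.1, (D_OUT, D_IN + NOISE))
bg = np.zeros(D_OUT)
# Discriminator parameters: logit = wd . [seg, image] + bd
wd = rng.normal(0, 0.1, D_IN + D_OUT)
bd = 0.0

def real_fill(seg):
    # Stand-in "real" imagery: each object class renders as a fixed
    # gray level plus a little noise.
    levels = np.array([0.2, 0.5, 0.8])
    return levels[seg].reshape(-1) + rng.normal(0, 0.05, D_OUT)

lr = 0.05
for step in range(200):
    seg = rng.integers(0, CLASSES, (H, W))   # random label map
    s = one_hot(seg)
    z = rng.normal(0, 1, NOISE)
    g_in = np.concatenate([s, z])
    fake = Wg @ g_in + bg
    real = real_fill(seg)

    # Discriminator update: ascend log D(real) + log(1 - D(fake)).
    for img, label in ((real, 1.0), (fake, 0.0)):
        x = np.concatenate([s, img])
        p = sigmoid(wd @ x + bd)
        grad = label - p            # gradient of log-likelihood wrt logit
        wd += lr * grad * x
        bd += lr * grad

    # Generator update (non-saturating loss): ascend log D(fake).
    x = np.concatenate([s, fake])
    p = sigmoid(wd @ x + bd)
    dimg = (1.0 - p) * wd[D_IN:]    # d log D(fake) / d fake pixels
    Wg += lr * np.outer(dimg, g_in)
    bg += lr * dimg

print("generated fill shape:", fake.shape)
```

The two-player loop is the essence of a GAN: the discriminator's gradient pushes it to separate real from generated fills, and the generator's gradient flows back through the discriminator's pixel weights, nudging it toward outputs the discriminator scores as real.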
Software developers stand to benefit from this, too:
Catanzaro says the approach could lower the barrier for game design. Besides rendering whole scenes, the approach could be used to add a real person to a video game after feeding it a few minutes of video footage of them in real life. He suggests that the approach could also help render realistic settings for virtual reality, or provide synthetic training data for autonomous vehicles or robots. “You can’t realistically get real training data for every situation that might pop up,” he says. The work was announced today at NeurIPS, a major AI conference in Montreal, Canada.
The cars portion is the most exciting part for me: as autonomous cars become more prevalent, we’ll need better imaging and better ways to train these cars to navigate the world.