Hit refresh. The site linked below, created by Philip Wang, a software engineer at Uber, uses research by chip designer Nvidia to create an endless stream of fake portraits. The algorithm behind it is trained on a huge dataset of real images, then uses a type of neural network known as a generative adversarial network (or GAN) to fabricate new examples.
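Under the hood, a GAN is two networks locked in a contest: a discriminator learns to tell real portraits from fakes, while a generator learns to fool it. Here's a minimal numpy sketch of the two loss functions (the scores and the `gan_losses` helper are illustrative assumptions, not code from Wang or Nvidia):

```python
import numpy as np

def gan_losses(d_real, d_fake):
    """Binary cross-entropy losses for the two players of a GAN.

    d_real: discriminator scores on real images (it wants these -> 1)
    d_fake: discriminator scores on generated images (it wants these -> 0)
    """
    eps = 1e-12  # guard against log(0)
    # Discriminator's loss: low when it cleanly separates real from fake.
    d_loss = -np.mean(np.log(d_real + eps)) - np.mean(np.log(1.0 - d_fake + eps))
    # Generator's (non-saturating) loss: low when D is fooled by its fakes.
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# A confident, correct discriminator: easy round for D, painful for G.
d_loss, g_loss = gan_losses(np.array([0.99]), np.array([0.01]))

# At the theoretical equilibrium the fakes are indistinguishable from the
# real data, so the best D can do is output 0.5 everywhere, and its loss
# settles at 2*log(2) ~= 1.386.
eq_d_loss, _ = gan_losses(np.array([0.5]), np.array([0.5]))
```

Training alternates gradient steps on these two losses until the discriminator can do no better than a coin flip, which is when the fakes start looking real.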
A team from UC Berkeley created a short video and paper demonstrating the following:
We propose a method to transfer motion between human subjects in different videos. Given two videos – one of a target person whose appearance we wish to synthesize, and the other of a source subject whose motion we wish to impose onto our target person – we transfer motion between these subjects via an end-to-end pixel-based pipeline.
>>> Everybody Dance Now, paper
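The pipeline works on an intermediate pose representation: extract a stick figure from each source frame, retarget it to the target subject, then have a GAN render the target in that pose. The retargeting step might look roughly like this (the keypoints, `target_stats`, and `normalize_pose` are invented for illustration, not the authors' code):

```python
import numpy as np

# Hypothetical 2D pose keypoints (x, y) for one frame of the source dancer.
source_pose = np.array([[0.20, 0.10],
                        [0.25, 0.40],
                        [0.30, 0.90]])

# Made-up statistics for where the target person stands in their own video
# and how large they appear relative to the source subject.
target_stats = {"mean": np.array([0.5, 0.5]), "scale": 2.0}

def normalize_pose(pose, stats):
    """Retarget the source pose to the target subject's position and
    apparent size -- a rough stand-in for the paper's global pose
    normalization, done before the GAN renders the target person."""
    centered = pose - pose.mean(axis=0)
    return centered * stats["scale"] + stats["mean"]

retargeted = normalize_pose(source_pose, target_stats)
```

Without a step like this, a tall dancer's moves would render a short target with impossibly stretched limbs.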
Remember when you had to explain to someone what the Internet was? Now imagine having to explain that you, yes you, created the Internet. Amazing. In memory of Lawrence Roberts.
What a joke. You spend that kind of money, you’d expect it to be structurally sound. C’mon!
Researchers at Purdue University have created a new plastic material that can reliably conduct electricity in environments up to 220 degrees Celsius (428 F).
What’s most impressive about this new material isn’t its ability to conduct electricity in extreme temperatures, but that its performance doesn’t seem to change. Usually, the performance of electronics depends on temperature – think about how fast your laptop would work in your climate-controlled office versus the Arizona desert. The performance of this new polymer blend remains stable across a wide temperature range.
Extreme-temperature electronics might be useful for scientists in Antarctica or travelers wandering the Sahara, but they’re also critical to the functioning of cars and planes everywhere. In a moving vehicle, the exhaust is so hot that sensors can’t be too close and fuel consumption must be monitored remotely. If sensors could be directly attached to the exhaust, operators would get a more accurate reading. This is especially important for aircraft, which have hundreds of thousands of sensors.
“A lot of applications are limited by the fact that these plastics will break down at high temperatures, and this could be a way to change that,” said Brett Savoie, a professor of chemical engineering at Purdue. “Solar cells, transistors and sensors all need to tolerate large temperature changes in many applications, so dealing with stability issues at high temperatures is really critical for polymer-based electronics.”
The day robots fly is a day I won’t fly, but at least that day’s not tomorrow.
Will Knight for the MIT Technology Review:
Developed by chipmaker Nvidia, the software won’t just make life easier for software developers. It could also be used to auto-generate virtual environments for virtual reality or for teaching self-driving cars and robots about the world.
“We can create new sketches that have never been seen before and render those,” says Bryan Catanzaro, vice president of applied deep learning at Nvidia. “We’re actually teaching the model how to draw based on real video.”
Nvidia’s researchers used a standard machine learning approach to identify different objects in a video scene: cars, trees, buildings, and so forth. The team then used what’s known as a generative adversarial network, or GAN, to train a computer to fill the outline of objects with realistic 3D imagery.
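The conditioning signal fed to such a GAN is typically the per-pixel label map itself, expanded into one channel per class. A toy sketch of that encoding step (the class list and label map below are invented for illustration):

```python
import numpy as np

# Toy semantic map: each pixel labeled with an object class, the way the
# pipeline first segments a video frame before the GAN renders it.
CLASSES = ["road", "car", "tree", "building"]
seg = np.array([
    [0, 0, 1, 1],
    [0, 0, 1, 3],
    [2, 2, 3, 3],
])

def one_hot(seg_map, n_classes):
    """Expand an HxW label map into the HxWxC tensor a conditional GAN
    generator would consume: one binary channel per object class."""
    h, w = seg_map.shape
    out = np.zeros((h, w, n_classes), dtype=np.float32)
    out[np.arange(h)[:, None], np.arange(w)[None, :], seg_map] = 1.0
    return out

cond = one_hot(seg, len(CLASSES))
# Exactly one channel is "on" at every pixel; the generator's job is to
# paint realistic texture into each labeled region.
```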
Software developers can also use this.
Catanzaro says the approach could lower the barrier for game design. Besides rendering whole scenes, the approach could be used to add a real person to a video game after feeding it a few minutes of video footage of them in real life. He suggests that the approach could also help render realistic settings for virtual reality, or provide synthetic training data for autonomous vehicles or robots. “You can’t realistically get real training data for every situation that might pop up,” he says. The work was announced today at NeurIPS, a major AI conference in Montreal, Canada.
The cars portion is the most exciting for me: as autonomous cars become more prevalent, we’ll need better imaging and better ways to move these cars about.
Tristan Greene for The Next Web:
A team of scientists from Cornell University recently published research indicating they’d successfully replicated proprioception in a soft robot. Today, this means they’ve taught a piece of wriggly foam how to understand the position of its body and how external forces (like gravity or Jason Voorhees’ machete) are acting upon it.
The researchers accomplished this by replicating an organic nervous system using a network of fiber optic cables. In theory, this is an approach that could eventually be applied to humanoid robots – perhaps connecting external sensors to the fiber network and transmitting sensation to the machine’s processor – but it’s not quite there yet.
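"Knowing where your body is" boils down to inverting internal sensor readings into a body configuration. As a rough, hypothetical analogy: calibrate how much light a fiber loses as a function of bend angle, then invert that fit at run time (all numbers below are invented; the real system uses a far richer network of fibers):

```python
import numpy as np

# Invented calibration data: light attenuation in a stretchable optical
# fiber versus the bend angle of the limb it runs through.
angles_deg = np.array([0.0, 15.0, 30.0, 45.0, 60.0])
loss_db = np.array([0.02, 0.31, 0.59, 0.92, 1.18])

# Fit attenuation -> angle, so a raw reading can be turned into a pose.
slope, intercept = np.polyfit(loss_db, angles_deg, 1)

def estimate_angle(reading_db):
    """The robot's 'sense' of its own limb angle from one fiber reading."""
    return slope * reading_db + intercept

est = estimate_angle(0.75)
```

Scale that inversion up to many fibers and you get a soft robot that can report its whole-body shape, plus where an external force (machete or otherwise) is pressing on it.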
The penalty for ESPN’s failure to adapt has been severe. Disney’s recent earnings revealed that ESPN lost another 2 million regular viewers this year. And while ESPN still has 86 million regular viewers, that’s a 14 million regular viewer dip from the 100 million regular viewers it enjoyed in 2011. Those 14 million lost users generated around $1.44 billion per year for the “worldwide leader in sports,” which is still saddled with the severe costs of set redesigns and sports licensing contracts the company struck while it was busy not seeing the massive locomotive of market change bearing down upon it.
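That $1.44 billion figure implies a per-subscriber fee you can back out in two lines, and it lines up with the monthly carriage fee ESPN is widely reported to charge distributors:

```python
# Back out the implied per-subscriber carriage fee from the article's
# numbers: 14 million lost subscribers costing $1.44 billion per year.
lost_subscribers = 14_000_000
annual_revenue_lost = 1.44e9

per_sub_monthly = annual_revenue_lost / lost_subscribers / 12
print(f"${per_sub_monthly:.2f}/month")  # prints "$8.57/month"
```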
Adapt or die: cord cutting at its finest.