Everybody Dance Now

A team from UC Berkeley released a short video and an accompanying paper in which they propose the following:

We propose a method to transfer motion between human subjects in different videos. Given two videos – one of a target person whose appearance we wish to synthesize, and the other of a source subject whose motion we wish to impose onto our target person – we transfer motion between these subjects via an end-to-end pixel-based pipeline.

>>> Everybody Dance Now, paper
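
To make the idea of a pose-based, pixel-level transfer pipeline concrete, here is a minimal sketch of the overall flow: estimate 2D pose from the source video, globally normalize that pose into the target subject's coordinate frame, and feed a rasterized pose map to a learned pose-to-image generator. The pose estimator and generator below are placeholders (a real system would use a pretrained 2D pose detector and an adversarially trained image-to-image network trained on the target person); the statistics, function names, and shapes are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np


def estimate_pose(frame: np.ndarray) -> np.ndarray:
    """Placeholder pose estimator: returns (num_joints, 2) keypoints.
    A real pipeline would run a pretrained 2D pose detector here."""
    h, w = frame.shape[:2]
    rng = np.random.default_rng(0)
    return rng.uniform([0, 0], [w, h], size=(18, 2))


def normalize_pose(src_pose, src_stats, tgt_stats):
    """Globally rescale and translate source keypoints so their overall
    height and ankle position match statistics from the target video."""
    scale = tgt_stats["height"] / src_stats["height"]
    return (src_pose - src_stats["ankle"]) * scale + tgt_stats["ankle"]


def render_pose_map(pose, shape):
    """Rasterize keypoints into a sparse 'stick figure' map that the
    generator consumes as its conditioning input."""
    canvas = np.zeros(shape[:2], dtype=np.float32)
    for x, y in pose.astype(int):
        if 0 <= y < shape[0] and 0 <= x < shape[1]:
            canvas[y, x] = 1.0
    return canvas


def generate_target_frame(pose_map):
    """Placeholder for the learned pose-to-image generator. In the paper
    this is a GAN trained on frames of the target person."""
    return np.repeat(pose_map[..., None], 3, axis=-1)  # fake RGB output


# Toy usage: transfer one source frame's motion onto the target subject.
source_frame = np.zeros((256, 256, 3), dtype=np.float32)
src_stats = {"height": 180.0, "ankle": np.array([128.0, 250.0])}  # assumed
tgt_stats = {"height": 150.0, "ankle": np.array([128.0, 240.0])}  # assumed

pose = estimate_pose(source_frame)
pose = normalize_pose(pose, src_stats, tgt_stats)
pose_map = render_pose_map(pose, source_frame.shape)
synthesized = generate_target_frame(pose_map)
print(synthesized.shape)  # (256, 256, 3)
```

Run per frame over the source video, this produces a sequence of synthesized frames of the target person mimicking the source subject's motion; the normalization step is what keeps the transferred pose plausible when the two subjects differ in height or framing.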