CVPR Daily - 2018 - Thursday

may approach the problem as a human. We take the scene and split it into foreground and background. We also split the person into different parts and move each part separately from the others, which helps us model these complex changes in simpler ways.”

Amy adds: “I think another advantage of this being modular is that it’s easy to debug and easy to understand what each piece is doing, which is something that a lot of modern neural networks are lacking.”

Thinking about next steps, Guha says that they currently use very primitive pose information, just 2D poses, so with richer information, such as 3D poses and other annotations, he thinks they could achieve even more precise kinds of synthesis. He’d also like to explore video-based synthesis, producing better pose-driven video sequences.

Amy points out that one thing we will see at Guha’s oral session is a teaser of what a video generated with this technique looks like. The technique wasn’t designed specifically for videos; it was designed using images, but they were pleased to find that it worked very well on videos, so Amy thinks extending it would be a nice future direction. Adrian concludes by saying they are also trying to extend this to unseen poses and to people the model hasn’t been trained on at all.

To find out more about this exciting work, come along to the oral [A6] today at 12:50 in the Ballroom and the poster [J7] at 4:30-6:30 in Halls D-E.
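The modular idea Guha describes, splitting a scene into a background and independently moved foreground parts, can be sketched very roughly as below. This is only an illustrative toy in NumPy, not the paper's method: simple integer shifts stand in for the learned per-part transformations, and `composite_parts` is a hypothetical helper name.

```python
import numpy as np

def composite_parts(background, parts):
    """Composite independently moved body parts over a background.

    `parts` is a list of (image, mask, (dy, dx)) tuples; each part is
    translated by an integer pixel shift before being pasted over the
    background. A toy stand-in for learned per-part warps.
    """
    out = background.copy()
    for img, mask, (dy, dx) in parts:
        # Move the part and its mask by the same shift.
        shifted_img = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
        shifted_mask = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
        # Paste the shifted part wherever its mask is on.
        out = np.where(shifted_mask[..., None] > 0, shifted_img, out)
    return out
```

Because each part is handled separately, a wrong transformation shows up as one misplaced part rather than a globally garbled image, which is the debuggability advantage Amy mentions.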
