An internationally developed artificial intelligence (AI) system can alter the facial expressions of a person on video so that the movements match a new voice track. The AI is intended for use in films and video conferences to improve the viewing experience, according to an article on the University of Bath news page.
Similar technologies have been employed to create fake videos using the recorded videos and voices of real people. This AI takes that video editing capability to the next level.
Researchers from the University of Bath (UB) and their international peers created Deep Video Portraits, a multinational project to develop the next generation of AI-driven visual effects. They presented their proof of concept at the SIGGRAPH conference in August 2018.
Earlier video editing systems could only animate the interior of the face. The new AI, by contrast, can alter every part of the head that appears in the video.
Deep Video Portraits can change the position of the head, the eyes, and the eyebrows. It can even provide the right static video background for the moments when the head moves.
AI will alter the looks of an actor in a video to match someone else’s movements
“It works by using model-based 3D face performance capture to record the detailed movements of the eyebrows, mouth, nose, and head position of the dubbing actor in a video,” explained one of the Deep Video Portrait developers. “It then transposes these movements onto the ‘target’ actor in the film to accurately sync the lips and facial movements with the new audio.”
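The quoted description amounts to a two-stage pipeline: recover per-frame face parameters from the dubbing actor, then apply them to the target while preserving the target's identity. The following is a minimal, hypothetical sketch of that transfer step; all names (`FaceParams`, `transfer_performance`) are illustrative, and the real system uses model-based 3D face capture plus a neural rendering stage, not this simplified logic.

```python
# Hypothetical sketch of the motion-transfer step described in the quote.
# The actual Deep Video Portraits pipeline fits a parametric 3D face model
# to video and renders the result with a learned network; this only
# illustrates which parameters move from source to target.
from dataclasses import dataclass, replace

@dataclass
class FaceParams:
    """Per-frame face parameters recovered by monocular face capture."""
    identity: str        # whose face this is (kept from the target)
    head_pose: tuple     # rigid head rotation/translation
    expression: tuple    # eyebrow/mouth/nose deformation coefficients
    eye_gaze: tuple      # gaze direction

def transfer_performance(source: FaceParams, target: FaceParams) -> FaceParams:
    """Copy the dubbing actor's motion onto the target actor while
    keeping the target's identity, as the quoted description outlines."""
    return replace(
        target,
        head_pose=source.head_pose,
        expression=source.expression,
        eye_gaze=source.eye_gaze,
    )

# Example: the dubber's smile and head turn are applied to the film actor.
dubber = FaceParams("dubber", head_pose=(0.1, 0.0), expression=(0.8,), eye_gaze=(0.0,))
actor = FaceParams("actor", head_pose=(0.0, 0.0), expression=(0.0,), eye_gaze=(0.2,))
edited = transfer_performance(dubber, actor)
print(edited.identity, edited.expression)
```

The key design point, reflected in `transfer_performance`, is that identity stays with the target while pose, expression, and gaze come from the source; this is what lets the edited lips and head motion sync with the new audio.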
The AI borrowed a control system from computer graphics face animation. It is not yet capable of altering faces in real time, so it cannot be used during video conferences and virtual reality sessions. But its creators believe that it can soon be used by the visual entertainment industry.
For example, applying a foreign-language dub to a film involves a lot of post-production work to match the movement of the actor's mouth to the voice of the dubber. Despite these efforts, mismatches are the rule rather than the exception.
The researchers claimed that their Deep Video Portrait AI can transfer the position of the head, facial expressions, and eye movements of a dubber to the targeted actor. The result is touted to be very realistic.
AI-driven video editing system can be used in movies, VR, and video conferences
UB researcher Christian Richardt served as one of the coauthors of the study. An expert in motion capture technology, he brought up another possible use for the AI: editing the faces of actors during post-production.
Richardt cited movies like ‘The Curious Case of Benjamin Button’ that required a lot of computer graphics to be added after the shoot was over. Post-production takes up a great deal of time, money, and effort from numerous trained computer graphics artists.
Deep Video Portraits promises to make post-production much cheaper, easier, and faster. The AI could change the position of the actor's head, which is one of the hardest elements to alter, as well as other parts of the face.
A faster version of the AI with real-time editing capability could be used in virtual reality and remote video conferences. The video editing system would adjust the pose and gaze of the participants to make virtual conversations feel more natural and realistic.