How Facial Motion Capture Affects Facial Expression in 3D Animation
Introduction
Facial expressions and features are used within the animation industry to shape the personality of characters: animating a 3D face realistically enhances the animation process and lets the audience better understand a character's personality through its facial movements.
Animating 3D faces to achieve realism is a challenge: the animator must spend considerable time adjusting the model's facial movement by manually blocking keyframes one at a time. To tackle this, many modern technologies have been introduced to capture more accurate facial expressions and streamline the process of animating the face of a 3D virtual character; facial motion capture is one of them. Facial motion capture is the process of recording the movement of an actor's face into a digital database through specialized software: specialized equipment marks and tracks the actor's face, and the resulting data is linked to an existing 3D model to animate the character's face. Essentially, the real actor's expression is transferred onto the face of a 3D model, which can then be used in 3D animation, games, or film.
There are currently two main methods of facial motion capture: marker-based and markerless; both are commonly built on the Facial Action Coding System (FACS).
The traditional marker-based system applies physical markers to the actor's face and uses a helmet-mounted camera (or cameras) to record the movement of those markers. The markers are small dots stuck onto the actor's face, literally marking it. The captured marker data is used to generate high- or low-resolution meshes that follow the markers on the actor's face; once the data and the mesh are combined, the result can be applied to a specially rigged model in animation software for polishing and adjustment.
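The core of this retargeting step can be sketched in a few lines. The sketch below is a deliberate simplification: it assumes markers have already been tracked to 3D positions per frame, and it uses an invented one-to-one correspondence between markers and rig points (real rigs drive many vertices per marker through weighted influences).

```python
# Minimal marker-based retargeting sketch: per-frame marker offsets from a
# neutral pose are scaled and applied to corresponding rig points.
# The one-to-one marker-to-rig mapping is a simplification for illustration.

def marker_offsets(neutral, frame):
    """Displacement of each tracked marker from its neutral position."""
    return {m: tuple(f - n for f, n in zip(frame[m], neutral[m]))
            for m in neutral}

def retarget(rig_neutral, offsets, scale=1.0):
    """Apply scaled marker offsets to the rig's neutral point positions."""
    return {m: tuple(v + scale * d for v, d in zip(rig_neutral[m], off))
            for m, off in offsets.items()}

# Neutral and captured marker positions (x, y, z) on the actor's face.
neutral = {"lip_corner_L": (1.0, 0.0, 0.0), "brow_R": (0.5, 2.0, 0.0)}
frame   = {"lip_corner_L": (1.2, 0.3, 0.0), "brow_R": (0.5, 2.4, 0.0)}

offsets = marker_offsets(neutral, frame)
rig = {"lip_corner_L": (10.0, 0.0, 0.0), "brow_R": (5.0, 20.0, 0.0)}
posed = retarget(rig, offsets)
print(posed)  # rig points displaced by the captured marker offsets
```

The `scale` parameter hints at why a rigged model is still needed: the actor's face and the character's face rarely share proportions, so raw offsets must be remapped, not merely copied.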
Markerless facial motion capture records facial movement without any markers. Instead, the technology relies on the natural features of the human face, such as the eyes, lip corners, and nostrils, sometimes supplemented by relief data derived from image contrast, to map the face; a dynamic 3D scanner performs the surface tracking. The scanned data is used to train the internal 3D model within the specialized software; once that internal model is trained appropriately, newly captured data can be linked to a rigged model to match the actor's facial expression.
Nowadays, the facial expressions of virtual characters in film, games, and animation have improved greatly, becoming more accurate and realistic, partly due to advances in facial motion capture technology. The data captured from an actor's facial movement is increasingly accurate, and the resolution is now high enough that even micro-expressions can be recorded.
This article goes on to discuss the workflows of well-known films that used facial motion capture in their production pipelines, compare markerless motion capture against marker-based motion capture, consider how facial motion capture techniques could improve in the future, and explore ways I can use facial motion capture as a base in my future projects.
Findings
The film “The Curious Case of Benjamin Button” (2008) won the Academy Award for Best Achievement in Visual Effects; it used facial motion capture to transfer actor Brad Pitt's facial movement onto the head of the virtual character Benjamin Button. Brad Pitt, who played Benjamin Button, is nearly 20 years younger than the character. The visual effects department used a unique method and work process to achieve the final delicate and accurate render. The second film to be discussed is “Avengers: Infinity War” (2018), in which the villain Thanos is played by Josh Brolin. This film used advanced facial motion capture techniques to realize Thanos' facial expressions, and it illustrates how facial motion capture technology has improved in recent years.
The facial motion capture workflow of “The Curious Case of Benjamin Button” (2008) starts with phosphorescent makeup, which is used to reconstruct the geometry of the facial movement frame by frame, producing volumetric data with a high polygon count. The next step of the workflow is to scan the maquette: the artist Kazuhiro Tsuji created a photo-real sculpture simulating an aged version of Brad Pitt. This maquette served as the template to help the visual effects artists at Digital Domain create Benjamin's 3D head model, based on the Facial Action Coding System (FACS), which refers to a set of facial muscle movements that correspond to a displayed emotion [1]. The visual effects artists also created an oral system and an eye system, set up based on the physical condition of a real senior, to achieve a more realistic facial expression. This was the production team's workflow for the facial animation in “The Curious Case of Benjamin Button” (2008).
The workflow used in “Avengers: Infinity War” (2018) begins with facial motion tracking: after the visual effects team receives a time code, the actor Josh Brolin wears a helmet camera rig (HMC) filming in stereo at HD resolution at 48 fps, with tracking markers on his face. The software then creates a low-resolution mesh as a database, and the artists at Digital Domain use the studio's software Masquerade to interpolate the standard low-resolution mesh up to a high-resolution mesh. Afterward, a digital sculpting artist applies that mesh to a 3D model to form the final face of Thanos; this enables the mesh to carry Josh Brolin's facial movement onto the virtual character's face.
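The low-to-high-resolution step can be sketched generically. Masquerade's internals are proprietary, so the following is not Digital Domain's algorithm; it only shows one common idea, which is that each high-resolution vertex is expressed as a fixed weighted combination of nearby low-resolution vertices, so the low-resolution capture can drive the high-resolution sculpt frame by frame.

```python
# Generic upsampling sketch (not Digital Domain's actual, proprietary method):
# a fixed barycentric-style weight table maps each high-res vertex onto the
# deforming low-res capture mesh, frame by frame.

def upsample(low_res_frame, weights):
    """Position each high-res vertex from weighted low-res vertices.

    low_res_frame: list of (x, y) low-res vertex positions for one frame.
    weights: {hi_vertex_name: [(lo_vertex_index, weight), ...]},
             weights summing to 1 per high-res vertex.
    """
    out = {}
    for hi, combo in weights.items():
        x = sum(w * low_res_frame[i][0] for i, w in combo)
        y = sum(w * low_res_frame[i][1] for i, w in combo)
        out[hi] = (x, y)
    return out

# Low-res capture frame: three tracked vertices (2D for brevity).
low = [(0.0, 0.0), (2.0, 0.0), (0.0, 2.0)]
# One high-res vertex sits at the centroid of the low-res triangle.
weights = {"hi_0": [(0, 1 / 3), (1, 1 / 3), (2, 1 / 3)]}
print(upsample(low, weights)["hi_0"])  # centroid of the three low-res points
```

Because the weight table is computed once (on a neutral scan) and reused every frame, the expensive high-resolution detail tracks the cheap low-resolution capture automatically.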
Discussion
The specialized technology used in “The Curious Case of Benjamin Button” (2008) to make the face realistic is the use of phosphorescent makeup as a base: it yields a dense polygonal mesh, so the capture is more accurate and subtlety is preserved (see Fig 1). The extra oral and eye systems the visual effects artists set up helped push the final facial motion capture toward a realistic finish: the teeth and tongue movement could be adjusted separately and linked to the facial muscle movement while the character speaks, and the eye system captured delicate environmental influences on the eyeball, such as changes of light and shade.
Fig 1: Ed Ulbrich, digital-effects guru from Digital Domain, presenting in his TED talk.
The specialized technology used in “Avengers: Infinity War” (2018) was provided by Digital Domain, in its own innovative treatment of the footage and input data. Masquerade is Digital Domain's proprietary software for processing the meshes created from the facial motion capture database; it upgrades a low-resolution mesh to a high-resolution mesh. Masquerade dramatically improved the quality and subtlety of what Digital Domain is able to capture from an actor.
The Facial Action Coding System (FACS) is an important part of facial motion capture. The system refers to a set of facial muscle movements; using FACS, we can determine the emotion an actor displays. It combines four kinds of units: the main action units, head movement action units, eye movement action units, and emotion action units. In “Avengers: Infinity War” (2018), Digital Domain still used FACS to capture actor Josh Brolin's facial movement. Across the ten years between the two films, Digital Domain demonstrated two different solutions to facial motion capture, arriving at a more efficient workflow and highly accurate capture in “Avengers: Infinity War” (2018); there are many useful points to learn from both.
In addition to marker-based capture, markerless motion capture is growing fast as an alternative technique; mobile markerless motion capture software is appearing on the market that can capture live facial movement into high-quality geometry using a smartphone. Disney Research Studios has explored ways to raise the accuracy of markerless facial performance capture, presenting a method to accurately track the invisible jawline based on the visible skin surface, without the need for any markers on the actor. The core idea is to learn a non-linear mapping from the skin deformation to the underlying jaw motion on a dataset where ground-truth jaw poses have been acquired, and then to retarget the mapping to new subjects [2].
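The "learn from ground truth, then predict on new frames" structure of that method can be sketched in miniature. This is a drastic simplification of the paper's approach: here the "model" is one-dimensional piecewise-linear interpolation from a single invented skin measurement to a jaw angle, whereas the paper learns a rich non-linear regression from dense skin deformation.

```python
import bisect

# Drastically simplified sketch of the idea in [2]: learn a mapping from a
# visible skin measurement to the hidden jaw pose using frames where the
# ground-truth jaw pose is known, then evaluate it on new frames.
# All numbers are invented; the real method uses dense non-linear regression.

def train(samples):
    """samples: list of (skin_displacement, jaw_angle_deg) ground-truth pairs."""
    return sorted(samples)

def predict(model, skin):
    """Piecewise-linear interpolation of jaw angle from a skin measurement."""
    xs = [x for x, _ in model]
    i = bisect.bisect_left(xs, skin)
    if i == 0:
        return model[0][1]
    if i == len(model):
        return model[-1][1]
    (x0, y0), (x1, y1) = model[i - 1], model[i]
    t = (skin - x0) / (x1 - x0)
    return y0 + t * (y1 - y0)

model = train([(0.0, 0.0), (0.5, 10.0), (1.0, 25.0)])
print(predict(model, 0.25))  # jaw angle interpolated between ground-truth frames
```

The retargeting step in the paper corresponds to adapting such a learned mapping to a new subject whose skin-to-jaw relationship differs, rather than retraining from scratch.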
Conclusion
Both marker-based and markerless facial motion capture have advantages and disadvantages. Markers can obscure expression wrinkles and skin color changes, and in practice significant time and effort are required to place them accurately, which brings considerable inconvenience to the actor and film crew. Markerless facial motion capture, on the other hand, may not be as accurate as marker-based capture and can easily miss micro-expressions on the actor's face. Both methods impact the 3D animation industry greatly: they free animators from spending an extraordinary amount of time manually animating a character's face to follow an actor's expression, making the animator's work more efficient. I think markerless motion capture will take the larger market share in facial motion capture in the future: it requires lower-cost equipment than marker-based capture, its technology is improving constantly, and the process is simpler and less time-consuming than before, speeding up production. I am going to study markerless facial motion capture technology and its applications further, and add it to my 3D animation workflow. However, compared with traditional hand-animated models, facial motion capture methods are an enhancement rather than a replacement: new options for animators to experiment with.
Bibliography
[1] Bryn Farnsworth: Facial Action Coding System (FACS) – A Visual Guidebook. https://imotions.com/blog/facial-action-coding-system/
[2] Gaspard Zoss, Thabo Beeler, Markus Gross, Derek Bradley: Accurate Markerless Jaw Tracking for Facial Performance Capture. ACM Trans. Graph., Vol. 38, No. 4, Article 50 (July 2019)