Redundancy Reduction in 3D Facial Motion Capture Data for Animation
Daniela I. Wellein
Cristóbal Curio
Heinrich H. Bülthoff
Max Planck Institute for Biological Cybernetics, Tübingen, Germany
Figure 1: Example animation for facial motion analysis with the complete marker set (left) and a selected subset of markers (middle). Best marker set in terms of low error and low number of markers (right).
Research on the perception of dynamic faces often requires real-time animations with low latency. With an adaptation of principal feature analysis [Cohen et al. 2002], we can reduce the number of facial motion capture markers by 50% while retaining the overall quality of the resulting animation.
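As a rough illustration of the idea, and not the authors' implementation, the Python sketch below selects a representative marker subset in the spirit of principal feature analysis: project the marker trajectories onto the leading principal components, cluster the markers by their component loadings, and keep one marker per cluster. The input array layout, the number of components, and the use of NumPy/scikit-learn are assumptions made for this sketch.

import numpy as np
from sklearn.cluster import KMeans

def select_markers(trajectories, n_keep, n_components=10):
    # trajectories: (n_frames, n_markers, 3) marker positions with rigid
    # head motion already removed (hypothetical input format).
    n_frames, n_markers, _ = trajectories.shape
    X = trajectories.reshape(n_frames, -1)      # one row per frame
    X = X - X.mean(axis=0)
    # PCA via SVD; rows of Vt are principal directions in marker space.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    A = Vt[:n_components].T                     # loadings: one row per coordinate
    A = A.reshape(n_markers, -1)                # group x/y/z loadings per marker
    # Cluster markers by their loadings; keep the marker closest to each centroid.
    km = KMeans(n_clusters=n_keep, n_init=10, random_state=0).fit(A)
    keep = []
    for c in range(n_keep):
        members = np.where(km.labels_ == c)[0]
        d = np.linalg.norm(A[members] - km.cluster_centers_[c], axis=1)
        keep.append(int(members[np.argmin(d)]))
    return sorted(keep)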
1 Facial Animation System
[Curio et al. 2006] proposed a performance-driven facial animation
system. The semantics of an actor's movement are transferred from motion capture recordings to a corresponding model built from 3D scans.
This correspondence is established with parallel blendshapes. A 3D
head model is animated by morphing a linear combination of the
3D scanned blendshapes. The weights for the linear combination
are determined beforehand by facial motion analysis. The analysis consists of removing rigid head motion from the motion capture data, constructing the motion capture blendshapes, and estimating the blendshape weights for each frame.
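A minimal sketch of such an analysis-synthesis loop is given below, assuming marker arrays of shape (n_markers, 3) and vertex arrays of shape (n_vertices, 3); the Kabsch alignment, the non-negativity constraint on the weights, and all function and variable names are illustrative assumptions rather than details taken from [Curio et al. 2006].

import numpy as np
from scipy.optimize import nnls

def remove_rigid_motion(frame_markers, neutral_markers):
    # Align a frame to the neutral pose (Kabsch) to strip rigid head motion.
    P = frame_markers - frame_markers.mean(axis=0)
    Q = neutral_markers - neutral_markers.mean(axis=0)
    U, _, Vt = np.linalg.svd(P.T @ Q)
    if np.linalg.det(U @ Vt) < 0:       # avoid reflections
        Vt[-1] *= -1
    R = (U @ Vt).T
    return P @ R.T + neutral_markers.mean(axis=0)

def estimate_weights(frame_markers, neutral_markers, mocap_blendshapes):
    # Solve frame - neutral ~= sum_i w_i * (blendshape_i - neutral) with w_i >= 0.
    d = (frame_markers - neutral_markers).ravel()
    B = np.stack([(b - neutral_markers).ravel() for b in mocap_blendshapes], axis=1)
    w, _ = nnls(B, d)
    return w

def morph_head(weights, neutral_mesh, scan_blendshapes):
    # Apply the same weights to the 3D-scanned blendshapes of the head model.
    out = neutral_mesh.astype(float)
    for w_i, shape in zip(weights, scan_blendshapes):
        out += w_i * (shape - neutral_mesh)
    return out

In this sketch, a reduced marker set simply shrinks the rows of B in the per-frame weight fit, which is what makes redundancy reduction attractive for low-latency analysis.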