User:Hc496/sandbox/Data-Driven Character Animation

Data-driven character animation uses large amounts of motion capture data to create realistic, controllable character motion. Typically, given a corpus of motion capture data, a data-driven system constructs a directed graph that encapsulates the connections between motion clips. Once this motion graph is built, the system automatically finds a graph walk that meets the user's specification.

Data-driven character animation relies on vision and learning techniques to make it accessible to a wide audience of users in many different contexts. It is useful in texture synthesis, image matting and compositing, image-based rendering and modeling, appearance acquisition and modeling, and motion capture and representation.

Related areas include processing motion capture data, editing, retargeting, blending motions, motion graphs, creating animations, statistical models for generation, combining data with simulation, deforming bodies, interactive control, animation from video, facial animation, hand animation, crowd animation and navigation planning, and biomechanics. These areas are expanded below.

Introduction
Data-driven animation using motion capture data has become a standard practice in character animation. A number of techniques have been developed to add flexibility to captured human motion data by editing joint trajectories, warping motion paths, blending a family of parameterized motions, splicing motion segments, and adapting motion to new characters and environments.

Research
Motion capture has become a premiere technique for animation of humanlike characters. To facilitate its use, researchers have focused on the manipulation of data for editing, retargeting, blending, combining, deforming, and interactive control.

Processing motion capture data


Motion capture aims to record actions of human actors, and use that information to animate digital character models in 2D or 3D computer animation. Many technologies have been developed to take human performance sensor data and produce animations of articulated rigid bodies. These technologies involve using geometric modelling to translate rotation data to joint centers, a robust statistical procedure to determine the optimal skeleton size, and an inverse kinematics optimization to produce desired joint angle trajectories.

The traditional motion capture process consists of gathering data, real-time construction of a 3D rigid-body skeleton, and offline processing that produces the articulated object. To gather data, one can use either magnetic or optical sensors. To translate the rotation data to joint centers, the internal representation of rotations should use quaternions. Since motion capture data is noisy and often contains gross errors, the size of the skeleton should be estimated using robust statistical estimation, such as a one-step M-estimator with the Huber function, to reduce the influence of outliers.
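A minimal sketch of the robust-estimation idea above, reduced to estimating a single limb length from noisy per-frame marker distances. The data, lengths, and function names here are hypothetical illustrations, not the published procedure:

```python
# Hypothetical sketch: robustly estimating one limb length from noisy
# per-frame marker distances with a Huber M-estimator, so gross errors
# (e.g. swapped markers) have bounded influence on the estimate.
import numpy as np
from scipy.optimize import minimize_scalar

def huber(r, delta=1.0):
    """Huber function: quadratic near zero, linear in the tails."""
    a = np.abs(r)
    return np.where(a <= delta, 0.5 * r**2, delta * (a - 0.5 * delta))

def estimate_limb_length(distances, delta=1.0):
    """M-estimate: the length minimizing the total Huber loss."""
    res = minimize_scalar(lambda L: huber(distances - L, delta).sum(),
                          bounds=(0.0, distances.max()), method="bounded")
    return res.x

rng = np.random.default_rng(0)
true_length = 30.0                                 # e.g. forearm length in cm
clean = true_length + rng.normal(0.0, 0.5, 200)    # ordinary sensor noise
outliers = np.full(10, 80.0)                       # gross capture errors
distances = np.concatenate([clean, outliers])

robust = estimate_limb_length(distances)           # barely moved by outliers
naive = distances.mean()                           # least squares, pulled up
```

Unlike the least-squares mean, the Huber estimate stays near the true length even with ten gross outliers in the data.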

Inverse kinematics optimization can be used to produce the desired joint angle trajectories. Each frame of data is analyzed to produce a set of joint angles, which form reasonable piecewise-linear curves (useful for sub-sampling). The fitness function in this process, the deviation between the recorded data and the model, is non-convex and can be minimized using the quasi-Newton BFGS optimization technique.
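The per-frame optimization described above can be sketched for a toy planar two-link limb; the link lengths, targets, and warm-start scheme are assumptions for illustration, not the published system:

```python
# Hypothetical sketch: per-frame inverse kinematics for a planar two-link
# limb, minimizing the squared deviation between the model's end-effector
# and the recorded position with the quasi-Newton BFGS method.
import numpy as np
from scipy.optimize import minimize

L1, L2 = 1.0, 0.8   # assumed limb lengths (e.g. from the skeleton estimate)

def forward(angles):
    """Forward kinematics: joint angles -> end-effector position."""
    t1, t2 = angles
    elbow = np.array([L1 * np.cos(t1), L1 * np.sin(t1)])
    return elbow + np.array([L2 * np.cos(t1 + t2), L2 * np.sin(t1 + t2)])

def solve_frame(target, guess):
    """Fitness = squared deviation between model and recorded marker.
    Non-convex, so BFGS finds a local minimum; warm-starting from the
    previous frame keeps the solution on a consistent branch."""
    res = minimize(lambda a: np.sum((forward(a) - target) ** 2),
                   guess, method="BFGS")
    return res.x

# A short synthetic 'recorded' end-effector trajectory.
targets = [np.array([1.2, 0.5]), np.array([1.1, 0.6]), np.array([1.0, 0.7])]
angles = np.zeros(2)
trajectory = []
for t in targets:
    angles = solve_frame(t, angles)     # warm start from previous frame
    trajectory.append(angles.copy())
```

Solving each frame independently but warm-started yields the piecewise joint-angle curves the text mentions.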

Another technique uses magnetic motion capture data to determine the joint parameters of an articulated hierarchy, making it possible to determine limb lengths, joint locations, and sensor placement for a human subject without external measurements.

Joint angle plus root trajectories are used as input, although this format requires an inherent mapping from the raw data recorded by many popular motion capture set-ups. One solution maps the 3D marker position data recorded by optical motion capture systems to joint trajectories for a fixed limb-length skeleton using a forward dynamic model.

Editing
Spacetime constraints are a method for creating character animation. The animator specifies what the character has to do, how the motion should be performed, the character's physical structure, and the physical resources available to the character to accomplish the motion. This method has been applied in much motion editing research.

One method edits a pre-existing motion so that it meets new needs yet preserves as much of the original quality as possible. This approach enables the user to interactively position characters using direct manipulation, while a spacetime constraints solver finds these positions considering the entire motion.

Another technique adapts the existing motion of a human-like character so that it has desired features specified by a set of constraints. The problem can be formulated as a spacetime constraints problem, and the approach combines a hierarchical curve fitting technique with a new inverse kinematics solver.
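The spacetime idea, solving for the whole motion at once subject to the animator's constraints, can be sketched in one dimension. The pinned frames and the toy "total squared acceleration" objective are illustrative assumptions, not the exact published formulation:

```python
# Hypothetical 1D spacetime-constraints sketch: the animator pins positions
# at certain frames, and the solver finds all frames simultaneously by
# minimizing a physical objective (here, total squared acceleration).
import numpy as np
from scipy.optimize import minimize

n_frames = 20
pins = {0: 0.0, 10: 1.0, 19: 0.0}   # animator's constraints: frame -> position

def objective(x):
    accel = x[2:] - 2.0 * x[1:-1] + x[:-2]   # finite-difference acceleration
    return np.sum(accel ** 2)

constraints = [{"type": "eq", "fun": (lambda x, f=f, v=v: x[f] - v)}
               for f, v in pins.items()]

res = minimize(objective, np.zeros(n_frames),
               constraints=constraints, method="SLSQP")
motion = res.x   # a smooth trajectory passing through the pinned frames
```

Because the whole trajectory is optimized at once, frames between the pins bend smoothly rather than being set independently.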

Retargeting
Motion retargeting is simply editing existing motions to achieve a desired effect. Since motion is difficult to create from scratch using traditional methods, changing existing motions is a quicker way to obtain the goal motion. Techniques have been presented by Gleicher, Choi and Ko.

Motion graph
This is a method for creating realistic, controllable motion. Given a corpus of motion capture data, it constructs a directed graph, called a motion graph, that encapsulates connections among the clips in the database. The motion graph consists both of pieces of original motion and automatically generated transitions.
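The construction above can be sketched with toy data. Poses here are plain vectors and the distance threshold is arbitrary; a real system compares joint configurations over a window of frames and synthesizes blended transitions:

```python
# Hypothetical sketch of motion graph construction: nodes are frames of the
# captured clips, edges follow the original motion within a clip, and extra
# transition edges are added wherever frames of different clips are similar.
import numpy as np

def build_motion_graph(clips, threshold=0.1):
    """Adjacency list over (clip_index, frame_index) nodes."""
    graph = {}
    for ci, clip in enumerate(clips):
        for fi in range(len(clip)):
            graph[(ci, fi)] = []
            if fi + 1 < len(clip):
                graph[(ci, fi)].append((ci, fi + 1))   # original motion edge
    for ci, clip in enumerate(clips):                   # candidate transitions
        for cj, other in enumerate(clips):
            if ci == cj:
                continue
            for fi, pose in enumerate(clip):
                for fj, pose2 in enumerate(other):
                    if np.linalg.norm(pose - pose2) < threshold:
                        graph[(ci, fi)].append((cj, fj))
    return graph

# Two toy 'clips' whose middle poses nearly coincide, creating a transition.
walk = [np.array([0.0, 0.0]), np.array([0.5, 0.5]), np.array([1.0, 1.0])]
run  = [np.array([0.52, 0.5]), np.array([1.5, 0.5]), np.array([2.0, 0.0])]
graph = build_motion_graph([walk, run], threshold=0.1)
```

A graph walk that follows the transition edge, here from the middle of the walk clip into the run clip, is what lets the system string captured pieces into new, controllable motion.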

A system has been proposed that can synthesize novel motion sequences from a database of motion capture examples. It learns a statistical model from the captured data, which enables realistic synthesis of new movements by sampling the original captured sequences.

Real-time control of three-dimensional avatars (controllable, responsive animated characters) is an important problem in the context of computer games and virtual environments. Avatar animation and control is difficult, however, because a large repertoire of avatar behaviors must be made available, and the user must be able to select from this set of behaviors, possibly with a low-dimensional input device. A data-driven approach was used to obtain a rich set of avatar behaviors by collecting an extended, unlabeled sequence of motion data appropriate to the application.

Combining
Data-driven character animation can also be achieved by combining data with simulation.

Characters in many video games and training environments often perform tasks based on modified motion data, but responding to unpredicted events is also important in order to maintain realism. This problem of motion synthesis for interactive, humanlike characters has been approached by combining dynamic simulation with human motion capture data. It is also possible to track and modify human motion data with dynamic simulation.
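A common way to track motion data with simulation is a proportional-derivative (PD) controller that drives the simulated character toward the recorded trajectory. The one-dimensional point mass, gains, and reference curve below are illustrative assumptions, not any particular published system:

```python
# Hypothetical 1D sketch of tracking motion capture with dynamic simulation:
# a PD controller applies forces that pull a simulated unit mass toward the
# recorded trajectory, so physics (e.g. an unexpected push) can still act
# while the motion stays close to the data.
import numpy as np

dt, kp, kd = 0.01, 400.0, 40.0                 # illustrative step and gains
t = np.linspace(0.0, 2.0, 200)
reference = np.sin(np.pi * t)                  # the 'captured' trajectory
ref_vel = np.gradient(reference, t)            # finite-difference velocity

pos, vel = reference[0], ref_vel[0]
tracked = []
for target, target_vel in zip(reference, ref_vel):
    force = kp * (target - pos) + kd * (target_vel - vel)  # PD tracking force
    vel += force * dt                          # unit mass, explicit Euler
    pos += vel * dt
    tracked.append(pos)

error = np.max(np.abs(np.array(tracked) - reference))
```

Because the character is simulated rather than played back, external forces can perturb it, and the same controller pulls it back toward the captured motion.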

Deforming
To create the appearance of realistic human characters, researchers have also focused on body shape, surface appearance, deformation, etc. Using data from 3D scanners, a method has been shown to create models of human body shape.

Facial animation
Data-driven character animation can also be applied to the dead, whose appearance is of interest to many people. Research has therefore been done to "reanimate the dead", that is, to reconstruct expressive faces from skull data.

Hand animation
The human hand is a masterpiece of mechanical complexity, able to perform fine motor manipulations and powerful work alike. Designing an animatable human hand model that features the abilities of the archetype created by nature requires a great deal of anatomical detail to be modeled. A human hand model was proposed with underlying anatomical structure based on anatomical data.

Crowd animation and navigation planning
Using motion capture data and probabilistic roadmaps, a scheme for planning natural-looking locomotion of a biped figure was proposed to facilitate rapid motion prototyping and task-level motion generation.
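The probabilistic-roadmap (PRM) part of such a planner can be sketched in 2D. The obstacle, sampling counts, and connection radius are made-up assumptions, and only the planning step is shown; fitting motion clips along the resulting path is the separate, data-driven half:

```python
# Hypothetical PRM sketch: sample random locations in free space, connect
# nearby samples whose straight-line link avoids an obstacle, then search
# the roadmap for a route from start to goal with breadth-first search.
import numpy as np
from collections import deque

rng = np.random.default_rng(1)
obstacle_center, obstacle_radius = np.array([0.5, 0.5]), 0.2

def collision_free(p, q, steps=20):
    """Check segment p-q against the disc obstacle by dense sampling."""
    for s in np.linspace(0.0, 1.0, steps):
        if np.linalg.norm(p + s * (q - p) - obstacle_center) < obstacle_radius:
            return False
    return True

start, goal = np.array([0.05, 0.05]), np.array([0.95, 0.95])
samples = [start, goal] + [rng.random(2) for _ in range(150)]

# Roadmap: connect pairs of samples that are close and collision-free.
edges = {i: [] for i in range(len(samples))}
for i in range(len(samples)):
    for j in range(i + 1, len(samples)):
        if (np.linalg.norm(samples[i] - samples[j]) < 0.25
                and collision_free(samples[i], samples[j])):
            edges[i].append(j)
            edges[j].append(i)

# Breadth-first search from start (node 0) to goal (node 1).
parent, queue = {0: None}, deque([0])
while queue:
    node = queue.popleft()
    for nb in edges[node]:
        if nb not in parent:
            parent[nb] = node
            queue.append(nb)

path, node = [], (1 if 1 in parent else None)
while node is not None:
    path.append(node)
    node = parent[node]
path.reverse()   # sequence of roadmap nodes from start to goal
```

In the locomotion setting, each roadmap edge would then be realized by a captured walking clip, which is what makes the planned path look natural.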