Body-mounted cameras view surroundings, capture subject's motions

Aug. 10, 2011

Pittsburgh, PA--A new approach to capturing live human motion has been developed by researchers at Disney Research, Pittsburgh (DRP) and Carnegie Mellon University (CMU). Intended to provide the information needed to animate digital models of actors for movies, the technique reverses the traditional motion-capture setup. Rather than using a set of fixed external cameras to track the subject's movements, a number of portable wireless cameras are strapped to various locations on the subject's body: as he or she moves about, the cameras look out at the fixed world around them.

The wearable camera system makes it possible to reconstruct both the relative and global motions of the subject, thanks to a process called structure from motion (SfM). Takeo Kanade of CMU, a pioneer in computer vision, developed SfM 20 years ago to determine the 3D structure of an object by analyzing images from a camera moving around the object, or from a fixed camera as the object moves past it.
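To give a feel for what SfM computes, the minimal two-view sketch below (not the DRP/CMU code) uses OpenCV to estimate a moving camera's relative pose and triangulate a sparse point cloud from two images of a static scene; the image filenames and the intrinsic matrix K are placeholders.

```python
# Minimal two-view structure-from-motion sketch (illustrative, not the DRP/CMU code).
# Given two images of a static scene from a moving camera, estimate the camera's
# relative motion (R, t) and triangulate a sparse 3D point cloud.
import cv2
import numpy as np

# Placeholder inputs: hypothetical image files and an assumed intrinsic matrix K.
img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])

# Detect and match local features between the two views.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate the essential matrix and recover the relative camera pose.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)
_, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

# Triangulate inlier correspondences into sparse 3D scene points.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([R, t])
inliers = mask.ravel() > 0
pts4d = cv2.triangulatePoints(P1, P2, pts1[inliers].T, pts2[inliers].T)
pts3d = (pts4d[:3] / pts4d[3]).T
print("Recovered rotation:\n", R, "\nSparse 3D points:", pts3d.shape)
```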

Sparse 3D info on surroundings
In the new motion-capture setup, SfM is not used primarily to analyze objects in a person's surroundings, but to estimate the pose of the cameras mounted on the person. Velcro was used to mount 20 lightweight cell-phone-type cameras on the limbs and trunk of each subject, with each camera calibrated with respect to a reference structure. Each person then performed a range-of-motion exercise that allowed the system to automatically build a digital skeleton and estimate the positions of the cameras with respect to that skeleton.
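The article does not spell out how each camera is calibrated against the reference structure, but conceptually this kind of extrinsic calibration can be done with a perspective-n-point solve against a known target. The OpenCV sketch below assumes a checkerboard as a stand-in reference and a hypothetical image filename; the intrinsics K are also assumed.

```python
# Sketch of extrinsic calibration of one body-mounted camera against a known
# reference structure (a checkerboard is assumed here as a stand-in reference).
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],   # assumed, previously calibrated intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                    # assume negligible lens distortion

# Known 3D layout of the reference: a 9x6 checkerboard with 25 mm squares.
square = 0.025
obj_pts = np.zeros((9 * 6, 3), np.float32)
obj_pts[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * square

img = cv2.imread("camera07_reference_view.png", cv2.IMREAD_GRAYSCALE)  # placeholder
found, corners = cv2.findChessboardCorners(img, (9, 6))
if found:
    # solvePnP gives the reference structure's pose in the camera frame;
    # inverting that transform gives the camera's pose relative to the reference.
    ok, rvec, tvec = cv2.solvePnP(obj_pts, corners, K, dist)
    R, _ = cv2.Rodrigues(rvec)
    cam_t = -R.T @ tvec
    print("Camera position relative to reference:", cam_t.ravel())
```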

The SfM software was then used to estimate the rough position and orientation of the limbs as the subject moved through an environment, and to collect sparse 3D information about the environment to provide context for the captured motion. The rough limb positions and orientations serve as an initial guess for a refinement step that optimizes the configuration of the body and its location in the environment.
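The researchers' actual refinement is more sophisticated, but the underlying idea of refining a rough pose estimate can be sketched as a small least-squares problem: given noisy camera-position estimates and known camera attachment offsets along a two-bone limb, solve for the joint angles that best explain them. The skeleton, offsets, and "measurements" below are purely illustrative.

```python
# Toy refinement sketch: fit joint angles of a planar two-bone limb so that the
# predicted positions of two body-mounted cameras match noisy position estimates.
import numpy as np
from scipy.optimize import least_squares

BONE_LENGTHS = np.array([0.30, 0.25])   # upper arm, forearm (m), illustrative
CAM_OFFSETS = np.array([0.15, 0.12])    # camera distance along each bone (m)

def forward(angles):
    """Predict the 2D positions of the two cameras for given joint angles."""
    a1, a2 = angles
    elbow = BONE_LENGTHS[0] * np.array([np.cos(a1), np.sin(a1)])
    cam1 = CAM_OFFSETS[0] * np.array([np.cos(a1), np.sin(a1)])
    cam2 = elbow + CAM_OFFSETS[1] * np.array([np.cos(a1 + a2), np.sin(a1 + a2)])
    return np.concatenate([cam1, cam2])

def residuals(angles, measured):
    return forward(angles) - measured

# Rough estimate used as the initial guess, plus noisy "measured" camera positions.
initial_guess = np.array([0.5, 0.3])
true_angles = np.array([0.62, 0.41])
measured = forward(true_angles) + np.random.normal(0, 0.005, 4)

result = least_squares(residuals, initial_guess, args=(measured,))
print("Refined joint angles:", result.x)
```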

The great freedom of the DRP/CMU technique allows motion capture to occur almost anywhere--for example, during a stroll or run through a neighborhood, or even while swinging on monkey bars. And because the 3D data acquired by SfM relies on the natural surroundings, there is no need to set up any fixed external markers.

Motion capture for everyone
As video cameras become ever smaller and cheaper, "I think anyone will be able to do motion capture in the not-so-distant future," said Takaaki Shiratori, a postdoctoral associate at DRP, who presented the new technique this past Monday (August 8) at SIGGRAPH 2011, the International Conference on Computer Graphics and Interactive Techniques (Vancouver, BC, Canada).

The quality of motion capture from body-mounted cameras does not yet match the fidelity of traditional motion capture, Shiratori said, but it should improve as the resolution of small video cameras increases. In addition, the technique requires a significant amount of computational power; a minute of motion capture can currently require an entire day to process. Future work will include efforts to find computational shortcuts, for example through parallel processing.
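As one rough illustration of such a shortcut (not a description of the researchers' plans), frames or chunks of frames whose processing is independent can be distributed across CPU cores; the per-frame function below is only a placeholder for the expensive work.

```python
# Sketch: distribute per-frame processing across CPU cores.
# process_frame is a placeholder for heavy per-frame work (matching, SfM, refinement).
from multiprocessing import Pool

def process_frame(frame_index):
    # Placeholder computation standing in for the real per-frame pipeline.
    return frame_index, f"pose estimate for frame {frame_index}"

if __name__ == "__main__":
    frame_indices = range(1800)   # e.g., one minute of video at 30 frames/s
    with Pool() as pool:
        results = pool.map(process_frame, frame_indices)
    print(f"Processed {len(results)} frames")
```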

For more info and a video, see the DRP/CMU project website.

About the Author

John Wallace | Senior Technical Editor (1998-2022)

John Wallace was with Laser Focus World for nearly 25 years, retiring in late June 2022. He obtained a bachelor's degree in mechanical engineering and physics at Rutgers University and a master's in optical engineering at the University of Rochester. Before becoming an editor, John worked as an engineer at RCA, Exxon, Eastman Kodak, and GCA Corporation.
