Motion capture

Motion capture (sometimes referred to as mo-cap or mocap, for short) is the process of recording the movement of objects or people. It is used in military, entertainment, sports, and medical applications, and for the validation of computer vision[3] and robotics.[4] In films, television shows and video games, motion capture refers to recording actions of human actors and using that information to animate digital character models in 2D or 3D computer animation.[5][6][7] When it includes face and fingers or captures subtle expressions, it is often referred to as performance capture.[8] In many fields, motion capture is sometimes called motion tracking, but in filmmaking and games, motion tracking usually refers more to match moving.

Motion capture of two pianists' right hands playing the same piece (slow motion, no sound)[1]
Two repetitions of a walking sequence recorded using motion capture[2]

In motion capture sessions, movements of one or more actors are sampled many times per second. Whereas early techniques used images from multiple cameras to calculate 3D positions,[9] often the purpose of motion capture is to record only the movements of the actor, not their visual appearance. This animation data is mapped to a 3D model so that the model performs the same actions as the actor. This process may be contrasted with the older technique of rotoscoping.

Camera movements can also be motion captured so that a virtual camera in the scene will pan, tilt or dolly around the stage driven by a camera operator while the actor is performing. At the same time, the motion capture system can capture the camera and props as well as the actor's performance. This allows the computer-generated characters, images and sets to have the same perspective as the video images from the camera. A computer processes the data and displays the movements of the actor, providing the desired camera positions in terms of objects in the set. Retroactively obtaining camera movement data from the captured footage is known as match moving or camera tracking.

The first virtual actor animated by motion-capture was produced in 1993 by Didier Pourcel and his team at Gribouille. It involved "cloning" the body and face of French comedian Richard Bohringer, and then animating it with still-nascent motion-capture tools.

Advantages


Motion capture offers several advantages over traditional computer animation of a 3D model:

  • Low latency, close to real-time results can be obtained. In entertainment applications, this can reduce the costs of keyframe-based animation.[10] The Hand Over technique is an example of this.
  • The amount of work does not vary with the complexity or length of the performance to the same degree as when using traditional techniques. This allows many tests to be done with different styles or deliveries, giving a distinct personality that is only limited by the talent of the actor.
  • Complex movement and realistic physical interactions such as secondary motions, weight, and exchange of forces can be easily recreated in a physically accurate manner.[11]
  • The amount of animation data that can be produced within a given time is extremely large when compared to traditional animation techniques. This contributes to both cost-effectiveness and meeting production deadlines.[12]
  • Potential for free software and third-party solutions reducing its costs.

Disadvantages

  • Specific hardware and special software programs are required to obtain and process the data.
  • The cost of the software, equipment and personnel required can be prohibitive for small productions.
  • The capture system may have specific requirements for the space in which it is operated, depending on camera field of view or magnetic distortion.
  • When problems occur, it is often easier to reshoot the scene than to try to manipulate the data. Only a few systems allow real-time viewing of the data to decide if the take needs to be redone.
  • The initial results are limited to what can be performed within the capture volume without extra editing of the data.
  • Movement that does not follow the laws of physics cannot be captured.
  • Traditional animation techniques, such as added emphasis on anticipation and follow through, secondary motion or manipulating the shape of the character, as with squash and stretch animation techniques, must be added later.
  • If the computer model has different proportions from the capture subject, artifacts may occur. For example, if a cartoon character has large, oversized hands, these may intersect the character's body if the human performer is not careful with their physical motion.

Applications

 
Motion capture performers from Buckinghamshire New University

Motion capture has many applications. The most common are video games, movies, and movement capture; the technology also has research applications, such as robotics development at Purdue University.

Video Games


Video games often use motion capture to animate athletes, martial artists, and other in-game characters.[13][14] As early as 1988, an early form of motion capture was used to animate the 2D player characters of Martech's video game Vixen (performed by model Corinne Russell)[15] and Magical Company's 2D arcade fighting game Last Apostle Puppet Show (to animate digitized sprites).[16] Motion capture was later notably used to animate the 3D character models in the Sega Model arcade games Virtua Fighter (1993)[17][18] and Virtua Fighter 2 (1994).[19] In mid-1995, developer/publisher Acclaim Entertainment had its own in-house motion capture studio built into its headquarters.[14] Namco's 1995 arcade game Soul Edge used passive optical system markers for motion capture.[20] Motion capture has also been used to animate characters in games such as Naughty Dog's Crash Bandicoot, Insomniac Games' Spyro the Dragon, and Rare's Dinosaur Planet.

Robotics


Indoor positioning is another application for optical motion capture systems. Robotics researchers often use motion capture systems when developing and evaluating control, estimation, and perception algorithms and hardware. In outdoor spaces, centimeter-level accuracy is possible by combining the Global Navigation Satellite System (GNSS) with Real-Time Kinematics (RTK). However, accuracy degrades significantly when there is no line of sight to the satellites, such as in indoor environments. The majority of vendors selling commercial optical motion capture systems provide accessible open-source drivers that integrate with the popular Robot Operating System (ROS) framework, allowing researchers and developers to test their robots effectively during development.
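
As a minimal sketch of such an integration, the ROS 1 node below subscribes to rigid-body poses in the form many vendor bridges (e.g. vrpn_client_ros) publish them; the topic name is an assumption and should be checked against the specific driver's documentation.

```python
# Minimal ROS 1 node consuming rigid-body poses from a motion capture
# driver. The topic name below is an assumption; real drivers document
# their own topic layout.
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg: PoseStamped) -> None:
    p = msg.pose.position
    rospy.loginfo("mocap position: x=%.3f y=%.3f z=%.3f", p.x, p.y, p.z)

if __name__ == "__main__":
    rospy.init_node("mocap_listener")
    rospy.Subscriber("/vrpn_client_node/robot/pose", PoseStamped, on_pose)
    rospy.spin()
```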

In the field of aerial robotics research, motion capture systems are also widely used for positioning. Regulations on airspace usage limit the feasibility of outdoor experiments with Unmanned Aerial Systems (UAS); indoor tests can circumvent such restrictions. Many labs and institutions around the world have built indoor motion capture volumes for this purpose.

Purdue University houses the world's largest indoor motion capture system, inside the Purdue UAS Research and Test (PURT) facility. PURT is dedicated to UAS research and provides a tracking volume of 600,000 cubic feet using 60 motion capture cameras.[21] The optical motion capture system is able to track targets in its volume with millimeter accuracy, effectively providing the true position of targets, the "ground truth" baseline in research and development. Results derived from other sensors and algorithms can then be compared to the ground-truth data to evaluate their performance.
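
Such an evaluation typically reduces to an error metric over time-aligned trajectories. The sketch below computes a root-mean-square position error against mocap ground truth; the trajectories here are synthetic stand-ins.

```python
# Sketch: compare an estimator's trajectory against mocap "ground truth"
# via root-mean-square position error. Both trajectories are assumed to
# be time-aligned arrays of shape (N, 3), in meters.
import numpy as np

def rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
    errors = np.linalg.norm(estimated - ground_truth, axis=1)
    return float(np.sqrt(np.mean(errors ** 2)))

gt = np.cumsum(np.random.randn(1000, 3) * 0.01, axis=0)  # stand-in truth
est = gt + np.random.randn(1000, 3) * 0.02               # noisy estimate
print(f"RMSE: {rmse(est, gt):.4f} m")
```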

Movies


Movies use motion capture for CGI effects, in some cases replacing traditional cel animation, and for completely CGI creatures, such as Gollum, The Mummy, King Kong, Davy Jones from Pirates of the Caribbean, the Na'vi from the film Avatar, and Clu from Tron: Legacy. The Great Goblin, the three Stone-trolls, many of the orcs and goblins in the 2012 film The Hobbit: An Unexpected Journey, and Smaug were created using motion capture.

The film Batman Forever (1995) used some motion capture for certain visual effects. Warner Bros. had acquired motion capture technology from arcade video game company Acclaim Entertainment for use in the film's production.[22] Acclaim's 1995 video game of the same name also used the same motion capture technology to animate the digitized sprite graphics.[23]

Star Wars: Episode I – The Phantom Menace (1999) was the first feature-length film to include a main character created using motion capture (that character being Jar Jar Binks, played by Ahmed Best), and the Indian-American film Sinbad: Beyond the Veil of Mists (2000) was the first feature-length film made primarily with motion capture, although many character animators also worked on the film, which had a very limited release. 2001's Final Fantasy: The Spirits Within was the first widely released movie to be made primarily with motion capture technology. Despite its poor box-office performance, supporters of motion capture technology took notice. The 1990 film Total Recall had already used the technique, in the scene with the X-ray scanner and the skeletons.

The Lord of the Rings: The Two Towers was the first feature film to utilize a real-time motion capture system. This method streamed the actions of actor Andy Serkis into the computer-generated imagery skin of Gollum / Smeagol as it was being performed.[24]

Storymind Entertainment, an independent Ukrainian studio, created the neo-noir third-person shooter video game My Eyes On You, using motion capture to animate its main character, Jordan Adalien, as well as non-playable characters.[25]

Of the three nominees for the 2006 Academy Award for Best Animated Feature, two (Monster House and the winner, Happy Feet) used motion capture; only Disney·Pixar's Cars was animated without it. In the ending credits of Pixar's film Ratatouille, a stamp appears labelling the film as "100% Genuine Animation – No Motion Capture!"

Since 2001, motion capture has been used extensively to simulate or approximate the look of live-action theater, with nearly photorealistic digital character models. The Polar Express used motion capture to allow Tom Hanks to perform as several distinct digital characters (for which he also provided the voices). The 2007 adaptation of the saga Beowulf animated digital characters whose appearances were based in part on the actors who provided their motions and voices. James Cameron's highly popular Avatar used this technique to create the Na'vi that inhabit Pandora. The Walt Disney Company produced Robert Zemeckis's A Christmas Carol using this technique. In 2007, Disney acquired Zemeckis's ImageMovers Digital, which produced motion capture films, but closed it in 2011 after the box-office failure of Mars Needs Moms.

Television series produced entirely with motion capture animation include Laflaque in Canada, Sprookjesboom and Cafe de Wereld [nl] in The Netherlands, and Headcases in the UK.

Movement Capture


Virtual reality and Augmented reality providers, such as uSens and Gestigon, allow users to interact with digital content in real time by capturing hand motions. This can be useful for training simulations, visual perception tests, or performing virtual walk-throughs in a 3D environment. Motion capture technology is frequently used in digital puppetry systems to drive computer-generated characters in real time.

Gait analysis is one application of motion capture in clinical medicine. Techniques allow clinicians to evaluate human motion across several biomechanical factors, often while streaming this information live into analytical software.

One innovative use is pose detection, which can empower patients during post-surgical recovery or rehabilitation after injuries. This approach enables continuous monitoring, real-time guidance, and individually tailored programs to enhance patient outcomes.[26]

Some physical therapy clinics utilize motion capture as an objective way to quantify patient progress.[27]

During the filming of James Cameron's Avatar, all of the scenes involving motion capture were directed in real time using Autodesk MotionBuilder software to render a screen image that allowed the director and the actor to see what they would look like in the finished movie, making it easier to direct the film as it would be seen by the viewer. This method allowed views and angles not possible from a pre-rendered animation. Cameron was so proud of his results that he invited Steven Spielberg and George Lucas on set to view the system in action.

In Marvel's The Avengers, Mark Ruffalo used motion capture so he could play his character the Hulk, rather than have him be only CGI as in previous films, making Ruffalo the first actor to play both the human and the Hulk versions of Bruce Banner.

FaceRig software uses facial recognition technology from ULSee Inc. to map a player's facial expressions, and body tracking technology from Perception Neuron to map body movement, onto a 2D or 3D character's motion on-screen.[28][29]

During the 2016 Game Developers Conference in San Francisco, Epic Games demonstrated full-body motion capture live in Unreal Engine. The whole scene, from the upcoming game Hellblade about a woman warrior named Senua, was rendered in real time. The keynote[30] was a collaboration between Unreal Engine, Ninja Theory, 3Lateral, Cubic Motion, IKinema and Xsens.

In 2020, the two-time Olympic figure skating champion Yuzuru Hanyu graduated from Waseda University. In his thesis, using data provided by 31 sensors placed on his body, he analysed his jumps. He evaluated the use of the technology both to improve the scoring system and to help skaters improve their jumping technique.[31][32] In March 2021, a summary of the thesis was published in an academic journal.[33]

Methods and systems

 
Reflective markers attached to skin to identify body landmarks and the 3D motion of body segments
 
Silhouette tracking

Motion tracking or motion capture started as a photogrammetric analysis tool in biomechanics research in the 1970s and 1980s, and expanded into education, training, sports and recently computer animation for television, cinema, and video games as the technology matured. Since the 20th century, performers have typically had to wear markers near each joint so that the motion can be identified from the positions or angles between the markers. Acoustic, inertial, LED, magnetic or reflective markers, or combinations of any of these, are tracked, optimally at a sample rate of at least twice the frequency of the desired motion. The resolution of the system matters in both the spatial and the temporal dimension, as motion blur causes almost the same problems as low resolution. Since the beginning of the 21st century, and because of the rapid growth of technology, new methods have been developed. Most modern systems can extract the silhouette of the performer from the background and then calculate all joint angles by fitting a mathematical model to the silhouette. For movements that produce no visible change in the silhouette, hybrid systems are available that combine markers and silhouette tracking, using fewer markers.[citation needed] In robotics, some motion capture systems are based on simultaneous localization and mapping.[34]
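
The silhouette-extraction step mentioned above can be sketched with OpenCV's stock background subtractor; fitting a skeletal model to the resulting mask is a separate optimization problem not shown here, and the camera index is an illustrative assumption.

```python
# Sketch of silhouette extraction using OpenCV's built-in background
# subtractor. Model fitting to the silhouette is not shown.
import cv2

cap = cv2.VideoCapture(0)  # any camera index or video file path
subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = subtractor.apply(frame)                       # performer mask
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, None)  # remove speckle
    cv2.imshow("silhouette", mask)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```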

Optical systems


Optical systems utilize data captured from image sensors to triangulate the 3D position of a subject between two or more cameras calibrated to provide overlapping projections. Data acquisition is traditionally implemented using special markers attached to an actor; however, more recent systems are able to generate accurate data by tracking surface features identified dynamically for each particular subject. Tracking a large number of performers or expanding the capture area is accomplished by the addition of more cameras. These systems produce data with three degrees of freedom for each marker, and rotational information must be inferred from the relative orientation of three or more markers; for instance, shoulder, elbow and wrist markers providing the angle of the elbow. Newer hybrid systems combine inertial sensors with optical sensors to reduce occlusion, increase the number of users, and improve the ability to track without manual data cleanup.[35]
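
As a rough illustration of the triangulation step described above, the sketch below uses OpenCV to intersect rays from two calibrated cameras; the projection matrices and pixel detections are made-up stand-ins for values a real calibration and marker detector would provide.

```python
# Sketch: triangulating one marker's 3D position from two calibrated
# cameras. P1 and P2 are 3x4 projection matrices; the pixel coordinates
# are hypothetical detections of the same marker in both views.
import cv2
import numpy as np

P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera 1 at origin
P2 = np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])  # 0.5 m baseline

pt1 = np.array([[320.0], [240.0]])  # marker pixel in camera 1
pt2 = np.array([[300.0], [240.0]])  # marker pixel in camera 2

X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)  # homogeneous 4-vector
X = (X_h[:3] / X_h[3]).ravel()                 # Euclidean 3D position
print("marker position:", X)
```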

Passive markers

 
A dancer wearing a suit used in an optical motion capture system
 
Markers are placed at specific points on an actor's face during facial optical motion capture.

Passive optical systems use markers coated with a retroreflective material to reflect light that is generated near the camera's lens. The camera's threshold can be adjusted so only the bright reflective markers will be sampled, ignoring skin and fabric.

The centroid of the marker is estimated as a position within the two-dimensional image that is captured. The grayscale value of each pixel can be used to provide sub-pixel accuracy by finding the centroid of the Gaussian.
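
A minimal sketch of that sub-pixel step, assuming an already-thresholded grayscale patch around one marker and using an intensity-weighted centroid rather than a full Gaussian fit:

```python
# Sketch: sub-pixel marker localization via the intensity-weighted
# centroid of bright pixels in a small grayscale patch.
import numpy as np

def marker_centroid(patch: np.ndarray, threshold: int = 200):
    """Return the (x, y) centroid of bright pixels with sub-pixel precision."""
    weights = np.where(patch >= threshold, patch.astype(float), 0.0)
    total = weights.sum()
    ys, xs = np.indices(patch.shape)
    return (float((xs * weights).sum() / total),
            float((ys * weights).sum() / total))

patch = np.zeros((9, 9), dtype=np.uint8)
patch[3:6, 4:7] = 255            # synthetic marker blob
print(marker_centroid(patch))    # centroid can land between pixel centers
```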

An object with markers attached at known positions is used to calibrate the cameras and obtain their positions, and the lens distortion of each camera is measured. If two calibrated cameras see a marker, a three-dimensional fix can be obtained. Typically a system will consist of around 2 to 48 cameras. Systems of over three hundred cameras exist to try to reduce marker swap. Extra cameras are required for full coverage around the capture subject and multiple subjects.

Vendors provide constraint software to reduce the problem of marker swapping, since all passive markers appear identical. Unlike active marker systems and magnetic systems, passive systems do not require the user to wear wires or electronic equipment.[36] Instead, the markers are typically rubber balls covered with reflective tape, which needs to be replaced periodically. The markers are usually attached directly to the skin (as in biomechanics), or they are velcroed to a performer wearing a full-body spandex/lycra suit designed specifically for motion capture. This type of system can capture large numbers of markers at frame rates usually around 120 to 160 fps, although by lowering the resolution and tracking a smaller region of interest they can track as high as 10,000 fps.

Active marker

 
Body motion capture

Active optical systems triangulate positions by illuminating one LED at a time very quickly or multiple LEDs with software to identify them by their relative positions, somewhat akin to celestial navigation. Rather than reflecting light back that is generated externally, the markers themselves are powered to emit their own light. Since the inverse square law provides one quarter of the power at two times the distance, this can increase the distances and volume for capture. This also enables a high signal-to-noise ratio, resulting in very low marker jitter and a resulting high measurement resolution (often down to 0.1 mm within the calibrated volume).

The TV series Stargate SG-1 produced episodes using an active optical system for the VFX, allowing the actor to walk around props that would make motion capture difficult for other non-active optical systems.[citation needed]

ILM used active markers in Van Helsing to allow capture of Dracula's flying brides on very large sets, similar to Weta's use of active markers in Rise of the Planet of the Apes. The power to each marker can be provided sequentially in phase with the capture system, providing a unique identification of each marker for a given capture frame, at a cost to the resultant frame rate. The ability to identify each marker in this manner is useful in real-time applications. The alternative is to identify markers algorithmically, which requires extra processing of the data.

Positions can also be found using colored LED markers. In these systems, each color is assigned to a specific point on the body.

One of the earliest active marker systems, in the 1980s, was a hybrid passive-active mocap system with rotating mirrors and colored glass reflective markers, which used masked linear array detectors.

Time modulated active marker

 
A high-resolution uniquely identified active marker system with 3,600 × 3,600 resolution at 960 hertz providing real time submillimeter positions

Active marker systems can further be refined by strobing one marker on at a time, or tracking multiple markers over time and modulating the amplitude or pulse width to provide marker ID. 12-megapixel spatial resolution modulated systems show more subtle movements than 4-megapixel optical systems by having both higher spatial and temporal resolution. Directors can see the actor's performance in real-time, and watch the results on the motion capture-driven CG character. The unique marker IDs reduce the turnaround, by eliminating marker swapping and providing much cleaner data than other technologies. LEDs with onboard processing and radio synchronization allow motion capture outdoors in direct sunlight while capturing at 120 to 960 frames per second due to a high-speed electronic shutter. Computer processing of modulated IDs allows less hand cleanup or filtered results for lower operational costs. This higher accuracy and resolution requires more processing than passive technologies, but the additional processing is done at the camera to improve resolution via subpixel or centroid processing, providing both high resolution and high speed. These motion capture systems typically cost $20,000 for an eight-camera, 12-megapixel spatial resolution 120-hertz system with one actor.
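
As an illustration of how a modulated blink pattern can carry a marker ID, the sketch below treats each frame's observed brightness as one bit of an 8-bit code; the code length and threshold are assumptions for the example, not any vendor's actual protocol.

```python
# Sketch: recovering a marker ID from an amplitude-modulated blink
# pattern observed over consecutive frames. Real systems synchronize the
# pattern to the camera shutter; the 8-bit code here is an assumption.
import numpy as np

def decode_marker_id(brightness: np.ndarray, threshold: float) -> int:
    """Treat each frame's brightness as one bit, most significant first."""
    bits = (brightness > threshold).astype(int)
    return int("".join(map(str, bits)), 2)

observed = np.array([0.9, 0.1, 0.8, 0.9, 0.2, 0.1, 0.85, 0.15])  # 8 frames
print(decode_marker_id(observed, threshold=0.5))  # 0b10110010 -> 178
```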

 
IR sensors can compute their location when lit by mobile multi-LED emitters, e.g. in a moving car. With an ID per marker, these sensor tags can be worn under clothing and tracked at 500 Hz in broad daylight.

Semi-passive imperceptible marker


One can reverse the traditional approach based on high-speed cameras. Systems such as Prakash use inexpensive multi-LED high-speed projectors. The specially built multi-LED IR projectors optically encode the space. Instead of retro-reflective or active light-emitting diode (LED) markers, the system uses photosensitive marker tags to decode the optical signals. By attaching tags with photo sensors to scene points, the tags can compute not only their own locations, but also their own orientation, incident illumination, and reflectance.

These tracking tags work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. The system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Since the system eliminates a high-speed camera and the corresponding high-speed image stream, it requires significantly lower data bandwidth. The tags also provide incident illumination data which can be used to match scene lighting when inserting synthetic elements. The technique appears ideal for on-set motion capture or real-time broadcasting of virtual sets but has yet to be proven.

Underwater motion capture system


Motion capture technology has been available to researchers and scientists for several decades, giving new insight into many fields.

Underwater cameras


The vital part of the system, the underwater camera, has a waterproof housing. The housing has a finish that withstands corrosion and chlorine, which makes it suitable for use in basins and swimming pools. There are two types of camera: infrared underwater cameras, which come with a cyan light strobe instead of the typical IR light for minimum fall-off underwater, and high-speed cameras, which use an LED light or the option of image processing. Industrial high-speed cameras can also be used as infrared cameras.

 
Underwater motion capture camera
 
Motion tracking in swimming by using image processing

Measurement volume

An underwater camera is typically able to measure 15–20 meters, depending on the water quality, the camera, and the type of marker used. The best range is achieved when the water is clear, and, as with any optical system, the measurement volume also depends on the number of cameras. A range of underwater markers are available for different circumstances.

Tailored

Different pools require different mountings and fixtures. Therefore, all underwater motion capture systems are uniquely tailored to suit each specific pool installation. For cameras placed in the center of the pool, specially designed tripods using suction cups are provided.

Markerless


Emerging techniques and research in computer vision are leading to the rapid development of the markerless approach to motion capture. Markerless systems, such as those developed at Stanford University, the University of Maryland, MIT, and the Max Planck Institute, do not require subjects to wear special equipment for tracking. Special computer algorithms are designed to allow the system to analyze multiple streams of optical input and identify human forms, breaking them down into constituent parts for tracking. ESC entertainment, a subsidiary of Warner Brothers Pictures created especially to enable virtual cinematography, including photorealistic digital look-alikes for filming The Matrix Reloaded and The Matrix Revolutions, used a technique called Universal Capture that utilized a seven-camera setup and tracked the optical flow of all pixels over all the 2D planes of the cameras for motion, gesture and facial expression capture, leading to photorealistic results.
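
The per-pixel optical-flow idea behind such approaches can be sketched with OpenCV's dense Farneback algorithm on two consecutive frames from one camera; the frame filenames here are hypothetical.

```python
# Sketch of dense per-pixel optical flow between two consecutive frames,
# the kind of signal markerless approaches track across many cameras.
import cv2

prev = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
curr = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

flow = cv2.calcOpticalFlowFarneback(
    prev, curr, None,
    pyr_scale=0.5, levels=3, winsize=15,
    iterations=3, poly_n=5, poly_sigma=1.2, flags=0)

# flow[y, x] holds the (dx, dy) motion of each pixel between the frames.
print("flow field shape:", flow.shape)
```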

Traditional systems


Traditionally, markerless optical motion tracking is used to keep track of various objects, including airplanes, launch vehicles, missiles and satellites. Many such optical motion tracking applications occur outdoors, requiring differing lens and camera configurations. High-resolution images of the target being tracked can thereby provide more information than just motion data. The image obtained from NASA's long-range tracking system of the space shuttle Challenger's fatal launch provided crucial evidence about the cause of the accident. Optical tracking systems are also used to identify known spacecraft and space debris, despite the disadvantage compared to radar that the objects must be reflecting or emitting sufficient light.[37]

An optical tracking system typically consists of three subsystems: the optical imaging system, the mechanical tracking platform and the tracking computer.

The optical imaging system is responsible for converting the light from the target area into a digital image that the tracking computer can process. Depending on the design of the optical tracking system, the optical imaging system can vary from as simple as a standard digital camera to as specialized as an astronomical telescope on the top of a mountain. The specification of the optical imaging system determines the upper limit of the effective range of the tracking system.

The mechanical tracking platform holds the optical imaging system and is responsible for manipulating the optical imaging system in such a way that it always points to the target being tracked. The dynamics of the mechanical tracking platform combined with the optical imaging system determines the tracking system's ability to keep the lock on a target that changes speed rapidly.

The tracking computer is responsible for capturing the images from the optical imaging system, analyzing the images to extract the target position, and controlling the mechanical tracking platform to follow the target. There are several challenges. First, the tracking computer has to be able to capture the images at a relatively high frame rate, which places a requirement on the bandwidth of the image-capturing hardware. The second challenge is that the image processing software has to be able to extract the target image from its background and calculate its position; several textbook image-processing algorithms are designed for this task. This problem can be simplified if the tracking system can expect certain characteristics that are common to all the targets it will track. The next problem down the line is controlling the tracking platform to follow the target. This is a typical control-system design problem rather than a challenge: it involves modeling the system dynamics and designing controllers to control it. It will, however, become a challenge if the tracking platform the system has to work with is not designed for real-time operation.
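
A toy version of that last control step is a proportional controller that converts pixel error into pan/tilt rate commands; the gain and image geometry below are illustrative assumptions, not values from any real platform.

```python
# Sketch: one control step of a tracking platform. Pixel error between
# the detected target and the image center is scaled into pan/tilt
# rates; the gain is an assumed tuning value.
def track_step(target_px, image_center=(640, 360), gain=0.002):
    """Compute pan/tilt rates (rad/s) that re-center the target."""
    ex = target_px[0] - image_center[0]   # horizontal pixel error
    ey = target_px[1] - image_center[1]   # vertical pixel error
    pan_rate = -gain * ex                 # pan toward the target
    tilt_rate = gain * ey                 # tilt toward the target
    return pan_rate, tilt_rate

print(track_step((700, 300)))  # target right of and above center
```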

The software that runs such systems is also customized for the corresponding hardware components. One example of such software is OpticTracker, which controls computerized telescopes to track moving objects at great distances, such as planes and satellites. Another option is the software SimiShape, which can also be used in hybrid form in combination with markers.

RGB-D cameras


RGB-D cameras such as the Kinect capture both color and depth images. By fusing the two images, colored 3D voxels can be captured, allowing motion capture of 3D human motion and the human surface in real time.
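
The core geometric step of that fusion, back-projecting a depth pixel into a 3D point with the pinhole camera model, can be sketched as follows; the intrinsics are typical Kinect-like values assumed for illustration, not calibrated ones.

```python
# Sketch: pinhole back-projection of a depth pixel into a 3D point in
# the camera frame. Intrinsics below are assumed, not calibrated.
import numpy as np

fx, fy, cx, cy = 525.0, 525.0, 319.5, 239.5  # assumed camera intrinsics

def deproject(u: int, v: int, depth_m: float) -> np.ndarray:
    """Pixel (u, v) plus metric depth -> 3D point (x, y, z)."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])

print(deproject(400, 300, 1.25))  # a point about 1.25 m from the sensor
```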

Because a single-view camera is used, the captured motions are usually noisy. Machine learning techniques have been proposed to automatically reconstruct such noisy motions into higher-quality ones, using methods such as lazy learning[38] and Gaussian models.[39] Such methods generate motion accurate enough for serious applications like ergonomic assessment.[40]

Non-optical systems


Inertial systems


Inertial motion capture[41] technology is based on miniature inertial sensors, biomechanical models and sensor fusion algorithms.[42] The motion data of the inertial sensors (inertial guidance system) is often transmitted wirelessly to a computer, where the motion is recorded or viewed. Most inertial systems use inertial measurement units (IMUs) containing a combination of gyroscopes, magnetometers, and accelerometers to measure rotational rates. These rotations are translated to a skeleton in the software. Much like optical markers, the more IMU sensors, the more natural the data. No external cameras, emitters or markers are needed for relative motions, although they are required to give the absolute position of the user if desired. Inertial motion capture systems capture the full six degrees of freedom of a human's body motion in real time and can give limited direction information if they include a magnetic bearing sensor, although these have much lower resolution and are susceptible to electromagnetic noise. Benefits of using inertial systems include capturing in a variety of environments including tight spaces, no solving, portability, and large capture areas. Disadvantages include lower positional accuracy and positional drift, which can compound over time. These systems are similar to the Wii controllers but are more sensitive and have greater resolution and update rates. They can accurately measure the direction to the ground to within a degree. The popularity of inertial systems is rising amongst game developers,[10] mainly because of the quick and easy setup resulting in a fast pipeline. A range of suits are now available from various manufacturers, with base prices ranging from $1,000 to US$80,000.
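
The sensor fusion these systems depend on can be illustrated, in its very simplest one-axis form, by a complementary filter: gyro integration drifts over time, while the accelerometer's gravity reference corrects it slowly. The sketch and its gain are illustrative, not any vendor's algorithm.

```python
# Sketch: a one-axis complementary filter fusing an integrated gyro rate
# with an accelerometer-derived angle. alpha is an assumed tuning value.
def complementary_filter(angle, gyro_rate, accel_angle, dt, alpha=0.98):
    """Trust the gyro short-term, the accelerometer long-term."""
    return alpha * (angle + gyro_rate * dt) + (1.0 - alpha) * accel_angle

angle = 0.0
for gyro_rate, accel_angle in [(0.5, 0.01), (0.4, 0.02), (0.3, 0.02)]:
    angle = complementary_filter(angle, gyro_rate, accel_angle, dt=0.01)
    print(f"fused angle: {angle:.4f} rad")
```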

Mechanical motion


Mechanical motion capture systems directly track body joint angles and are often referred to as exoskeleton motion capture systems, due to the way the sensors are attached to the body. A performer attaches the skeletal-like structure to their body and as they move so do the articulated mechanical parts, measuring the performer's relative motion. Mechanical motion capture systems are real-time, relatively low-cost, free from occlusion, and wireless (untethered) systems that have unlimited capture volume. Typically, they are rigid structures of jointed, straight metal or plastic rods linked together with potentiometers that articulate at the joints of the body. These suits tend to be in the $25,000 to $75,000 range plus an external absolute positioning system. Some suits provide limited force feedback or haptic input.

Magnetic systems


Magnetic systems calculate position and orientation from the relative magnetic flux of three orthogonal coils on both the transmitter and each receiver.[43] The relative intensity of the voltage or current of the three coils allows these systems to calculate both range and orientation by meticulously mapping the tracking volume. The sensor output is 6DOF, which provides useful results with two-thirds the number of markers required in optical systems; for example, one marker on the upper arm and one on the lower arm give the elbow position and angle.[citation needed] The markers are not occluded by nonmetallic objects but are susceptible to magnetic and electrical interference from metal objects in the environment, like rebar (steel reinforcing bars in concrete) or wiring, which affect the magnetic field, and electrical sources such as monitors, lights, cables and computers. The sensor response is nonlinear, especially toward the edges of the capture area. The wiring from the sensors tends to preclude extreme performance movements.[43] With magnetic systems, it is possible to monitor the results of a motion capture session in real time.[43] The capture volumes for magnetic systems are dramatically smaller than they are for optical systems. With magnetic systems, there is a distinction between alternating-current (AC) and direct-current (DC) systems: DC systems use square pulses, while AC systems use sine-wave pulses.

Stretch sensors


Stretch sensors are flexible parallel-plate capacitors that measure stretch, bend, shear, or pressure and are typically produced from silicone. When the sensor stretches or squeezes, its capacitance value changes. This data can be transmitted via Bluetooth or direct input and used to detect minute changes in body motion. Stretch sensors are unaffected by magnetic interference and are free from occlusion. Their stretchable nature also means they do not suffer from the positional drift common to inertial systems. On the other hand, due to the material properties of their substrates and conducting materials, stretchable sensors suffer from a relatively low signal-to-noise ratio and require filtering or machine learning to make them usable for motion capture. These solutions result in higher latency compared to alternative sensors.
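
One simple filtering choice for such noisy readings is a first-order low-pass (exponential moving average) filter; the sketch below, with made-up capacitance samples, also shows the smoothing/latency trade-off mentioned above.

```python
# Sketch: exponential-moving-average smoothing of noisy stretch-sensor
# readings. Smaller alpha = heavier smoothing but more latency.
def low_pass(samples, alpha=0.1):
    filtered, y = [], samples[0]
    for x in samples:
        y = alpha * x + (1.0 - alpha) * y   # blend new sample into state
        filtered.append(y)
    return filtered

noisy = [1.00, 1.08, 0.95, 1.20, 1.02, 0.97, 1.11]  # capacitance (pF)
print(low_pass(noisy))
```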


Facial motion capture


Most traditional motion capture hardware vendors provide for some type of low-resolution facial capture utilizing anywhere from 32 to 300 markers with either an active or passive marker system. All of these solutions are limited by the time it takes to apply the markers, calibrate the positions and process the data. Ultimately the technology also limits their resolution and raw output quality levels.

High-fidelity facial motion capture, also known as performance capture, is the next generation of fidelity and is utilized to record the more complex movements in a human face in order to capture higher degrees of emotion. Facial capture currently falls into several distinct camps, including traditional motion capture data, blend-shape-based solutions, the capture of the actual topology of an actor's face, and proprietary systems.

The two main techniques are stationary systems with an array of cameras capturing the facial expressions from multiple angles, using software such as the stereo mesh solver from OpenCV to create a 3D surface mesh, and systems that use light arrays to calculate the surface normals from the variance in brightness as the light source, camera position or both are changed. These techniques tend to be limited in feature resolution only by the camera resolution, the apparent object size and the number of cameras. If the user's face covers 50 percent of the working area of the camera and the camera has megapixel resolution, then sub-millimeter facial motions can be detected by comparing frames. Recent work is focusing on increasing the frame rates and doing optical flow to allow the motions to be retargeted to other computer-generated faces, rather than just making a 3D mesh of the actor and their expressions.
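
A quick worked version of that resolution claim, assuming a face about 160 mm wide filling half of a roughly one-megapixel (1280-pixel-wide) frame; the face width is an assumed typical value.

```python
# Worked example of the claim above: millimeters per pixel when a face
# spans half of a ~1 megapixel frame. Face width is an assumption.
face_width_mm = 160.0
image_width_px = 1280
face_width_px = image_width_px * 0.5          # face spans 50% of the frame
mm_per_pixel = face_width_mm / face_width_px  # ~0.25 mm per pixel
print(f"{mm_per_pixel:.2f} mm/pixel")  # sub-pixel tracking -> sub-millimeter
```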

Radio frequency positioning


Radio frequency positioning systems are becoming more viable[citation needed] as higher-frequency radio devices allow greater precision than older technologies such as radar. The speed of light is 30 centimeters per nanosecond (billionth of a second), so a 10 gigahertz (billion cycles per second) radio frequency signal enables an accuracy of about 3 centimeters. By measuring amplitude to a quarter wavelength, it is possible to improve the resolution down to about 8 mm. To achieve the resolution of optical systems, frequencies of 50 gigahertz or higher are needed, which are almost as dependent on line of sight and as easy to block as optical systems. Multipath and reradiation of the signal are likely to cause additional problems, but these technologies will be ideal for tracking larger volumes with reasonable accuracy, since the required resolution at 100-meter distances is not likely to be as high. Many scientists[who?] believe that radio frequency will never produce the accuracy required for motion capture.
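
The figures above follow directly from the carrier wavelength, as a short check shows: a quarter-wavelength at 10 GHz is roughly 8 mm, and approaching optical-system resolution requires 50 GHz or more.

```python
# Worked check of the resolution figures above: range resolution from
# a quarter of the carrier wavelength, using c = 3e8 m/s.
c = 3.0e8  # speed of light, m/s

for f_ghz in (10, 50):
    wavelength_m = c / (f_ghz * 1e9)
    quarter_wave_mm = wavelength_m / 4 * 1000
    print(f"{f_ghz} GHz: wavelength {wavelength_m * 100:.1f} cm, "
          f"quarter-wave resolution ~{quarter_wave_mm:.1f} mm")
```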

Researchers at the Massachusetts Institute of Technology said in 2015 that they had made a system that tracks motion using radio frequency signals.[44]

Non-traditional systems


An alternative approach was developed where the actor is given an unlimited walking area through the use of a rotating sphere, similar to a hamster ball, which contains internal sensors recording the angular movements, removing the need for external cameras and other equipment. Even though this technology could potentially lead to much lower costs for motion capture, the basic sphere is only capable of recording a single continuous direction. Additional sensors worn on the person would be needed to record anything more.

Another alternative is using a 6DOF (Degrees of freedom) motion platform with an integrated omni-directional treadmill with high resolution optical motion capture to achieve the same effect. The captured person can walk in an unlimited area, negotiating different uneven terrains. Applications include medical rehabilitation for balance training, bio-mechanical research and virtual reality.[citation needed]

3D pose estimation


In 3D pose estimation, an actor's pose can be reconstructed from an image or depth map.[45]


References

  1. ^ Goebl, W.; Palmer, C. (2013). Balasubramaniam, Ramesh (ed.). "Temporal Control and Hand Movement Efficiency in Skilled Music Performance". PLOS ONE. 8 (1): e50901. Bibcode:2013PLoSO...850901G. doi:10.1371/journal.pone.0050901. PMC 3536780. PMID 23300946.
  2. ^ Olsen, NL; Markussen, B; Raket, LL (2018), "Simultaneous inference for misaligned multivariate functional data", Journal of the Royal Statistical Society, Series C, 67 (5): 1147–76, arXiv:1606.03295, doi:10.1111/rssc.12276, S2CID 88515233
  3. ^ David Noonan, Peter Mountney, Daniel Elson, Ara Darzi and Guang-Zhong Yang. A Stereoscopic Fibroscope for Camera Motion and 3-D Depth Recovery During Minimally Invasive Surgery. In proc ICRA 2009, pp. 4463–68. http://www.doc.ic.ac.uk/~pmountne/publications/ICRA%202009.pdf
  4. ^ Yamane, Katsu, and Jessica Hodgins. "Simultaneous tracking and balancing of humanoid robots for imitating human motion capture data." Intelligent Robots and Systems, 2009. IROS 2009. IEEE/RSJ International Conference on. IEEE, 2009.
  5. ^ NY Castings, Joe Gatt, Motion Capture Actors: Body Movement Tells the Story Archived 2014-07-03 at the Wayback Machine, Accessed June 21, 2014
  6. ^ Andrew Harris Salomon, Feb. 22, 2013, Backstage Magazine, Growth In Performance Capture Helping Gaming Actors Weather Slump, Accessed June 21, 2014, "..But developments in motion-capture technology, as well as new gaming consoles expected from Sony and Microsoft within the year, indicate that this niche continues to be a growth area for actors. And for those who have thought about breaking in, the message is clear: Get busy...."
  7. ^ Ben Child, 12 August 2011, The Guardian, Andy Serkis: why won't Oscars go ape over motion-capture acting? Star of Rise of the Planet of the Apes says performance capture is misunderstood and its actors deserve more respect, Accessed June 21, 2014
  8. ^ Hugh Hart, January 24, 2012, Wired magazine, When will a motion capture actor win an Oscar?, Accessed June 21, 2014, "...the Academy of Motion Picture Arts and Sciences' historic reluctance to honor motion-capture performances ... Serkis, garbed in a sensor-embedded Lycra body suit, quickly mastered the then-novel art and science of performance-capture acting. ..."
  9. ^ Cheung, German KM, et al. "A real time system for robust 3D voxel reconstruction of human motions." Computer Vision and Pattern Recognition, 2000. Proceedings. IEEE Conference on. Vol. 2. IEEE, 2000.
  10. ^ a b "Xsens MVN Animate – Products". Xsens 3D motion tracking. Retrieved 2019-01-22.
  11. ^ "The Next Generation 1996 Lexicon A to Z: Motion Capture". Next Generation. No. 15. Imagine Media. March 1996. p. 37.
  12. ^ "Motion Capture". Next Generation (10). Imagine Media: 50. October 1995.
  13. ^ Jon Radoff, Anatomy of an MMORPG, "Anatomy of an MMORPG". Archived from the original on 2009-12-13. Retrieved 2009-11-30.
  14. ^ a b "Hooray for Hollywood! Acclaim Studios". GamePro. No. 82. IDG. July 1995. pp. 28–29.
  15. ^ Mason, Graeme. "Martech Games - The Personality People". Retro Gamer. No. 133. p. 51.
  16. ^ "Pre-Street Fighter II Fighting Games". Hardcore Gaming 101. p. 8. Retrieved 26 November 2021.
  17. ^ "Sega Saturn exclusive! Virtua Fighter: fighting in the third dimension" (PDF). Computer and Video Games. No. 158 (January 1995). Future plc. 15 December 1994. pp. 12–3, 15–6, 19.
  18. ^ "Virtua Fighter". Maximum: The Video Game Magazine (1). Emap International Limited: 142–3. October 1995.
  19. ^ Wawro, Alex (October 23, 2014). "Yu Suzuki Recalls Using Military Tech to Make Virtua Fighter 2". Gamasutra. Retrieved 18 August 2016.
  20. ^ "History of Motion Capture". Motioncapturesociety.com. Archived from the original on 2018-10-23. Retrieved 2013-08-10.
  21. ^ "Purdue's Home for Drone Systems Engineering". Aerogram Magazine - 2021-2022. Retrieved 2023-09-18.
  22. ^ "Coin-Op News: Acclaim technology tapped for "Batman" movie". Play Meter. Vol. 20, no. 11. October 1994. p. 22.
  23. ^ "Acclaim Stakes its Claim". RePlay. Vol. 20, no. 4. January 1995. p. 71.
  24. ^ Savage, Annaliza (12 July 2012). "Gollum Actor: How New Motion-Capture Tech Improved The Hobbit". Wired. Retrieved 29 January 2017.
  25. ^ "INTERVIEW: Storymind Entertainment Talks About Upcoming 'My Eyes On You'". That Moment In. 29 October 2017. Retrieved 2022-09-24.
  26. ^ "AI-based pose detection for physical rehabilitation software".
  27. ^ "Markerless Motion Capture | EuMotus". Markerless Motion Capture | EuMotus. Retrieved 2018-10-12.
  28. ^ Corriea, Alexa Ray (30 June 2014). "This facial recognition software lets you be Octodad". Retrieved 4 January 2017 – via www.polygon.com.
  29. ^ Plunkett, Luke (27 December 2013). "Turn Your Human Face Into A Video Game Character". kotaku.com. Retrieved 4 January 2017.
  30. ^ "Put your (digital) game face on". fxguide.com. 24 April 2016. Retrieved 4 January 2017.
  31. ^ "羽生結弦"動いたこと"は卒論完成 24時間テレビにリモート出演 (Yuzuru Hanyu completes graduation thesis on "moving things" and appears remotely on TV 24 hours a day)" (in Japanese). Retrieved 2 September 2023.
  32. ^ "羽生結弦が卒業論文を公開 24時間テレビに出演 (Yuzuru Hanyu publishes graduation thesis and appears on 24-hour TV)" (in Japanese). Retrieved 2 September 2023.
  33. ^ Hanyu, Yuzuru; 羽生, 結弦 (18 March 2021). 無線・慣性センサー式モーションキャプチャシステムのフィギュアスケートでの利活用に関するフィージビリティスタディ (A feasibility study on the use of wireless and inertial sensor motion capture systems in figure skating) (Thesis) (in Japanese). Waseda University. Retrieved 2 September 2023.
  34. ^ Sturm, Jürgen, et al. "A benchmark for the evaluation of RGB-D SLAM systems." Intelligent Robots and Systems (IROS), 2012 IEEE/RSJ International Conference on. IEEE, 2012.
  35. ^ Li, Jian; Yang, Jiushan; Xu, Zhanwang; Peng, Jingliang (November 2012). "Computer-assisted hand rehabilitation assessment using an optical motion capture system". 2012 International Conference on Image Analysis and Signal Processing. IEEE. pp. 1–5. doi:10.1109/iasp.2012.6425069. ISBN 978-1-4673-2546-2.
  36. ^ "Motion Capture: Optical Systems". Next Generation (10). Imagine Media: 53. October 1995.
  37. ^ Veis, G. (1963). "Optical tracking of artificial satellites". Space Science Reviews. 2 (2): 250–296. Bibcode:1963SSRv....2..250V. doi:10.1007/BF00216781. S2CID 121533715.
  38. ^ Shum, Hubert P. H.; Ho, Edmond S. L.; Jiang, Yang; Takagi, Shu (2013). "Real-Time Posture Reconstruction for Microsoft Kinect". IEEE Transactions on Cybernetics. 43 (5): 1357–1369. doi:10.1109/TCYB.2013.2275945. PMID 23981562. S2CID 14124193.
  39. ^ Liu, Zhiguang; Zhou, Liuyang; Leung, Howard; Shum, Hubert P. H. (2016). "Kinect Posture Reconstruction based on a Local Mixture of Gaussian Process Models" (PDF). IEEE Transactions on Visualization and Computer Graphics. 22 (11): 2437–2450. doi:10.1109/TVCG.2015.2510000. PMID 26701789. S2CID 216076607.
  40. ^ Plantard, Pierre; Shum, Hubert P. H.; Pierres, Anne-Sophie Le; Multon, Franck (2017). "Validation of an Ergonomic Assessment Method using Kinect Data in Real Workplace Conditions". Applied Ergonomics. 65: 562–569. doi:10.1016/j.apergo.2016.10.015. PMID 27823772. S2CID 13658487.
  41. ^ "Full 6DOF Human Motion Tracking Using Miniature Inertial Sensors" (PDF). Archived from the original (PDF) on 2013-04-04. Retrieved 2013-04-03.
  42. ^ "A history of motion capture". Xsens 3D motion tracking. Archived from the original on 2019-01-22. Retrieved 2019-01-22.
  43. ^ a b c "Motion Capture: Magnetic Systems". Next Generation (10). Imagine Media: 51. October 1995.
  44. ^ Alba, Alejandro (November 2015). "MIT researchers create device that can recognize, track people through walls". nydailynews.com. Retrieved 2019-12-09.
  45. ^ Ye, Mao, et al. "Accurate 3d pose estimation from a single depth image Archived 2020-01-13 at the Wayback Machine." 2011 International Conference on Computer Vision. IEEE, 2011.