Wiggle stereoscopy is a form of stereoscopy in which the left and right images of a stereogram are alternated in a rapid animation. This technique is also called wiggle 3-D, wobble 3-D, wigglegram, or sometimes Piku-Piku (Japanese for "twitching").[1]

Example of wiggle stereoscopy, a street in Cork, Ireland in 1927

The sense of depth from such images is due to parallax and to changes in the occlusion of background objects. In contrast to other stereo display techniques, the same image is presented to both eyes.

Advantages and disadvantages

Wiggle stereoscopy offers the advantages that no glasses or special hardware are required, and that most people can perceive the effect more quickly than with cross-eyed or parallel viewing techniques. Furthermore, it offers stereo-like depth to people with limited or no vision in one eye.

Disadvantages of wiggle stereoscopy are that it does not provide true binocular depth perception; that it is not suitable for print media, being limited to displays that can alternate between the two images; and that it is difficult to appreciate details in images that are constantly in motion.

Number and timing of images

Most wiggle images use only two images, yielding a jerky animation. A smoother animation can be produced by inserting several intermediate images between the left and right images, which form the endpoints of the sequence. If intermediate images are not available, approximations can be computed from the end images using techniques known as view interpolation.[2] The two end images may be displayed for longer than the intermediate images to allow inspection of details in the left and right views.

Another option for reducing the impression of jerkiness is to reduce the time between the frames of a wiggle image.[citation needed]
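The frame ordering and timing described above can be sketched as a small scheduling function. This is a minimal illustration, not part of any standard library; the function name, frame indexing, and the specific durations are hypothetical choices, with the end frames held longer as the text suggests.

```python
def wiggle_schedule(n_intermediate, end_ms=200, mid_ms=50):
    """Frame indices and display times (ms) for one wiggle cycle.

    Frame 0 is the left image and frame n_intermediate + 1 the right
    image; frames in between are (possibly view-interpolated)
    intermediates. The two end images are held longer so that their
    details can be inspected, per the timing strategy described above.
    """
    n = n_intermediate + 2                 # total distinct frames
    forward = list(range(n))               # left -> right
    backward = list(range(n - 2, 0, -1))   # right -> left, endpoints not repeated
    order = forward + backward             # one loopable cycle
    durations = [end_ms if i in (0, n - 1) else mid_ms for i in order]
    return order, durations

# Two intermediates: left, i1, i2, right, i2, i1 -> loops smoothly
order, durations = wiggle_schedule(2)
```

Such a schedule could be fed to any animation format that supports per-frame durations, such as an animated GIF or a JavaScript timer loop.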

3D photos from a single image

Monocular portrait images of human faces converted into a moving 3D photo using machine-learning depth estimation with TensorFlow.js[3] in the browser

With advances in machine learning and computer vision,[3] it is now also possible to recreate this effect using a single monocular image as an input.[4]

In this case, a segmentation model can be combined with a depth estimation model[5] to estimate, for every pixel in the image, the distance from the viewpoint to the surface visible at that pixel (known as a depth map). With that information, the pixel data can then be rendered as if it were three-dimensional, creating a subtle 3D effect.
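The re-rendering step can be sketched with a toy example, assuming a depth map has already been estimated. This is a minimal, hypothetical illustration (the function name and conventions are invented): each pixel is shifted horizontally in proportion to its depth, which is the motion parallax that produces the wiggle depth cue. Real renderers also fill the resulting gaps and resolve occlusions by depth order; this sketch does neither.

```python
def render_view(image, depth, shift_scale):
    """Re-render a single image from a slightly shifted virtual viewpoint.

    image, depth: 2D lists of equal shape; depth values range from
    0.0 (far) to 1.0 (near), so near pixels move more than far ones.
    Returns a new 2D list, with None where no source pixel lands
    (disocclusions); pixels written later simply overwrite earlier ones.
    """
    h, w = len(image), len(image[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            dx = round(shift_scale * depth[y][x])  # parallax shift
            nx = x + dx
            if 0 <= nx < w:
                out[y][nx] = image[y][x]
    return out

# One row: the near pixel (depth 1.0) shifts, exposing a gap behind it
view = render_view([[1, 2, 3]], [[1.0, 0.0, 0.0]], 1)
```

Rendering several views with gradually varying `shift_scale` and playing them in sequence yields the wiggle animation from a single photograph.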

Perception

The sense of depth from wiggle 3-D images is due to parallax and to changes to the occlusion of background objects.[6]

Although wiggle stereoscopy permits the perception of stereoscopic images, it is not a "true" three-dimensional stereoscopic display format, in the sense that it does not present each eye with its own separate view. The sense of depth may be enhanced by closing one eye, which removes the conflicting vergence cue of both eyes looking at the flat image plane.

The apparent stereo effect results from syncing the timing of the wiggle and the amount of parallax to the processing done by the visual cortex. Three or five images with good parallax may produce a better effect than simple left and right images.[citation needed]

Wiggling works for the same reason that a transitional pan (or tracking shot) in a film provides good depth information: the visual cortex is able to infer distance information from motion parallax, the relative speed of the perceived motion of different objects on the screen. Many small animals bob their heads to create motion parallax (wiggling) so they can better estimate distance prior to jumping.[7][8][9]

References

  1. ^ Simulated 3D—Wiggle 3D, Shortcourses.com
  2. ^ For a recent overview of view interpolation techniques, see for example N. Martin, S. Roy: Fast view interpolation from stereo: Simpler can be better (PDF), Proceedings of 3DPTV'08, The Fourth International Symposium on 3-D Data Processing, Visualization and Transmission, June 18–20, 2008, Georgia Institute of Technology, Atlanta, Georgia, USA
  3. ^ a b "Depth Estimation", GitHub
  4. ^ "Depth TensorFlow.js Demo", googleapis.com
  5. ^ "Portrait Depth API: Turning a Single Image into a 3D Photo with TensorFlow.js". Retrieved 2022-07-01.
  6. ^ John Altoft: Data visualization for ESM and ELINT. Visualizing 3-D and hyper-dimensional data, Defense R&D Canada (DRDC) Ottawa, Contract Report 2011-084, June 2011
  7. ^ Scott B. Steinman; Barbara A. Steinman; Ralph Philip Garzia (2000). Foundations of Binocular Vision: A Clinical perspective. McGraw-Hill Medical. p. 180. ISBN 0-8385-2670-5.
  8. ^ Legg, C. R.; Lambert, S. (December 1990). "Distance estimation in the hooded rat: Experimental evidence for the role of motion cues". Behavioural Brain Research. 41 (1): 11–20. doi:10.1016/0166-4328(90)90049-K. PMID 2073352. S2CID 45771715.
  9. ^ Ellard, Colin Godfrey; Goodale, Melvin A.; Timney, Brian (October 1984). "Distance estimation in the Mongolian gerbil: the role of dynamic depth cues". Behavioural Brain Research. 14 (1): 29–39. doi:10.1016/0166-4328(84)90017-2. PMID 6518079. S2CID 15508633.