Joint view expansion and filtering for automultiscopic 3D displays

Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. This, however, cannot be easily obtained with current stereoscopic production pipelines. We provide a practical solution that takes a stereoscopic video as input and converts it to multi-view and filtered video streams that can be used to drive multi-view autostereoscopic displays. The method combines phase-based video magnification and interperspective antialiasing into a single filtering process. The whole algorithm is simple and can be implemented efficiently on current GPUs to yield near real-time performance. Furthermore, disparity retargeting is naturally supported. Our method is robust and works well for challenging video scenes with defocus blur, motion blur, transparent materials, and specularities. We show that our results are superior to state-of-the-art depth-based rendering methods. Finally, we showcase the method in the context of a real-time 3D videoconferencing system that requires only two cameras.


Bibliographic Details
Main Authors: Didyk, Piotr (Contributor), Sitthi-Amorn, Pitchaya (Contributor), Freeman, William T. (Contributor), Durand, Fredo (Contributor), Matusik, Wojciech (Contributor)
Other Authors: Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory (Contributor)
Format: Article
Language: English
Published: Association for Computing Machinery, 2014-05-02T19:15:27Z.
Online Access: Get fulltext
LEADER 02271 am a22003013u 4500
001 86390
042 |a dc 
100 1 0 |a Didyk, Piotr  |e author 
100 1 0 |a Massachusetts Institute of Technology. Computer Science and Artificial Intelligence Laboratory  |e contributor 
100 1 0 |a Didyk, Piotr  |e contributor 
100 1 0 |a Sitthi-Amorn, Pitchaya  |e contributor 
100 1 0 |a Freeman, William T.  |e contributor 
100 1 0 |a Durand, Fredo  |e contributor 
100 1 0 |a Matusik, Wojciech  |e contributor 
700 1 0 |a Sitthi-Amorn, Pitchaya  |e author 
700 1 0 |a Freeman, William T.  |e author 
700 1 0 |a Durand, Fredo  |e author 
700 1 0 |a Matusik, Wojciech  |e author 
245 0 0 |a Joint view expansion and filtering for automultiscopic 3D displays 
260 |b Association for Computing Machinery,   |c 2014-05-02T19:15:27Z. 
856 |z Get fulltext  |u http://hdl.handle.net/1721.1/86390 
520 |a Multi-view autostereoscopic displays provide an immersive, glasses-free 3D viewing experience, but they require correctly filtered content from multiple viewpoints. This, however, cannot be easily obtained with current stereoscopic production pipelines. We provide a practical solution that takes a stereoscopic video as input and converts it to multi-view and filtered video streams that can be used to drive multi-view autostereoscopic displays. The method combines phase-based video magnification and interperspective antialiasing into a single filtering process. The whole algorithm is simple and can be implemented efficiently on current GPUs to yield near real-time performance. Furthermore, disparity retargeting is naturally supported. Our method is robust and works well for challenging video scenes with defocus blur, motion blur, transparent materials, and specularities. We show that our results are superior to state-of-the-art depth-based rendering methods. Finally, we showcase the method in the context of a real-time 3D videoconferencing system that requires only two cameras. 
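The core idea in the abstract, synthesizing new viewpoints by scaling phase differences between the two input views rather than by explicit depth estimation, can be illustrated with a toy sketch. This is not the paper's algorithm: it assumes the two views differ by a single global horizontal shift, so one Fourier-domain phase shift stands in for the local complex steerable-pyramid decomposition the actual method builds on, and all function names are hypothetical.

```python
import numpy as np

def synthesize_view(left, right, alpha):
    """Toy phase-based view interpolation on 1-D scanlines.

    alpha = 0 reproduces `left`, alpha = 1 reproduces `right`;
    values outside [0, 1] extrapolate additional viewpoints,
    which is how two input views can be expanded to many.
    """
    L, R = np.fft.fft(left), np.fft.fft(right)
    # Per-frequency phase difference between the views, wrapped to (-pi, pi].
    dphi = np.angle(R * np.conj(L))
    # Scale the phase difference by alpha and reapply it to the left view;
    # magnitudes are blended linearly (they are equal for a pure shift).
    mag = (1 - alpha) * np.abs(L) + alpha * np.abs(R)
    out = mag * np.exp(1j * (np.angle(L) + alpha * dphi))
    return np.real(np.fft.ifft(out))

# Demo: a smooth bump seen 4 samples apart by the two "cameras".
x = np.arange(256)
left = np.exp(-0.5 * ((x - 120) / 10.0) ** 2)
right = np.exp(-0.5 * ((x - 124) / 10.0) ** 2)

mid = synthesize_view(left, right, 0.5)  # halfway viewpoint
expected = np.exp(-0.5 * ((x - 122) / 10.0) ** 2)
```

Because phase wraps at pi, a single global shift only works for smooth, band-limited content; the paper's local multi-scale decomposition is what lets each scale and region carry its own (small) phase difference, which also makes disparity retargeting a simple rescaling of those phases.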
520 |a Quanta Computer (Firm) 
520 |a National Science Foundation (U.S.) (NSF IIS-1111415) 
520 |a National Science Foundation (U.S.) (NSF IIS-1116296) 
546 |a en_US 
655 7 |a Article 
773 |t ACM Transactions on Graphics