3D-Audio Matting, Postediting, and Rerendering from Field Recordings

We present a novel approach to real-time spatial rendering of realistic auditory environments and sound sources recorded live, in the field. Using a set of standard microphones distributed throughout a real-world environment, we record the sound field simultaneously from several locations. After spatial calibration, we segment from this set of recordings a number of auditory components, together with their location. We compare existing time delay of arrival estimation techniques between pairs of widely spaced microphones and introduce a novel efficient hierarchical localization algorithm. Using the high-level representation thus obtained, we can edit and rerender the acquired auditory scene over a variety of listening setups. In particular, we can move or alter the different sound sources and arbitrarily choose the listening position. We can also composite elements of different scenes together in a spatially consistent way. Our approach provides efficient rendering of complex soundscapes which would be challenging to model using discrete point sources and traditional virtual acoustics techniques. We demonstrate a wide range of possible applications for games, virtual and augmented reality, and audiovisual post-production.
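The abstract mentions comparing time-delay-of-arrival (TDOA) estimation techniques between pairs of widely spaced microphones. As a minimal illustration of the underlying idea (the classic cross-correlation estimator, not the paper's hierarchical localization algorithm; function name and test signals are hypothetical), such an estimator might be sketched as:

```python
import numpy as np

def estimate_tdoa(sig_ref, sig_mic, fs):
    """Estimate the delay (in seconds) of sig_mic relative to sig_ref
    from the peak of their full cross-correlation."""
    # np.correlate(a, v, 'full')[i] corresponds to lag i - (len(v) - 1).
    cc = np.correlate(sig_mic, sig_ref, mode="full")
    lag = int(np.argmax(cc)) - (len(sig_ref) - 1)
    return lag / fs

# Synthetic check: a noise burst delayed by 37 samples at 48 kHz.
rng = np.random.default_rng(0)
ref = rng.standard_normal(1024)
mic = np.concatenate([np.zeros(37), ref])[:1024]
print(estimate_tdoa(ref, mic, 48000))  # expected: 37/48000 seconds
```

With several microphone pairs, the resulting TDOAs constrain the source position geometrically, which is the starting point for localization schemes such as the one the paper introduces.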

Bibliographic Details
Main Authors: Guillaume Lemaitre, Nicolas Tsingos, Emmanuel Gallo
Format: Article
Language: English
Published: SpringerOpen 2007-01-01
Series: EURASIP Journal on Advances in Signal Processing
Online Access: http://dx.doi.org/10.1155/2007/47970
issn 1687-6172
1687-6180
publishDate 2007-01-01