Particle-filter tracking of sounds for frequency-independent 3D audio rendering from distributed B-format recordings


Bibliographic Details
Main Authors: Blochberger Matthias, Zotter Franz
Format: Article
Language: English
Published: EDP Sciences, 2021-01-01
Series: Acta Acustica
Subjects: 6dof rendering, variable-perspective rendering, multi-perspective audio
Online Access: https://acta-acustica.edpsciences.org/articles/aacus/full_html/2021/01/aacus200070/aacus200070.html
author Blochberger Matthias
Zotter Franz
author_sort Blochberger Matthias
collection DOAJ
description Six-Degree-of-Freedom (6DoF) audio rendering interactively synthesizes spatial audio signals for a variable listener perspective, based on surround recordings taken at multiple perspectives distributed across the listening area in the acoustic scene. Methods that rely on recording-implicit directional information and interpolate the listener perspective without attempting to localize and extract sounds often yield high audio quality, but are limited in spatial definition. Methods that perform sound localization, extraction, and rendering typically operate in the time-frequency domain and risk introducing artifacts such as musical noise. We propose to take advantage of the rich spatial information recorded in the broadband time-domain signals of the multitude of distributed first-order (B-format) recording perspectives. Broadband time-variant signal extraction, which retrieves direct signals and leaves residuals to approximate diffuse and spacious sounds, poses less of a quality risk, as does the broadband re-encoding that enhances the spatial definition of both signal types. To detect and track direct sound objects in this process, we combine the directional data recorded at the individual perspectives into a volumetric multi-perspective activity map for particle-filter tracking. Our technical and perceptual evaluation confirms that this kind of processing enhances the otherwise limited spatial definition of direct-sound objects in other broadband but signal-independent interpolation approaches, such as virtual loudspeaker object (VLO) or Vector-Based Intensity Panning (VBIP).
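The tracking idea sketched in the abstract — fusing direction-of-arrival (DOA) cues from several distributed recording perspectives and following a sound object with a particle filter — can be illustrated with a minimal bootstrap particle filter. This is a hedged, self-contained sketch, not the authors' implementation: the microphone positions, the random-walk motion model, the von-Mises-Fisher-style directional likelihood, and all function names (`doa`, `track`) are illustrative assumptions; the paper's volumetric activity map is approximated here by scoring each particle's agreement with the DOAs observed at every perspective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: four recording perspectives at the corners of a
# 4 m x 4 m square on the floor (positions in metres).
mics = np.array([[0, 0, 0], [4, 0, 0], [0, 4, 0], [4, 4, 0]], float)

def doa(src, mic):
    """Unit direction-of-arrival vector from a perspective to a source."""
    d = src - mic
    return d / np.linalg.norm(d)

def track(observed_path, n_particles=500, sigma_q=0.1, kappa=50.0):
    """Bootstrap particle filter over a 3D source position.

    Each particle is weighted by how well the DOAs it implies at every
    perspective agree with the observed DOAs (cosine similarity scaled
    by a concentration kappa, i.e. a von-Mises-Fisher-style likelihood).
    """
    particles = rng.uniform([0, 0, 0], [4, 4, 2], size=(n_particles, 3))
    weights = np.full(n_particles, 1.0 / n_particles)
    estimates = []
    for src in observed_path:
        # Idealized observations (a real system would estimate noisy
        # DOAs from the B-format signals at each perspective).
        obs = np.array([doa(src, m) for m in mics])
        # Predict: random-walk motion model.
        particles = particles + rng.normal(0, sigma_q, particles.shape)
        # Update: accumulate directional agreement over all perspectives.
        logw = np.zeros(n_particles)
        for m, o in zip(mics, obs):
            d = particles - m
            d = d / np.linalg.norm(d, axis=1, keepdims=True)
            logw += kappa * (d @ o)
        w = np.exp(logw - logw.max())
        weights = w / w.sum()
        estimates.append(weights @ particles)  # posterior-mean estimate
        # Resample when the effective sample size collapses.
        if 1.0 / np.sum(weights**2) < n_particles / 2:
            idx = rng.choice(n_particles, n_particles, p=weights)
            particles = particles[idx]
            weights = np.full(n_particles, 1.0 / n_particles)
    return np.array(estimates)

# Source moving on a straight line through the array at 1 m height.
path = np.stack([np.linspace(1, 3, 20),
                 np.full(20, 2.0),
                 np.full(20, 1.0)], axis=1)
est = track(path)
```

With noise-free DOAs the filter locks onto the moving source within a few steps; in the multi-perspective recording scenario the same weighting step would instead evaluate each particle against the directional activity contributed by every B-format perspective.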
first_indexed 2024-03-12T07:22:07Z
format Article
id doaj.art-45f895dc4e6541bc9a78dbfa6abd21a1
institution Directory Open Access Journal
issn 2681-4617
language English
last_indexed 2024-03-12T07:22:07Z
publishDate 2021-01-01
publisher EDP Sciences
record_format Article
series Acta Acustica
spelling doaj.art-45f895dc4e6541bc9a78dbfa6abd21a1 (indexed 2023-09-02T22:17:25Z)
doi 10.1051/aacus/2021012
article_id aacus200070
orcid Blochberger Matthias: https://orcid.org/0000-0001-7331-7162
orcid Zotter Franz: https://orcid.org/0000-0002-6201-1106
title Particle-filter tracking of sounds for frequency-independent 3D audio rendering from distributed B-format recordings
topic 6dof rendering
variable-perspective rendering
multi-perspective audio
url https://acta-acustica.edpsciences.org/articles/aacus/full_html/2021/01/aacus200070/aacus200070.html