Back in June 2018 we reported on the release of the EBU ADM Renderer (EAR), which is specified in EBU Tech 3388. This is a system for rendering the types of content defined by the Audio Definition Model (ADM) to any defined loudspeaker layout.
The EAR is part of our wider efforts to standardise open formats for working with so-called 'next-generation audio' (NGA), which aims to make audio experiences more accessible and immersive. In addition to the ADM as a way of representing NGA content, a renderer (such as the EAR) is an important piece of the puzzle, since it defines what the parameters in the format mean in terms of the signals played out of the loudspeakers.
In July, the ITU published Recommendation ITU-R BS.2127, which is based on the EAR algorithm. The ITU ADM Renderer also comes with a Python implementation, which allows developers to try out the algorithm easily. While the Python implementation is great for experimental work, understanding the algorithm and validating ADM files, it can't be used for real-time applications. Therefore, a C++ library containing the core EAR functionality has been developed.
Libear
We worked with IRT in developing libear as a collaboration within the EBU. This library is available under the permissive Apache 2.0 licence. Libear contains just the core parts of the EAR project (calculation of gains and some DSP components); we recommend using it with libadm and libbw64 (both developed by IRT) when developing applications which also need to read, write and process ADM content.
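To give a flavour of the library, here is a minimal sketch of calculating the panning gains for a single object, based on the usage shown in the libear documentation (exact names may vary between releases):

```cpp
// A minimal sketch of calculating panning gains for one audio object with
// libear, following the usage shown in the library's documentation. Check
// the README of the release you are using, as names may differ.
#include <iostream>
#include <vector>

#include <ear/ear.hpp>

int main() {
  // Look up a standard BS.2051 loudspeaker layout by its name.
  ear::Layout layout = ear::getLayout("0+5+0");

  // Gain calculator for object-based (Objects type) content.
  ear::GainCalculatorObjects gc(layout);

  // Metadata for one object: 30 degrees to the left, at ear height.
  ear::ObjectsTypeMetadata otm;
  otm.position = ear::PolarPosition(30.0f, 0.0f, 1.0f);

  // One direct and one diffuse gain per loudspeaker.
  std::vector<float> directGains(layout.channels().size());
  std::vector<float> diffuseGains(layout.channels().size());
  gc.calculate(otm, directGains, diffuseGains);

  for (std::size_t i = 0; i < directGains.size(); ++i)
    std::cout << "channel " << i << ": " << directGains[i] << "\n";

  return 0;
}
```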
There are several potential applications for a library like libear. It could be included in a Digital Audio Workstation (DAW) to render NGA content, either integrated directly or as a suite of plugins. It could be built into a stand-alone ADM monitoring system, or used to render ADM content to legacy formats before emission.
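In all of these cases, the host-side integration can be as simple as a gain-and-sum loop. The sketch below is our own illustrative code, not part of the libear API; it mixes a mono object signal into per-loudspeaker buffers using gains like those calculated above:

```cpp
#include <cstddef>
#include <vector>

// Illustrative host code (not part of libear): mix one object's mono signal
// into per-loudspeaker output buffers using the calculated direct gains.
// A real renderer would also interpolate gains between metadata blocks and
// pass the diffuse gains through decorrelation filters.
void mixObject(const std::vector<float>& objectSamples,
               const std::vector<float>& directGains,
               std::vector<std::vector<float>>& speakerFeeds) {
  for (std::size_t ch = 0; ch < directGains.size(); ++ch)
    for (std::size_t n = 0; n < objectSamples.size(); ++n)
      speakerFeeds[ch][n] += directGains[ch] * objectSamples[n];
}
```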
The current release of the library supports channel-based, scene-based and object-based audio, though some parameters are not yet supported; full details can be found in the README. The API is already complete, so there's no need to wait before starting integration work.
Do feel free (and it is free!) to get the source code from GitHub, and have a go at building it and integrating it into your applications. You can also read the documentation online.
What's Next?
We'll continue to work on libear in the coming months, implementing the missing features and responding to feedback from users. We've got a few projects using libear already; we'll publish more details when they are released.
- BBC R&D - Casualty, Loud and Clear - Our Accessible and Enhanced Audio Trial
- BBC R&D - 5 live Football Experiment: What We Learned
- Immersive Audio Training and Skills from the BBC Academy including:
- Sound Bites - An Immersive Masterclass
- Sounds Amazing - audio gurus share tips
Immersive and Interactive Content section
The Immersive and Interactive Content (IIC) section is a group of around 25 researchers investigating ways of capturing and creating new kinds of audio-visual content, with a particular focus on immersion and interactivity.