Looking into the EAR - Turning Audio Research Into Practice

Recent achievements helping to change the audio industry, including the award-winning EBU ADM Renderer.

Published: 14 June 2018
  • Chris Pike (MEng PhD)

    Lead R&D Engineer - Audio
  • Tom Nixon

    Project R&D Engineer

We recently gave an update on some of the great work happening within the BBC Audio Research Partnership. The audio team here at BBC R&D is working to turn the latest research into practice for the BBC and its audiences. We've had some great success lately, including winning the EBU Technology & Innovation Award for a project we've worked on called the EBU ADM Renderer (EAR). This post gives a little more insight into this side of our work, and particularly the EAR project.

Besides the Audio Research Partnership, working in the BBC R&D audio team involves developing and evaluating production tools, running listening tests, supporting production trials, and developing training resources with the BBC Academy. We also do a great deal of collaborative work with industry partners and in standardisation bodies.

Orpheus Project

We're just coming to the end of the Orpheus project, a 3-year EU-funded collaboration with industry partners from across Europe, aiming to make object-based audio a practical reality. We've talked before about the experimental studio that we built in Broadcasting House as part of this project. The project has now produced a report describing the end-to-end object-based audio system architecture that it developed, which has been published by the European Broadcasting Union (EBU). We don't promise that it gives the answer to life, the universe and everything, but if you want to know more about applying object-based audio in practice, it's a good place to start! The project also held a one-day workshop in Munich to share the outcomes with the rest of the industry. Videos of all the talks are available online.

The Audio Definition Model (ADM)

A lot of our work in this area involves the Audio Definition Model (ADM), but what exactly is it? The ADM is a data model for describing audio experiences. When put like that it might not sound that useful, but if it's stored along with audio in a WAV (BWAV/BW64) file, it can be thought of as a file format for storing next-generation audio content. It can be used to say simple things about channel-based audio like "this file contains stereo content" or "this file contains 5.1 content", replacing legacy WAV metadata, as well as supporting newer types of content like scene-based audio (often called higher-order ambisonics, or HOA) and object-based audio.

In addition to information about the audio, ADM metadata can represent programme information (title, language etc.), and has an object and interactivity model which can be used to build experiences which adapt to the requirements and desires of individual listeners.
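To make this concrete, here is a minimal sketch of what ADM metadata can look like, built with Python's standard library. The IDs and names are illustrative, but the element structure (an audioProgramme referencing an audioContent, which references an audioObject pointing at the common-definitions stereo pack) follows ITU-R BS.2076; in a real BW64 file this XML is carried alongside the audio samples.

    # A minimal, hand-rolled sketch of ADM metadata, assuming a single stereo
    # programme; the IDs follow BS.2076 conventions, and AP_00010002 is the
    # common-definitions pack format for stereo.
    import xml.etree.ElementTree as ET

    adm = ET.Element("audioFormatExtended")

    prog = ET.SubElement(adm, "audioProgramme", {
        "audioProgrammeID": "APR_1001",
        "audioProgrammeName": "Example Programme",
        "audioProgrammeLanguage": "en",
    })
    ET.SubElement(prog, "audioContentIDRef").text = "ACO_1001"

    content = ET.SubElement(adm, "audioContent", {
        "audioContentID": "ACO_1001",
        "audioContentName": "Main",
    })
    ET.SubElement(content, "audioObjectIDRef").text = "AO_1001"

    obj = ET.SubElement(adm, "audioObject", {
        "audioObjectID": "AO_1001",
        "audioObjectName": "Stereo bed",
    })
    # "This file contains stereo content": reference the stereo pack format.
    ET.SubElement(obj, "audioPackFormatIDRef").text = "AP_00010002"

    print(ET.tostring(adm, encoding="unicode"))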

The ADM is an open standard, so it can be implemented and used by anyone. We'd like to see the ADM adopted as an interchange format for programme production and delivery, and it's great to see this starting to happen: Avid Pro Tools has added ADM support, and MAGIX has done the same as part of the Orpheus project, but there's still a lot of work to be done.

On our side we're continuing to work to improve the ADM, to standardise a form of ADM metadata which can be serialised and sent over a network to allow live production of ADM content, and to define ADM "profiles": agreed subsets of the ADM which should be used for specific applications.
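As a rough illustration of the live-production idea, the toy sketch below chops object position metadata into short timed frames and pushes them over UDP. The frame shape, port, and transport here are invented for illustration only; the real serialised-ADM work defines proper self-contained metadata frames.

    # Illustrative only: sends one small metadata "frame" per half-second
    # block over UDP. The XML shape, address, and timing are assumptions for
    # this sketch, not a real serialised-ADM frame.
    import socket
    import time

    FRAME_XML = (
        '<frame start="{start:.2f}" duration="0.50">'
        '<position azimuth="{az:.1f}" elevation="0.0"/>'
        '</frame>'
    )

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for i in range(4):
        frame = FRAME_XML.format(start=i * 0.5, az=30.0 - 20.0 * i)
        sock.sendto(frame.encode("utf-8"), ("127.0.0.1", 9000))  # hypothetical receiver
        time.sleep(0.5)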

We're also continuing to build tools for working with the ADM, for example…

The EBU ADM Renderer

We have also been collaborating within the EBU to create the EBU ADM Renderer (a.k.a. the EAR). This is a system for rendering the types of content defined by the ADM (channel-based, object-based, scene-based) to any defined loudspeaker system. We have worked with our partners to release this specification with an accompanying open-source reference implementation. If you read about last week's EBU Technical Assembly held here in Salford, you might have noticed that the EAR won the EBU Technology & Innovation Award! We're thrilled to have won this award with our partners and are hopeful that the industry adopts the EAR in future.
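The reference implementation is published as a Python package, so you can try it yourself. As a quick example (exact options may vary between releases), rendering an ADM BW64 file to a 5.1 loudspeaker layout looks something like this:

    pip install ear
    ear-render -s 0+5+0 input.wav output_0+5+0.wav

Here 0+5+0 is the ITU-R BS.2051 name for the familiar 5.1 layout.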

The EBU ADM Renderer is part of our wider efforts to standardise open formats for working with so-called "next-generation audio" (NGA). In addition to the Audio Definition Model (ADM) as a way of representing NGA audio formats, a renderer (such as the EAR) is an important piece of the puzzle, since it defines what the parameters in the format definition mean in terms of signals that are played out of the speakers. We are currently working within the ITU-R to standardise rendering techniques; we expect the EAR to be part of that, and eventually to end up as the spatial audio renderer in a wide range of tools, such as digital audio workstations and mixing consoles. For more insight into the EAR and how it can be used, our partners at the IRT have created a neat introduction.
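To give a feel for what a renderer actually does, here is a deliberately simple Python sketch that turns one object parameter (azimuth) into loudspeaker gains using a basic sine/cosine pan law for a two-speaker setup. This is a toy for illustration, not the EAR's own point-source panner, which handles arbitrary BS.2051 layouts and many more metadata parameters.

    # Toy "renderer": map an object's azimuth to [left, right] speaker gains
    # with an energy-preserving sine/cosine pan law. Positive azimuth is to
    # the left, as in the ADM coordinate convention.
    import numpy as np

    def stereo_pan_gains(azimuth_deg: float, speaker_deg: float = 30.0) -> np.ndarray:
        az = float(np.clip(azimuth_deg, -speaker_deg, speaker_deg))
        theta = (az / speaker_deg) * (np.pi / 4)  # map to -45..+45 degrees
        left = np.sin(theta + np.pi / 4)
        right = np.cos(theta + np.pi / 4)
        return np.array([left, right])

    # An object halfway towards the left speaker gets more level on the left:
    print(stereo_pan_gains(15.0))  # ~[0.924, 0.383]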

As with the ADM, this project is far from over. We're planning on releasing the next version of the specification in October to tie off some loose ends, and are starting to work on things like binaural and HOA output formats from the renderer, and a real-time implementation.

We recently attended the Audio Engineering Society Convention in Milan, where we presented a workshop about the EAR. After dinner the night before the presentation we discovered this sculpture down a tiny back street:



(Photo: an ear sculpture on a back street in Milan.)
  • Immersive and Interactive Content section

    The IIC section is a group of around 25 researchers investigating ways of capturing and creating new kinds of audio-visual content, with a particular focus on immersion and interactivity.
