IRFS Weeknotes #265

Singing With Machines, personalised radio user testing, and a new real-time pipeline for speech-to-text processing.

Published: 25 April 2018
  • Tim Cowlishaw, Senior Software Engineer

Normally when it's my turn to do the weeknotes I spend a bit of time discussing the work of the Discovery team in more detail, but this week I'm going to break with that tradition. It's not because there's nothing interesting to report from the world of discovery (spoilers: there's loads), but because I've also been involved in another rather exciting project which came to fruition this week after quite a lot of planning.

'Singing with Machines', our musical spinoff from the 'Talking with Machines' project, began with a conversation between Henry and me back in mid-December, in which we idly wondered what we could do with voice devices to complement some of the 91热爆's more experimental music programming.

This idea was based on two observations. Firstly, we supposed that reasonably priced, always-on devices with a network connection, some compute power, a decent speaker and basic interaction capabilities could offer a really interesting medium for making interactive sound experiences more mainstream. Secondly, we thought that using voice platforms for artistic work might offer an interesting way of critiquing the politics of these platforms - in particular with regard to surveillance and bias.

Several people around the 91热爆 and elsewhere picked up on this idea with enthusiasm, and so we began experimenting with some prototypes, both of which generated a lot of interest. We started wondering, though - how would musicians and artists who aren't necessarily immersed in this technology approach the idea of creating work for it? We'd been talking to some radio producers about the possibilities of the technology, who were interested in principle in the idea of a collaboration with an established musician, but we didn't feel we yet had a good enough way of demonstrating the ways in which the technology could be used musically.

Therefore we decided to follow a process that's worked very well for us on other projects to generate some more concrete ideas of what this project could become. We decided to seek out a diverse bunch of musical collaborators and work with them to rapidly produce sketches which would express an idea or intent in the simplest way possible. From there, we could get an idea of which directions seemed most fruitful and worth expanding on.

As a result, back in February, we started talking to four artists with very different interests and practices, all of whom work with technology in interesting ways, about working with us to produce some music or sound art on the Amazon Alexa platform. We decided to run a one-day workshop in which each musician would collaborate with a technologist from our team to produce something audible involving a connected speaker. This would serve as both a test of the technology and a way of quickly generating a variety of more concrete demonstrations of its use for music, which we could then use to sell the idea to our colleagues from radio.
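
For a flavour of what 'something audible involving a connected speaker' looks like at the code level, here's a minimal sketch of the sort of custom Alexa skill handler we started the workshop from. It's illustrative rather than any of the actual pieces: the handler is a bare AWS Lambda function, and the audio URL is a placeholder (Alexa requires clips to be hosted over HTTPS).

```python
# Illustrative only: a bare-bones AWS Lambda handler for a custom Alexa
# skill that answers any request by playing a short audio clip via SSML.
# The clip URL is a placeholder, not a real asset.

def lambda_handler(event, context):
    request_type = event["request"]["type"]

    if request_type in ("LaunchRequest", "IntentRequest"):
        ssml = (
            "<speak>"
            "Here is a sound."
            "<audio src='https://example.com/audio/clip.mp3'/>"
            "</speak>"
        )
    else:  # e.g. SessionEndedRequest
        ssml = "<speak>Goodbye.</speak>"

    # Alexa custom skills expect this response envelope.
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "SSML", "ssml": ssml},
            "shouldEndSession": True,
        },
    }
```

Everything more ambitious sits on top of this same request-and-response loop, which is part of what makes the platform so quick to sketch with.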

It turned out that organising this workshop wasn't as simple as it sounds (making sure all the rights are correctly organised in any new commission is difficult enough - doing the same for an entirely new medium is another matter!), but finally, on Friday the 6th of April, we all met at Newspeak House to spend the day exploring how we could use smart speakers to create, perform and distribute music in new and interesting ways!

We've got some great AV footage of the day which I'll be able to share very soon, and at that point I'll write in more detail about what we created and what we learned. In the meantime, I can say that we ended up with six very different bits of work, each fascinating in its own way, and each with the potential to be taken forward into a fully-formed piece.

What was most surprising to us, however, was the way in which this process offered insights into the possibilities and limitations of the technology that are applicable to our other, non-musical work on smart speaker platforms. By working with people who wouldn't normally work with these technologies, and who look at them differently from us, we can more quickly discover both the opportunities they offer and the challenges which can arise in working with them. This in itself is a great benefit of working with a diverse bunch of people, and of allowing time to creatively explore technology without a specific end in mind - we'll be doing more of this in the future, for sure.

Elsewhere in the team:

The team are planning a hackday and refining their prototypes for evaluation in May.

Chris has been working on updates to the Media Source Extensions standard, and met with the Media Services and Media Player teams in the 91热爆, as well as the R&D Object Based Media team, to discuss their needs from future web standards APIs.

Libby and Alicia have been doing further work on their prototypes, adding playlists to the 'children's' prototype and experimenting with adding a voice remote control.

Tristan's been planning the next NewNews sprint, working with the external agency who came in to talk at our team meeting a few months back.

Andrew and Joanna have finished their UX Research on our 'Orator' tool for creating voice experiences and are writing up a report.

Barbara and Joanna have been planning a new project on media in autonomous vehicles - going through the findings from their user research and planning directions for us to explore over the next three months.

Over in the Data team, Ben, Misa and Matt have been working on our new real-time pipeline for speech-to-text processing, and Denise has been working on a new person identifier which combines face recognition with metadata to identify unknown people.
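
As a rough illustration of the shape such a pipeline can take (a generic sketch, not our actual implementation - the transcribe function here is a stand-in for whichever speech recogniser is plugged in): audio arrives in fixed-size chunks, goes onto a queue, and a worker emits text as each chunk is processed, so transcripts appear while the audio is still coming in.

```python
# Generic shape of a real-time speech-to-text pipeline (illustrative
# sketch, not the team's implementation). Audio chunks flow through a
# queue to a transcription worker that emits text as each chunk is done.
import queue
import threading

audio_chunks = queue.Queue()  # holds raw audio chunks; None ends the stream

def transcribe(chunk):
    """Stand-in for a real speech recogniser (e.g. a streaming STT API)."""
    return "<transcript of %d bytes>" % len(chunk)

def worker():
    while True:
        chunk = audio_chunks.get()
        if chunk is None:  # sentinel: no more audio is coming
            break
        print(transcribe(chunk))  # downstream consumers would hang off here

t = threading.Thread(target=worker)
t.start()

# A capture loop would push fixed-size chunks as the audio arrives:
for fake_chunk in (b"\x00" * 3200, b"\x01" * 3200):
    audio_chunks.put(fake_chunk)
audio_chunks.put(None)
t.join()
```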

In the Disco team, David, Todd, Tim and Jakub have been conducting user testing on our personalised radio prototype, along with Rhiannon, who's joined us from UX&D for this part of the project. We've got a load of really encouraging findings which we're looking forward to sharing soon. Chris has been working on improving the classifiers in our Citron, Starfruit and Vox text-processing systems, and Kristine has been building a visualisation of historical music radio data, as a prelude to developing models of editorial decisions on music radio to drive personalised stations.

  • Internet Research and Future Services section

    The Internet Research and Future Services section is an interdisciplinary team of researchers, technologists, designers, and data scientists who carry out original research to solve problems for the 91热爆. Our work focuses on the intersection of audience needs and public service values, with digital media and machine learning. We develop research insights, prototypes and systems using experimental approaches and emerging technologies.
