Helping machines play with programmes
As part of our work developing 91Èȱ¬ Programmes we have been looking at how we can make the data available to development teams outside the 91Èȱ¬, and at last week's XTech we presented a paper outlining our work to date and some of our future plans.
We have been following the Linked Data approach - namely, thinking of URIs as more than just locations for documents. Instead, we use them to identify anything, from a particular person to a particular programme. These resources in turn have representations, which can be machine-processable (through the use of RDF, Microformats, RDFa, etc.), and these representations can hold links to further web resources, allowing agents to jump from one dataset to another.
To date our work on Programmes has focused on providing persistent URLs to HTML documents for our primary objects: episodes, series and programme brands. However, we have also been looking at how we can make this data available to machines. So what does this look like?
For starters, the HTML is marked up with microformats to help machines identify schedules and cast and crew information - semantic web with a small 's', if you will. But more interesting is our work on alternate data serializations - big 'S' Semantic Web.
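To show the kind of thing "machine identify" means in practice, here is a minimal sketch of lifting people's names out of microformat-style markup. The HTML snippet and the vcard/fn class names follow the generic hCard pattern rather than the actual 91Èȱ¬ markup, and the presenter names are just example data.

```python
# Illustrative only: extracting hCard-style "fn" (formatted name)
# values from marked-up HTML using the standard library parser.
from html.parser import HTMLParser

SNIPPET = """
<ul>
  <li class="vcard"><span class="fn">Chris Moyles</span></li>
  <li class="vcard"><span class="fn">Jo Whiley</span></li>
</ul>
"""

class FnExtractor(HTMLParser):
    """Collects the text content of elements carrying class="fn"."""

    def __init__(self):
        super().__init__()
        self.in_fn = False
        self.names = []

    def handle_starttag(self, tag, attrs):
        classes = dict(attrs).get("class", "").split()
        if "fn" in classes:
            self.in_fn = True

    def handle_endtag(self, tag):
        self.in_fn = False

    def handle_data(self, data):
        if self.in_fn and data.strip():
            self.names.append(data.strip())

parser = FnExtractor()
parser.feed(SNIPPET)
print(parser.names)  # ['Chris Moyles', 'Jo Whiley']
```

A real consumer would fetch a schedule or episode page and run the same extraction over it; the point is that the class names give the scraper a stable, published contract rather than brittle screen-scraping.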
As discussed in our presentation, we have developed an ontology to describe programmes, and we are now working to make the data available in a variety of formats: XML, Atom, RSS 2, JSON, YAML and RDF. Currently, though, we only have XML, YAML and JSON views for schedules, i.e. the following URLs:
/:service/programmes/schedules/:outlet
/:service/programmes/schedules/{:outlet/}:year/:month/:day
/:service/programmes/schedules/{:outlet/}:year/:month/:day/ataglance
/:service/programmes/schedules/{:outlet/}:year/:week
/:service/programmes/schedules/{:outlet/}yesterday
/:service/programmes/schedules/{:outlet/}today
/:service/programmes/schedules/{:outlet/}tomorrow
/:service/programmes/schedules/{:outlet/}last_week
/:service/programmes/schedules/{:outlet/}this_week
/:service/programmes/schedules/{:outlet/}next_week
To access these, add .xml, .yaml or .json to the end of the URL. For example, the XML serialization for the Radio 1 schedule is:
/radio1/programmes/schedules.xml
The JSON serialization for today's Radio 4 schedule is:
/radio4/programmes/schedules/fm/today.json
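The route templates above compose mechanically: service, then the fixed programmes/schedules segments, then an optional outlet, then an optional date or keyword, then the format extension. A small sketch of that composition (the helper and its names are my own for illustration, not part of any 91Èȱ¬ library):

```python
# Sketch of building schedule URLs from the route templates above.
# The function and parameter names are illustrative assumptions.

BASE = "http://www.bbc.co.uk"

def schedule_url(service, outlet=None, path=None, fmt="xml"):
    """Build a /:service/programmes/schedules URL.

    service -- e.g. "radio1", "radio4"
    outlet  -- optional outlet such as "fm" (the {:outlet/} part)
    path    -- optional date or keyword segment, e.g. "today",
               "2008/05/14" or "this_week"
    fmt     -- serialization: "xml", "yaml" or "json"
    """
    parts = [BASE, service, "programmes", "schedules"]
    if outlet:
        parts.append(outlet)
    if path:
        parts.append(path)
    return "/".join(parts) + "." + fmt

# The two examples from the text:
print(schedule_url("radio1"))
# http://www.bbc.co.uk/radio1/programmes/schedules.xml
print(schedule_url("radio4", outlet="fm", path="today", fmt="json"))
# http://www.bbc.co.uk/radio4/programmes/schedules/fm/today.json
```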
We've also done some work on the RDF representation. This isn't live yet but can be accessed on a development server. We would love to hear what you think about what we've done before we make the service live.
I should also point out that all of this is licensed under the same terms as Backstage, i.e. you can only use these APIs for non-commercial purposes. To give you an idea of the sort of thing you could build, have a look at this screencast.
What you are looking at is a demo that takes programme data, described with the Programme Ontology, over XMPP and integrates it with desktop notifications, so that you are notified when a programme starts.
The first message displays the metadata coming over XMPP.
The second message grabs the brand's short synopsis from the 91Èȱ¬ Programmes linked data RDF.
The third message grabs the radio network's first air date from DBpedia.
What's also nice about this is the additional data being pulled in from DBpedia. The information about when the service started broadcasting is not coming from a 91Èȱ¬ database - it's coming from DBpedia. Because we are able to link the Service within the Programme Ontology to DBpedia, we can pull in additional data from an external source.
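The linking pattern described above can be sketched in a few lines: the programmes RDF asserts that a 91Èȱ¬ service is the same resource as a DBpedia one, and a consumer follows that link to fetch the external data. The RDF/XML below is a hand-written stand-in for the real 91Èȱ¬ data, kept just detailed enough to show the owl:sameAs idiom; the resource URIs are illustrative.

```python
# Sketch of following a link from a programmes RDF document out to
# DBpedia. The RDF/XML is an illustrative stand-in, not 91Èȱ¬ output.
import xml.etree.ElementTree as ET

RDF = """<?xml version="1.0"?>
<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
         xmlns:owl="http://www.w3.org/2002/07/owl#">
  <rdf:Description rdf:about="http://www.bbc.co.uk/radio4#service">
    <owl:sameAs rdf:resource="http://dbpedia.org/resource/91Èȱ¬_Radio_4"/>
  </rdf:Description>
</rdf:RDF>"""

NS = {
    "rdf": "http://www.w3.org/1999/02/22-rdf-syntax-ns#",
    "owl": "http://www.w3.org/2002/07/owl#",
}

root = ET.fromstring(RDF)
# Collect every owl:sameAs target; a real client would then
# dereference these URIs to fetch the DBpedia descriptions.
same_as = [
    link.get("{%s}resource" % NS["rdf"])
    for link in root.iter("{%s}sameAs" % NS["owl"])
]
print(same_as)  # ['http://dbpedia.org/resource/91Èȱ¬_Radio_4']
```

This is the whole trick behind the demo's third message: once the sameAs link is in the data, facts like a network's first air date can live in DBpedia rather than in a 91Èȱ¬ database.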
Anyway I hope you like what we've done - but in any case all comments are most welcome.
Tom has also written a review of XTech on the 91Èȱ¬ Internet blog.
Comment number 1.
At 14th May 2008, Tim Regan wrote: Great work, thanks. When you say "To date our work on Programmes has been focused on providing persistent URLs", how does that square with the seven-day availability of most of the content? "Persistent" often means longer than seven days. Many of the social computing applications that excite me (e.g. Find Listen Label) rely on the content's continued availability.