
CONTEXT: The Future Of Music Streaming And Personalization?

By Alexandre Passant on APassant.net

With CES starting, I thought it would be a good time to reflect on the future of music streaming, and what’s needed to own the space. Not only because the conference holds a session on this very topic, but also because advances in Data Science, wearables, and context-aware computing could bring brand-new experiences regarding how we consume – and discover – music.

The need for discovery and personalisation

Besides a few exclusive artists, such as Thom Yorke on BandCamp or Metallica on Spotify, mainstream services (Deezer, Rdio, Rhapsody, iTunes Radio, Pandora, etc.) tend to have very similar catalogues. As music streaming tends to be a commodity, those services need to give users incentives to choose them over their competitors.

While one way to do so is to focus on Hi-Fi content (as done by Pono Music or Qobuz), another is to invest more time – on both product and R&D – in personalisation and discovery, in order to be ahead of the pack and own the space. That’s an obvious strategy, and a win-win-win for all parties involved:

  • For consumers, delighted when discovering new artists they’ll love, based on their past behaviours or the streaming habits of their friends; and when figuring out that those platforms really understand what they should listen to next;
  • For artists, escaping the long-tail and hence generating more streams, and a little revenue, but most importantly: having the opportunity to convert casual listeners into super-fans;
  • For streaming services, keeping existing users active and adding new ones; consequently gathering more data and analytics (plays, thumbs-up, social networks, etc.) and re-investing this data into new product features.

That being said, the music-tech consolidation that happened over the past few months is not surprising: Spotify + Echonest, Rdio + TasteMakerX, or Songza + Google, etc. Interestingly, they showcase different ways that music discovery can be done: “Big Data” (or, should I say, smart data) for the Echonest, social recommendations for TasteMakerX, or mood-based curation for Songza. But one approach doesn’t fit all, and they’re often combined: if you’re not convinced, look at your Spotify homepage and see the different ways you can discover music (“Top lists”, “Discover”, etc.).

Various ways to discover new music through Spotify

How hardware and context could help

Considering all those ways to discover music: what’s next? Well, probably a lot.

  • On the one hand, advances in large-scale infrastructures and AI now make it possible to run algorithms on billions of data points – combining existing techniques such as Collaborative Filtering or Natural Language Processing with new experiments in Deep Learning (see the toy sketch after this list);
  • On the other hand, social networks such as Twitter or Facebook provide a huge amount of signals to identify user tastes, correlations between artists, trends and predictions, and more – which could go further than discovery by creating communities through music-based user profiling.
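
To make the collaborative-filtering idea a bit more tangible, here is a toy sketch in Python. The artist names and play counts are invented, and real systems obviously run far more sophisticated pipelines over billions of data points – this only illustrates the “people who play A also play B” principle:

```python
# Toy item-item collaborative filtering: artists get recommended because
# users who play one also tend to play the other. All data here is made up.
import numpy as np

artists = ["Daft Punk", "Justice", "Metallica", "Slayer"]

# Rows = users, columns = artists, values = play counts.
plays = np.array([
    [42, 30,  0,  1],
    [35, 18,  2,  0],
    [ 0,  1, 50, 38],
    [ 1,  0, 44, 29],
], dtype=float)

# Cosine similarity between artist columns.
norms = np.linalg.norm(plays, axis=0)
similarity = (plays.T @ plays) / np.outer(norms, norms)

def recommend(seed, top_n=2):
    """Rank artists by similarity to the seed, excluding the seed itself."""
    i = artists.index(seed)
    ranked = np.argsort(similarity[i])[::-1]
    return [artists[j] for j in ranked if j != i][:top_n]

print(recommend("Daft Punk"))  # ['Justice', 'Metallica'] with this toy data
```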

But I think that the most exciting part resides in context-aware recommendations.
Remember Beats’ “The Sentence”? Or Spotify’s activity-based playlists (“Workout”, “Focus”, etc.)? Well, this is all good, but it requires manual input to understand users’ context, expectations, and short-term listening goals.

Generating music with Beats’ “The Sentence”

We can soon expect this context to be generated automatically for us, using the wide range of sensors we have in our cars, our houses, or on our bodies (from smartwatches to Nest appliances), and the information we have already provided to the other services we use daily.

Building the future of context-based music personalisation

What about a Spotify integration with Runkeeper that automatically detects when you’re in the last mile of your race, and plays “Harder Better Faster Stronger” to push you through? Or your car’s Rdio automatically playing your friends’ top tracks when you’re joining them at a party recorded in your Google Calendar? And, at that particular party, should Nest send signals to Songza / YouTube to play some funky music when things are calming down and there’s no more energy in the room?
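
None of this exists as an off-the-shelf integration today, but the glue layer is easy to imagine. Here is a hypothetical Python sketch of a bridge that takes context events from sensors or fitness apps and turns them into playback instructions – every source name, signal, and action below is invented for illustration, not taken from any real API:

```python
# A hypothetical bridge between context sources and a streaming client.
# None of these sources, signals, or actions map to real APIs; they only
# illustrate the "context event in, playback instruction out" pattern.
from dataclasses import dataclass

@dataclass
class ContextEvent:
    source: str   # e.g. "runkeeper", "nest", "calendar" (illustrative names)
    signal: str   # e.g. "last_mile", "room_energy_low"
    payload: dict

def choose_playback(event: ContextEvent) -> dict:
    """Map a context event to a playback instruction (toy rules only)."""
    if event.source == "runkeeper" and event.signal == "last_mile":
        return {"action": "play_track", "query": "Harder Better Faster Stronger"}
    if event.source == "calendar" and event.signal == "arriving_at_party":
        return {"action": "play_playlist", "query": "friends' top tracks"}
    if event.source == "nest" and event.signal == "room_energy_low":
        return {"action": "play_playlist", "query": "funk classics"}
    return {"action": "keep_playing"}

event = ContextEvent("runkeeper", "last_mile", {"distance_km": 9.2})
print(choose_playback(event))  # {'action': 'play_track', 'query': 'Harder Better Faster Stronger'}
```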

This obviously requires some work to make those services talk intelligently to each other. But we’re already there, with the growth of APIs on various fronts (music, devices, fitness, etc.), and standards such as schema.org – especially its actions module. CES will be a perfect time for wearable manufacturers, streaming services, and data providers to announce some ground-breaking partnerships, putting context as a first-class citizen of music discovery and personalisation. Let’s wait and see!
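
To close on something concrete: here is roughly what the schema.org actions markup mentioned above could look like for a single track, written as a Python dict serialised to JSON-LD. The URL template and service name are invented for illustration; only the @type values come from the schema.org vocabulary.

```python
# A MusicRecording advertising a ListenAction that a context-aware client
# could invoke. The URL template and service name below are made up.
import json

recording = {
    "@context": "https://schema.org",
    "@type": "MusicRecording",
    "name": "Harder Better Faster Stronger",
    "byArtist": {"@type": "MusicGroup", "name": "Daft Punk"},
    "potentialAction": {
        "@type": "ListenAction",
        "target": {
            "@type": "EntryPoint",
            # Hypothetical deep link a device or app could call to start playback.
            "urlTemplate": "https://music.example.com/play?track={track_id}",
        },
    },
}

print(json.dumps(recording, indent=2))
```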
