Music Business

5 Music Startup Trends To Follow In 2019 [Cherie Hu]

In this piece, Cherie Hu looks at five observable trends in the world of music-industry tech startups, and what these trends suggest about the future of the music biz and the ways in which tech companies and their investors understand and relate to the music business.

___________________________

Guest post by Cherie Hu of GetRevue

As many of you may know firsthand, so much of growth and success in the entertainment business is about the right timing. 
I am 100% a beneficiary of timing. I feel extremely fortunate to have started writing about the intersection of music and technology just as those two worlds were starting to warm up to each other again, with respect to partnerships, investments and deals—as opposed to the early 2000s, when the music business was highly critical of Napster and other technologies, treating them largely as scapegoats for their own problems.
My first-ever article about music/tech was published in Forbes on November 5, 2015. In just the ten months leading up to that date,
  1. Spotify had unveiled Discover Weekly, changing how we understand music recommendation and discovery;
  2. Apple Music, Tidal and YouTube Music had all launched, crowding the streaming subscription landscape even further;
  3. Pandora had acquired both Next Big Sound and Ticketfly, purportedly making the case for a digital music ecosystem that merged live and streaming data (although both Pandora and Ticketfly ultimately faced much different fates), and
  4. Two brand-new accelerators and incubators—the Nashville Entrepreneur Center’s Project Music and the historic Abbey Road Studios’ Abbey Road Red—had launched to the public, fostering new infrastructure for bridging the gap between local music and tech communities.
Fast-forward four years, and the momentum around music and tech only seems to be accelerating. Techstars Music just launched its third annual accelerator program, Warner Music Group is actively investing in seed-stage startups through its new fund WMG Boost, and Scooter Braun and Zach Katz just announced their new music-tech investment group Raised In Space, which will invest anywhere from $500,000 to $5 million in startups solving problems for the music industry.
 
Since we’re still relatively early in 2019, I thought now would be a good time to examine where the music-tech sector might be headed this year—through the prism of Techstars Music, Abbey Road Red and similar startup programs, alongside what is being covered in the media.
 
Below are five trends I see among these programs’ cohorts, and what they reveal about the way tech founders and investors understand the music industry—and, in turn, how artists and music companies could respond. Not all of these trends are “new” per se, but music and tech companies are only now grappling with their potential impact. Again, it’s all about timing.
 
 
1. Democratizing recorded-music advances for unsigned artists.
 
Some of the most crucial startups for the future of the music industry, in terms of business- rather than experience-level innovation, treat artists as their core customers.
 
Across genres and geographies, all of the problems that artists have can arguably be boiled down to two questions: How do I connect better with my fans, and how do I make money to sustain my career?
 
Major labels have gotten a lot of flak over their perceived inefficacy in helping artists solve the money problem. Yet when it comes to financing, they still have one key differentiating factor going for them: they remain among the only companies willing to take the enormous risk of handing over one million dollars in upfront cash to an artist they believe in.
 
Streaming services like Spotify and NetEase are also beginning to dole out licensing advances to select artists and managers, but the practice remains far from democratized. Without a significant playlist placement or marketing push, it can take unsigned artists as long as two years to recoup on recording and marketing expenses for a single track from streaming income alone, hindering their ability to invest in more ambitious projects in the absence of another revenue stream.
 
A growing number of startups are trying to change these dynamics by using predictive analytics to democratize royalty advances. For example, The Music Fund—part of this year’s Techstars Music cohort—is building a financial service that will offer artists upfront cash advances for a limited portion of their future back-catalog royalties. Importantly, artists will have control over the percentage and time period (e.g. “I want an advance for 50% of my royalties over the next two years”), and the company calculates its offer after the fact, without claiming any copyright ownership. 
 
The idea is also gaining momentum elsewhere: Open On Sunday claims to offer cash advances on future catalog royalties not only to artists, but also to independent labels and publishers. And last week, free music-distribution platform Amuse launched its new Fast Forward feature, which similarly gives select artists the opportunity to collect their projected royalties as a lump sum up to six months in advance. Amuse takes a small fee off the top, which is determined algorithmically for each individual artist based on perceived risk.
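To make the underlying arithmetic concrete, here is a minimal, purely hypothetical sketch of how an upfront offer might be computed from projected royalties and a risk-based fee. The function names, the flat decay assumption and the fee curve are my own illustrative choices, not how The Music Fund, Open On Sunday or Amuse actually price their advances.

```python
def project_royalties(trailing_12mo: float, years: int,
                      annual_decay: float = 0.15) -> float:
    """Project future catalog royalties by assuming last year's income
    shrinks by a fixed percentage each subsequent year (illustrative only)."""
    return sum(trailing_12mo * (1 - annual_decay) ** y for y in range(years))


def advance_offer(trailing_12mo: float, share: float, years: int,
                  risk_score: float) -> float:
    """Return a hypothetical lump sum offered today for `share` of the
    artist's royalties over `years`, discounted by a fee that grows with
    perceived risk (risk_score from 0.0 = very safe to 1.0 = very risky)."""
    projected = project_royalties(trailing_12mo, years)
    fee_rate = 0.05 + 0.25 * risk_score   # e.g. a 5%-30% fee, chosen arbitrarily
    return projected * share * (1 - fee_rate)


# "I want an advance for 50% of my royalties over the next two years"
print(round(advance_offer(trailing_12mo=20_000, share=0.5,
                          years=2, risk_score=0.4), 2))   # -> 15725.0
```

Under those toy assumptions, an artist earning $20,000 a year from their catalog would be offered roughly $15,700 today rather than waiting two years to collect the same share.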
 
As the technology is still early, I personally remain on the fence about whether these initiatives will actually be more advantageous for artists. Nonetheless, I hope this trend encourages more open conversations this year about alternative financing models and resources for the recorded-music sector, beyond the normal streaming payouts and major-label advances we read about in the press. 
 
 
2. Strengthening the bonds between music and health (e.g. fitness, biometrics, therapy).
 
It is no coincidence that indoor cycling company Peloton, a $4 billion business that reportedly has more U.S. customers than SoulCycle, is also now a co-investor in this year’s Techstars Music class.
 
As I argued last year, fitness and wellness apps have become among the hottest marketing platforms for music. When executed correctly, these partnerships can exploit music’s utilitarian power as a mood and performance enhancer, while keeping the clarity of an artist’s vision, values and creative process at the foreground.
 
Music companies of different sizes will take on fitness for different strategic motives. For instance, Spotify has increasingly prioritized mood-, activity- and context-centric playlists in its homepage curation, and has an active Premium bundle with meditation app Headspace in select European markets.
 
Both of these examples fit into Spotify’s wider attempt not just to capture a more casual, mass-market listener base, but also to position itself as a service that literally makes users healthier, physically and emotionally. As a group of media professors wrote in their new book Spotify Teardown (which I can’t recommend highly enough), “the selling point of Spotify is not necessarily music but music streaming framed as a deeply personal and intimate—even happiness-inducing—practice.”
 
Importantly, just as Spotify does not serve as a one-size-fits-all framework for artists’ business models, the company’s mass-market approach to curation and product bundling is far from a one-size-fits-all solution for maximizing the relationship between music and health.
 
As long as Spotify continues to chase the mainstream as a public company, I think there will be an opportunity for smaller startups to tackle more niche health problems and use cases that the streaming behemoth either is overlooking or has no incentive to build. One such example is Mila, a startup in this year’s Techstars Music cohort that is gamifying and digitizing music therapy for children with neurodevelopmental disorders.
 
 
3. Rethinking the music-video experience with more interactivity.
 
For years, entertainment companies have tried to cement choose-your-own-adventure storytelling into digital culture. Early results were a bit clunky and fell short of providing genuine added value for fans or businesses. But we now live in a world where streaming platforms themselves have budgets equal to (if not bigger than) those of traditional studios and labels to invest in creative projects—not to mention the technological infrastructure to pull off non-traditional storytelling formats in a more convincing manner.
 
Netflix’s Bandersnatch, with its five endings and a trillion possible story combinations, sparked my imagination in terms of rethinking the cultural and commercial role of soundtracks in the music business, as well as the rising trend of giving fans a more involved stake in artists’ narratives and creative outcomes.
 
Some up-and-coming startups are trying to build platforms that bring a choose-your-own-adventure level of interactivity and agency to smaller-scale, shorter-form visual content for music. Eko has been developing interactive music videos since 2010, and currently has a long-term content partnership with Warner.
 
Techstars Music startup Rhinobird is taking a slightly different approach: rather than making a music video’s content interactive in itself, the company is testing a playlist-like format known as a “reel”—allowing users to build their own compilations of YouTube videos around a particular topic, which others can scroll through within the same rectangular frame, in a tabbed format similar to Snap and Instagram Stories.
 
For instance, there is a reel on Rhinobird’s website that compiles multiple videos related to the making of DJ Khaled’s song “I’m The One,” including behind-the-scenes studio footage, media interviews and Snap stories from DJ Khaled himself in addition to the official music video.
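As a rough illustration of the underlying idea, and purely my own guess at the data model rather than anything Rhinobird has published, a reel can be thought of as an ordered list of clips from different sources grouped under a single topic:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class Clip:
    """One tab in the reel: a single YouTube video plus a label for context."""
    youtube_id: str   # the ID portion of a YouTube URL (placeholders below)
    label: str        # e.g. "official video", "studio footage", "interview"


@dataclass
class Reel:
    """A playlist-like compilation of clips around one topic, scrolled through
    in a tabbed frame much like Snap or Instagram Stories."""
    topic: str
    clips: List[Clip] = field(default_factory=list)

    def add(self, youtube_id: str, label: str) -> None:
        self.clips.append(Clip(youtube_id, label))


# A hypothetical reel mirroring the DJ Khaled example described above
reel = Reel(topic="The making of DJ Khaled's 'I'm The One'")
reel.add("placeholder_id_1", "official music video")
reel.add("placeholder_id_2", "behind-the-scenes studio footage")
reel.add("placeholder_id_3", "media interview")
reel.add("placeholder_id_4", "Snap story from DJ Khaled")
```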
 
While these examples present a compelling user experience, I’m still on the fence about whether they can sustain themselves as viable standalone businesses. It’s only a matter of time before a market leader like YouTube, Snap or Instagram either acquires one of these startups or incorporates similar technology into its own video formats.
 
On the other hand, there are startups like collaborative songwriting platform Hookist (of Project Music) that are giving fans a stake in the outcome of a musical project as it is actually being created in real time, instead of simply scrolling through already-finished products. Incorporating that level of intimate collaboration into visual media may be a more technologically daunting undertaking—but, as the next two sections also suggest, it’s far from unrealistic.
 
 
4. Digital avatars and “virtual influencers.”
 
Younger generations of music and entertainment fans growing up today have no firsthand sense of any historical dichotomy between a “pre-digital” and “post-digital” lifestyle. Yet that same dichotomy permeates so many discussions I’ve heard at conferences and in the press about the future of music and tech.
 
Yes, a knowledge of the industry’s history is essential for ensuring that we don’t repeat our past mistakes with respect to innovation. But some of the most cutting-edge media technologies being developed today render any difference between pre- and post-digital consumer behavior irrelevant, if not nonexistent.
 
For the millions of fans playing Fortnite every single day—and who watched Marshmello’s groundbreaking concert within the game—digital relationships and interactions don’t just feel “as real” as the real thing; they are the real thing altogether.
 
Artists and music companies would do well to follow in Marshmello’s footsteps and create experiences for these types of consumers—part of which would involve developing convincing live and meet-and-greet experiences in contained virtual environments.
 
One approach that several celebrities are taking today is to build animated clones of themselves for inclusion in games and social media. As revealed at the end of Netflix documentary The American Meme, Paris Hilton and other celebrities have already digitized their bodies using motion-capture tech for use in social VR platform Staramba. Childish Gambino also launched the first three-dimensional, artist-branded Playmoji that can be incorporated into photos and videos using Google’s Pixel camera (somewhat similar to Snap’s AR lenses).
 
Importantly, these avatars don’t even have to be connected to “real-world,” already-existing human personalities. The enduring popularity of Lil Miquela has ushered in a new kind of “virtual influencer” economy where digital avatars alone—with the help of human software developers and designers on the backend, of course—can monetize their followings and command similar-size brand deals, with much less overhead. Human influencers and celebrities are also understood to have a limited shelf life with respect to their popularity, whereas virtual IP can technically live on in perpetuity.
 
Techstars Music startup Replica Studios is taking the digital-avatar concept one step further by tackling artificial voice synthesis—providing visual artists and filmmakers of all levels with “access to millions of ‘Replica’ voice actors on demand,” without the need to hire their “real-life” human analogs. According to Replica’s website, these synthetic voices can be tailored according to factors such as the customer’s desired emotion, style and language.
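To picture what “tailored according to emotion, style and language” might look like in practice, here is a purely illustrative sketch of a synthesis request. This is not Replica’s actual API; every field name and value below is a made-up assumption.

```python
import json


def build_synthesis_request(script: str, voice: str, emotion: str,
                            style: str, language: str) -> str:
    """Serialize a hypothetical text-to-speech job description to JSON,
    exposing the kinds of knobs described above: emotion, style and language
    for a given line of script, rendered by a chosen synthetic voice actor."""
    return json.dumps({
        "script": script,
        "voice": voice,        # made-up identifier for a synthetic voice actor
        "emotion": emotion,    # e.g. "warm", "urgent", "deadpan"
        "style": style,        # e.g. "documentary", "game NPC", "movie trailer"
        "language": language,  # e.g. "en-US", "es-MX"
    }, indent=2)


print(build_synthesis_request(
    script="Welcome back to the studio tour.",
    voice="narrator-1", emotion="warm", style="documentary", language="en-US"))
```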
 
I think this is an especially smart angle to pursue with the rise of voice-activated speakers, which has already generated demand at all levels of the music industry for more conversational content around artists’ catalogs—and, ironically, has also contributed to growing demand for the much less technologically innovative format of podcasts.
 
 
5. Creative AI and “synthetic media.”
 
In addition to catering to the rising world of digital avatars and influencers, Replica Studios is also implementing a practical use case for what startup studio Betaworks has called “synthetic media,” or algorithmically-generated text, audio, images and video.
 
Perhaps the most talked-about example of synthetic media in the music industry is AI-generated songwriting, the media coverage of which is already getting pretty saturated (yes, I am complicit). That said, momentum isn’t slowing down, and music accelerators can receive at least partial credit for building up the buzz. Techstars Music included two startups from the field (Amper and Popgun) in its inaugural cohort, Humtap just graduated from Abbey Road Red, and retired Siri co-founder Tom Gruber is now working on his own AI music startup LifeScore with Abbey Road as well.
 
What may be under-reported is how applying the same algorithmic, generative technology to visual assets will change the music business.
 
For all the political dystopia conjured up around deepfakes, there has been relatively little public consideration of how the technology could be framed and productized to more positive ends. Techstars Music’s Xpression, which is developing an app that allows users to superimpose their own facial expressions onto anyone’s face through just a selfie camera, is one potential example.
 
Charli XCX and Troye Sivan also partnered with creative studio Pomp&Clout to use deepfakes in the music video for the duo’s single “1999,” making them look like members of the Spice Girls and the Backstreet Boys or like the cast of Titanic. Interestingly, the embrace of deepfakes brought the team both aesthetic and practical value, as they had limited time to put the video together.
 
That said, there is valid cause for concern about the destructive effects of synthetic media—particularly along the axis of what Warner Music’s Head of Innovation and SVP of Business Development Jeff Bronikowski recently called “defensibility,” or the ability of a company to defend its technology and customer base against competition from rivals.
 
Speaking at the NY:LON Connect conference in January, Bronikowski said that he “can imagine what we see in AI today is going to be an eighth-grade computer-science project using Google AI Cloud in a few years time.”
 
Assuming both deepfakes and voice-synthesis technology will be made cheaper and more accessible to the wider public, there will hypothetically come a point at which anyone can create a virtual avatar of their favorite singer from scratch, assign that avatar a soundalike voice based on pre-existing audio recordings of the singer and then make that avatar speak and move however the user chooses, without prior permission.
 
As a journalist, I am experiencing this algorithmic encroachment on my own profession as well: nonprofit research organization OpenAI just published a paper sharing the staggering results of a new, unsupervised language model that can generate coherent, realistic news articles and prose with just a one-sentence prompt. Over the next decade, I think the following question will become more and more important for people to ask themselves, both philosophically and commercially: How do I convince my audience that I’m not a robot? And, perhaps: Will that even matter?
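(For anyone curious to see that effect firsthand: OpenAI open-sourced a smaller version of the model, and prompt-based generation can be reproduced in a few lines. The sketch below uses the Hugging Face transformers library, which is not mentioned in the piece; the prompt and sampling settings are arbitrary.)

```python
# pip install transformers torch
from transformers import pipeline

# Load the publicly released small GPT-2 checkpoint (far weaker than the full
# model described in OpenAI's paper, but the same prompt-in, prose-out idea).
generator = pipeline("text-generation", model="gpt2")

prompt = "The music industry announced a surprising new partnership today:"
results = generator(prompt, max_length=80, do_sample=True, top_k=50,
                    num_return_sequences=1)

print(results[0]["generated_text"])
```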
