Music, AI & the Metaverse ‘offer limitless possibilities’
The potential for AI to disrupt music is the topic du jour, but add the Metaverse to the mix and the possibilities become even more mind-boggling.
Guest post by Eric Alexander of Soundscape VR
We’ve all dreamt at times of music collaborations that never were.
- What if Jimi Hendrix played a show with Pink Floyd alongside David Gilmour?
- What if Madonna and Michael Jackson teamed up to create the ultimate pop fusion EP?
- What if Skrillex produced a Garth Brooks track featuring a guitar-shredding Santana solo?
Over the last year, many have seen the power of text-to-image AI, technologies able to fuse words or concepts together into artistic masterpieces of gallery quality. But how long will it be until text-to-music AI exists, where you can type in your favorite artists, and within seconds it spits out not just a brand-new track but an entire album produced to your liking? Daft Punk remixing Styx’s Mr. Roboto might sound like science fiction, but it’s coming sooner than you think.
AI offers limitless possibilities to reshape the world as we know it. It’s ushering us into the future, much like the internet sculpted the dawn of the 21st century. Here at Groove Science Studios, we’ve been creating the future of music for nearly a decade. Our Musical Metaverse, Soundscape, is about doing the impossible when it comes to experiencing live music. We are working on all kinds of technologies to power the next generation of concerts, and AI is at the forefront of them all.
In 2017 we started work on Sonic AI, an advanced music AI that serves as the foundation of the Soundscape Musical Metaverse. Sonic AI is the analysis engine that drives Soundscape’s audio reactivity and enables users to play any music of their choosing across any music service provider, synchronizing the lights and visual effects to the audio in real time. As it processes this data, Sonic AI is in a constant state of modulation, riding the waves of the song as it adjusts hundreds of variables automatically on every output frame. Using the data from its four senses (Cerebra, Pulsa, Spectra, and Rhythma), Sonic AI can listen to and analyze different parts of the song, building a complex data model profiling its spectral, harmonic, and melodic information. It’s an entire production crew in one algorithm, offering infinite flexibility & customization of the audiovisual experience.
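To make the idea of audio reactivity concrete, here is a minimal sketch of how a system in this spirit might work. This is not Sonic AI's actual implementation; the band ranges, function names, and visual parameters are all illustrative assumptions, showing only the general pattern of splitting a spectrum into bands and mapping band energies to visual controls each frame.

```python
import numpy as np

def band_energies(frame, sample_rate,
                  bands=((20, 250), (250, 4000), (4000, 16000))):
    """Split one audio frame's spectrum into coarse band energies.

    `bands` are (low_hz, high_hz) ranges standing in for bass, mids,
    and highs -- purely illustrative, not Sonic AI's real profiling.
    """
    # Window the frame to reduce spectral leakage, then take the FFT.
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sample_rate)
    return [float(spectrum[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in bands]

def visual_params(frame, sample_rate):
    """Map band energies to hypothetical visual controls in [0, 1]."""
    bass, mids, highs = band_energies(frame, sample_rate)
    total = bass + mids + highs + 1e-9  # avoid division by zero on silence
    return {
        "light_intensity": bass / total,   # pulse the lights with the low end
        "particle_density": mids / total,  # fill the space on melodic content
        "strobe_rate": highs / total,      # flicker with hi-hats and cymbals
    }

# Example: a bass-heavy 60 Hz test tone should drive light_intensity up.
sr = 44100
t = np.arange(2048) / sr
frame = np.sin(2 * np.pi * 60 * t)
params = visual_params(frame, sr)
```

A real engine would run this kind of mapping on every output frame, smoothing the values over time so the visuals ride the song rather than jitter with it.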
What does this all really mean in practice though?
The outcome is that Sonic AI allows every user to realize their ideal listening experience in Soundscape. At a virtual show and not feeling the opening act your friend talked you into checking out? No worries, you can listen to whatever you want to hear, at any moment. It’s like being at the main stage of Coachella and being able to swap out a Frank Ocean trainwreck for a beautiful Beyonce track. It’s hopping into the sound booth and adjusting the sound & visual production to your personal liking.
Live music demands a performer, and that’s why so much of our development time at Soundscape has been dedicated to our Soundskin avatars, hyper-realistic, customizable full-body avatars you can step into within VR. It’s one thing to build a realistic-looking human performer; it’s another to bring them to life in a natural way through animation. AI offers incredible possibilities here as well, with models trained on recorded artist movement data able to autonomously drive an avatar during performances. In the metaverse concerts of the future, determining who is a real person and who is an AI-controlled avatar may be a real challenge.
We believe AI & the Metaverse offer limitless possibilities to reshape the world as we know it. Just as advances in speaker technology and Ableton’s innovative sampling and production tools powered the explosion of live music festival performances, AI will make its own mark by enabling a whole new form of musical creativity. Those who make use of these tools and develop the best workflows will thrive in the AI world as the next generation of headliners.
The day is coming when you hear a new song and can’t tell if it was produced by a human or an algorithm. Or you will know, because the Beatles haven’t released any new music in 53 years, yet you’re sitting at home listening to a playlist of hundreds of AI John Lennon masterpieces fresh to your ears. The future is now.
Eric Alexander is a metaverse innovator and the creator of Soundscape VR, the world’s longest-running VR music destination. Alexander has been exploring the intersection of art and technology for over 25 years, and in 2014 his passion for audiovisual arts led to the inception of his most ambitious project yet.