Will we all be musical cyborgs one day? In this edition of Projecting Trends, Bas Grasmayer explores how music will likely fit into a transhumanist future, as technology's development continues to accelerate, bringing music along with it.
Guest post by Bas Grasmayer on Synchblog
Over the last 50 years, music has become increasingly personal. It shifted from the family piano to the bedroom record player, and then from carrying albums for your Walkman to a personal playlist on your smartphone.
The increased personalization and portability of music has given many people a utilitarian orientation towards music. When we need to focus, we tune into a playlist with light classical music and no vocals. When it’s the end of the week, we listen to more energetic music to get us into the mood for social events.
We have started using music to augment our everyday lives. The convenience and effectiveness of enhancing situations this way have increased tremendously in the era of smart devices and all-you-can-eat streaming services. Parallels can be found in unexpected places: personalized drugs, artificial intelligence, and the creation of extra senses through technology.
As technological development accelerates, it doesn’t seem unlikely that the next step in human evolution will be enabled by technology rather than aeons of natural selection. Since the invention of the computer, the trend has been to make it smaller and bring it closer to us: first on our desks, then in our pockets, now as wearables, with implantables as the next step.
Given developments in machine learning and artificial intelligence, we’ll likely see the day when we can create a link between our brains and interconnected supercomputers that each have a greater computational capacity than all of humanity combined.
There are many things that need to be figured out before then, besides the obvious privacy concerns of being completely linked in.
But this piece will focus on the path there - the developing trend, and music’s place in that development.
Smart drugs, productivity & music
In recent years, there’s been a spike in media reports about Silicon Valley professionals using smart drugs called nootropics to enhance their performance at work. To be labeled nootropics, these cognition-enhancing substances are supposed to be safe and non-addictive. They let people alter their chemical balance in order to perform better at specific tasks.
One method of achieving greater focus through music is so-called binaural beats. They provide listeners with an auditory illusion by playing a slightly different frequency to each ear. Because of this slight difference, the listener perceives a third tone, the binaural beat, which doesn’t physically exist. This perceived frequency is designed to match the frequency of brainwaves, and listening to binaural beats has been shown to reduce patients’ anxiety prior to an operation.
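The mechanism is simple enough to sketch in a few lines of code. The snippet below (a minimal illustration, not part of any product mentioned here) generates a stereo signal whose left and right channels differ by 6 Hz; the specific frequencies are my own illustrative choices, not values from the study cited above.

```python
import numpy as np

def binaural_beat(base_hz=440.0, beat_hz=6.0, seconds=2.0, rate=44100):
    """Generate a stereo signal whose two channels differ by `beat_hz`.

    The left ear hears `base_hz`, the right ear `base_hz + beat_hz`.
    The brain perceives the 6 Hz difference as a third, phantom tone:
    the binaural beat.
    """
    t = np.linspace(0, seconds, int(rate * seconds), endpoint=False)
    left = np.sin(2 * np.pi * base_hz * t)
    right = np.sin(2 * np.pi * (base_hz + beat_hz) * t)
    return np.stack([left, right], axis=1)  # shape: (samples, 2)

stereo = binaural_beat()
print(stereo.shape)  # (88200, 2)
```

Written to a WAV file and played over headphones (never speakers, since the illusion depends on each ear receiving its own channel), this produces the slow pulsing effect described above.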
Increasing music’s personalization through AI
Then there are adaptive music apps that feed back your environment’s sounds into the music, as discussed in a recent Projecting Trends piece.
Through machine learning, these algorithms can get smarter over time. Take Google Music’s recent update, for example. Their goal is to deliver the most relevant soundtrack for each moment. They do this by collecting a lot of data about their users and interpreting it to make guesses about what a user is up to. The user’s interaction with the resulting playlists (or lack of it) generates further data and may reveal when an assumption was wrong. Through machine learning, or what’s sometimes referred to as artificial intelligence, the algorithms can learn from this feedback and improve themselves.
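To make the feedback loop concrete, here is a deliberately tiny sketch of the idea. A real service uses far more sophisticated models; this hypothetical example just nudges a playlist's relevance score toward what the listener actually did.

```python
def update_score(score, listened, lr=0.1):
    """Nudge a playlist's relevance score toward observed feedback.

    `listened` is 1.0 if the user played the recommendation through,
    0.0 if they skipped it. An exponential moving average stands in
    for the far more complex models a real service would use.
    """
    return score + lr * (listened - score)

score = 0.5
for feedback in [1.0, 1.0, 0.0, 1.0]:  # listen, listen, skip, listen
    score = update_score(score, feedback)
print(round(score, 3))  # 0.582
```

Each interaction shifts the estimate a little, so a playlist that keeps getting skipped gradually falls out of rotation while one that keeps getting played rises. That, in miniature, is the self-improvement described above.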
This means that over time, Google Music will get better at recommending the right music. And if we’re going into a future where we use music to augment our experiences and have our own personal soundtracks, then these algorithms will get increasingly adept at composing exactly the right soundtracks to boost our performance. In that sense, music may function as a type of precision medicine.
Developing new senses
While we’re born with a certain number of senses, new ones can already be developed. For instance, some people have chosen to get magnetic implants in their fingertips in order to become aware of the magnetic fields caused by electricity. “In time, bits of my laptop became familiar as tingles and buzzes.”

Colour-blind artist Neil Harbisson worked with scientists to develop his own sense: the ability to hear colours. He has an antenna on his head with a camera, connected to a chip that creates sounds based on the camera’s input. While he sees the world in grey tones, the information of colours is translated for him into sound frequencies.
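The colour-to-sound idea can be sketched as a simple mapping from hue to pitch. Note that this is a hypothetical illustration with made-up frequency bounds; Harbisson's actual device uses its own calibrated scale, which this does not reproduce.

```python
import colorsys

def colour_to_frequency(r, g, b, low_hz=200.0, high_hz=800.0):
    """Map a colour's hue onto an audible frequency range.

    Illustrative mapping only: hue 0.0 (red) -> low_hz, and the hue
    wheel is stretched linearly up to high_hz. The bounds are
    arbitrary choices for this sketch.
    """
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return low_hz + hue * (high_hz - low_hz)

print(round(colour_to_frequency(255, 0, 0)))  # red -> 200
```

Feed the function a camera's pixel values and send the result to an oscillator, and you have the skeleton of a colour-hearing prosthesis: greyscale vision in, pitch information out.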
Having technology implanted may seem extreme, but it’s more common than you’d think: consider the number of people with pacemakers or pressure sensors. In one case, researchers found a way to recharge the battery of implanted pressure sensors using the vibrations caused by the low frequencies in rap music.
Artists are often among the first to play with new technologies and see what they can do with them. As new technologies are developed, new interfaces are explored, like the Mimu gloves. In designing instruments, one always has to consider the human body and its limits.
This means that as it gradually becomes more normal to integrate technology with our bodies, so will it become more normal to be able to interact with instruments through this embedded technology.
So, will we all be musical cyborgs one day? It’s hard to say what the future will actually look like.
But this much is certain: scientists and artists both have an important role to play in shaping it.
Writing process augmented by this Spotify playlist.