"The wired generation disrupted the old gatekeepers—record labels, radio programmers, physical retailers... generative AI represents is a further, more radical step: the disruption of creation itself."
By Kyle Bylin
1: From Making Music to Imagining Songs
Back in 2010, Chicago Tribune music critic Greg Kot chronicled in his book Ripped how the "wired generation" revolutionized the music industry. Nearly sixteen years later, college students do a lot more than download MP3 files and burn them to compact discs. Anyone can now create any song they can imagine using Suno or Udio, two of the leading GenAI music creation services. They can release their music on streaming services or put it out on TikTok, Instagram Reels, or YouTube Shorts.
A few of these songs have even gone viral and climbed the Billboard charts. Are these songs hits? Will any of these artists be able to tour or make money? The jury is still out. But the trend signals a shift. In less than two decades, music production has gone from a specialized craft, practiced by professionals with hard-won knowledge and skills in recording studios, to something anyone can do with a prompt on their phone. Anyone can now imagine any kind or genre of music—and generate it at will.
In the world of images, video, and podcasts, a similar trend is unfolding. Anyone can use ChatGPT, Gemini, or Grok—the leading large language models—to generate a seemingly endless stream of text and images. With Google's latest Gemini 2.5 Nano Banana model, anyone can generate realistic images or edit existing ones. It's a lot like having an Adobe Photoshop agent that can perform any task you can type or verbalize.
I have a colleague who is an art professor; he says he created thousands of images over several months, to the point where the contents of his imagination were exhausted. Similar things have happened with videos and podcasts. Using a tool like Midjourney, Adobe Firefly, or OpenAI's Sora, anyone can now generate videos of any place, time, or person they can imagine. Similarly, Google's NotebookLM can produce a podcast that sounds like two radio hosts discussing a library of research.
What Kot described in Ripped was not just a change in distribution, but a redistribution of power. The wired generation disrupted the old gatekeepers—record labels, radio programmers, physical retailers—by making music portable, replicable, and shareable at near-zero cost. What generative AI represents is a further, more radical step: the disruption of creation itself. The tools now intervene not merely after music is made, but at the very moment of imagination, composition, and execution.
This raises questions that are easy to overlook amid the novelty: If anyone can make anything instantly, what does it mean for something to matter? How does the power of human creativity and imagination factor into the intelligence age? Are these AI tools helping everyday people create "AI slop" or something that we've yet to understand or define fully? What new capabilities might this age of imagination unlock?
2: The Case for Artificial General Imagination
Since experiencing the latest wave of generative AI, I've been thinking about a concept called "artificial general imagination," or what could also be abbreviated AGI. It's a different way of reading the acronym: imagination in place of the intelligence that the "I" normally stands for.
What would it mean to declare that this kind of AGI has been achieved? And when would it be time to do so? I'm asking these questions because a handful of tech leaders and one major science publication have recently declared that AGI has arrived. NVIDIA CEO Jensen Huang said in a recent interview that AGI has been reached, and the venture capitalist Marc Andreessen of a16z echoed the claim on X, saying, "I'm calling it. AGI is already here — it's just not evenly distributed yet." You might retort that this is just CEOs saying things, but these claims came a few months after Nature, a leading international scientific journal, published a paper strongly arguing that AGI is here.
Of course, some detractors have pushed back against these claims, such as Gary Marcus, a cognitive scientist, author, and AI researcher who is known for challenging CEO-driven AI hype on X and in his Substack newsletter.
As such, I started to wonder what AGI might look and feel like. At what point could you argue that everyone has access to AGI, or what's commonly called artificial general intelligence? AGI is widely believed to be a next-generation, powerful form of generative AI that's either right around the corner, decades away, or science fiction. This powerful AI will be either humanity's greatest invention — if you believe the "optimists" and "liars" like OpenAI CEO Sam Altman and former Google CEO Eric Schmidt — or humanity's swift demise if you believe the so-called "doomers."
Over the Easter weekend, my dad and I saw a brand-new documentary called "The AI Doc: Or How I Became an Apocaloptimist," which featured Altman alongside a wide cast of some of the most famous optimists and doomers. The film, which screened in a limited number of theaters, stood out to me for ping-ponging between doomer and optimist scenarios before ultimately landing on a positive note. But the optimistic ending left me wanting more, because it didn't show viewers the real, tangible progress being made at frontier labs and technology companies. It didn't show how GenAI is already empowering everyday people to do unimaginable things.
Admittedly, I have a hard time with the larger AGI conversation right now, largely because we're mostly trying to measure AGI by so-called "model intelligence" and "task completion." I have a larger hypothesis about imagination, agency, and the execution of taste that's more aligned with how Silicon Valley thinks about AGI. If you read the latest pieces in The New Yorker and The New York Times, you'll know that "agency" and "taste" are now the most hyped words in the mouths of AI executives.
If you time-traveled back to 2002 (roughly twenty-five years ago, when I was just arriving in high school) and gave a person with taste and agency the AI services and features that exist now, they'd be unstoppable. Imagine giving a 25-year-old design student the Nano Banana image model and a modern MacBook Pro; they could generate any image and edit any photo. They'd be able to edit reality in a way that would feel like sorcery or magic. If you also gave that person ChatGPT Pro, the $200-a-month tier, they'd be the smartest person in any room — maybe even the world — and if you added Claude Code or Lovable, two leading AI-coding tools, they'd also be able to build any website on the web, potentially making themselves into millionaires.
This person would still need domain knowledge and skills to succeed in any given endeavor. Scarily, they'd also be able to hack any organization in the world, because their AI model would be the world's best hacker and security-system breaker, a capability that already exists today with the arrival of Anthropic's "Mythos" model. Reportedly, during model testing, Anthropic found that "Mythos Preview" can identify and then exploit "zero-day vulnerabilities in every major operating system and every major web browser when directed by a user to do so." If you're unfamiliar with the terminology, that means undiscovered bugs in all kinds of software can grant a competent hacker God-mode access to your servers.
In other words, superintelligent AI has also arrived in cybersecurity. It's already here. So, what about imagination? Where does humanity sit on this threshold?
3: The "Big Bang" of GenAI Creation
Research librarians are trained to answer patron questions that boil down to this: "What is currently known about this subject, and how do I get to it?" The goal is to help faculty and students find relevant, peer-reviewed articles for their research papers in the stacks and databases. One of the main challenges with GenAI is that it has been improving so rapidly that even books published two years ago feel outdated.
The same could be said about research papers, which often trail behind the latest model developments. As a result, the larger conversation unfolds on podcasts and YouTube channels. It dominates media headlines and the chatter on X.
As such, it can be challenging to make sense of this moment and its "AI hype," or to situate the generative AI revolution historically. The streaming revolution began when Rhapsody launched the first on-demand music service in 2001 and gained further momentum with the public launch of Spotify in 2008. A year before that, in 2007, Apple released the iPhone and revolutionized personal computing. Apps became the new container for software on mobile devices. Music could now be played and identified anywhere, at any time, and personalized to your taste and mood. Intelligence and imagination can now be accessed like on-demand songs.
Nearly fifteen years after Spotify's launch in the United States in 2011, music critic Liz Pelly published a highly critical book called Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist. It took that long to fully understand the impact of the streaming service on the music industry and society. Hundreds of books have already been published in the four years since OpenAI's ChatGPT chatbot exploded in online popularity in November 2022. It reached an estimated 100 million monthly users by January 2023—one of the fastest growth trajectories in history—making GenAI creation feel mass-market rather than experimental R&D. Since then, ChatGPT has grown to 900 million weekly active users. Each day, it processes over two billion questions.
By late 2025, Suno users were generating approximately 7 million AI-created songs per day; at that pace, roughly two weeks of output matches Spotify's entire catalog of about 100 million tracks, nearly the entire history of recorded music. Industry reports suggest the platform now supports tens of millions of active users globally, with roughly 2 million paying subscribers and more than 100 million people who have used the service at least once since launch. This surge has produced an unprecedented abundance of songs.
If you could ask the mathematician and codebreaker Alan Turing, who first posed the question "Can machines think?", he might say that a kind of musical Turing Test has already been passed. Less than a quarter century — just twenty-two years — after GarageBand made it possible for independent artists to record music from home, AI-generated songs have become effectively indistinguishable from those made by humans. A recent study by the streaming platform Deezer found that 97 percent of listeners cannot reliably distinguish AI-generated songs from human-made ones.
A similar dynamic is already visible beyond the creative arts. In a recent New York Times report, "The Big Bang: A.I. Has Created a Code Overload," one financial services firm working with a cybersecurity startup saw its team's output jump from roughly 25,000 lines of code per month to 250,000, almost overnight. The result was not effortless productivity but a new bottleneck: a backlog of more than one million lines of code waiting to be reviewed. The constraint had not disappeared; it had simply moved. Using AI chatbots to write code was no longer the hard part. Deciding what was correct, safe, and worth keeping became the challenge.
If the Information Age was about access to information, the Imagination Age is about what we do with it. Taken together, these shifts demand a rethinking of what education, creativity, and AI literacy are meant to cultivate—and they place university libraries at the center of that work. In this abundant era, GenAI has made production effortless and imagination seemingly infinite. As such, the most urgent literacies aren't about access or technical skills, but about taste, judgment, and agency. Research librarians have long helped their patrons navigate abundance by teaching them how to evaluate sources, situate knowledge, and participate in cultural conversations. AI literacy simply extends this mission from texts and databases to models, platforms, and imagination.
4: Artificial General Imagination in Action
Clay Shirky argued in his 2010 book Cognitive Surplus that the internet unlocked a vast cognitive surplus—millions of hours of free time and creative energy that could finally be coordinated outside of institutions. What generative AI reveals is something adjacent but more radical: an imaginative surplus. Tools like Suno and Midjourney do not merely publish and distribute what people have already made; they generate songs and videos when imagination strikes, which collapses the distance between envisioning and executing. The result is not just more participation, but a world in which creation is no longer scarce. The open-ended question is no longer why people create, but how meaning, taste, agency, and significance emerge when making something is effortless.
Everyday people can now create anything they can imagine. Intelligence is what allows this to happen, but it's not the most significant part. The imagined outcome of a GenAI project—the Midjourney music video, Suno song, or Lovable website—comes paired with the intelligence of a personal teaching coach. You can use ChatGPT to teach yourself audio-engineering tools and use Midjourney's video service to create a psychedelic music video. You can use Suno to create any genre of music you can imagine. This has started to flood TikTok with all kinds of music videos.
Steve Jobs once called the computer a "bicycle for the mind." Still, tools like Suno feel more like a recording studio for the imagination—systems that do not just help you think faster, but help you turn feeling, taste, and vision into finished songs. I believe music experience design now looks more like IDEO-style design thinking: a whole new way of prototyping songs and testing the market for hit velocity. Video experience design is one person hosting, generating, and editing entire segments just by talking to an agent. Lovable, Claude Code, and Base44 have turned user experience design into something more like imagination engineering. Using your voice to direct AI-coding tools and agents is wildly different from typing text into the command line.
It's a new world now for faculty and students. The challenge is that we're still talking about artificial general intelligence and not artificial general imagination. The network of GenAI tools and services already "feels like" AGI. Viewed against that twenty-five-year timeline, it amounts to superhero-level powers for a single person. Think about how much one person or a small team could have imagined and created over the past ten years. Millions of lines of code can now be written in three to four months.
The rub is the attention economy and the algorithmic feed. Whatever you imagine may not resonate with an audience or the algorithm. But consider the people who put in their ten thousand hours of deliberate practice and can create whatever they imagine. What does their progress look like in four years? How does it dovetail with the significant progress that will keep happening in frontier AI? That's why I believe it's important to discuss and understand both sides of the AGI argument and its metrics.
Through GenAI services, we've officially reached a world where anyone can create anything they can imagine. Perhaps the intelligence age will come to be better understood as the imagination age, because it empowers new builders and creators.