D.I.Y.

Exploring AI in Your Music Videos: Resistance is Futile [Misha Penton]

[EXCLUSIVE] Independent singer, composer, writer, and filmmaker Misha Penton explores the potential and magic of embracing AI when making a music video, and shares her own stunning results.

by Misha Penton, singer, composer, writer, and filmmaker

Music Video Magic

Creating music videos is a big part of how I share my music, whether as full-length song videos or micro-works for social media. I love that more musicians are pairing imagery with their music. When it comes to music video making, AI offers a kind of magic most artists have only dreamt about.

AI Hysteria

AI is a tool and an art form like any other, and musicians have a choice to reject or harness the technology. Synthesizer precursors date back to the early 20th century and, by the midcentury, were likely thought to herald the end of “real” music. The end is nigh!

I’m a singer and my goodness – have you heard some of the vocal samples from Spitfire alone? And the capability of harmonizers and autotune? Good grief. And yet, I record all my vocal tracks “by hand”, without harmonizers and without autotune. That’s part of my philosophy of creating what I think of as finely crafted art music. But I record my voice and any analog instruments in Logic Pro and my music is replete with digital synthesizers and sound design. I choose how to use the technology.

The Opportunity of AI

In my new music video, “Earthshine”, I’ll show you how I used AI-generated clips as a filter on my original video. I filmed “Earthshine” myself in my living room with one light and against a silk backdrop (zany Misha Fact: I hand-dyed 40 yards of silk in my kitchen sink). I used my Canon EOS 5D Mark III, but I’ve shot a lot with my iPhone. So, please, please, please use your magic smartphones to make videos—they’re amazing.

This first image is a still from “Earthshine” with no color grading, no effects, nothing. It’s straight out of the camera.

This next still is the same image but with effects I created in Final Cut Pro X.

This final image below is what the film looks like after I ran my clip through Kaiber AI and created a composite with my original media. By composite, I mean I layered the AI-enhanced clip over my original and altered its opacity until I achieved a subtle animated effect.
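
If you like to script your edits, the same opacity-layering idea can be sketched in a few lines of Python. This is only an illustration using the open-source moviepy library (1.x API), not the Final Cut Pro X workflow described above, and the file names and opacity value are placeholders:

    # A minimal sketch of opacity-based compositing, assuming moviepy 1.x
    # (pip install moviepy). File names are placeholders, not the actual footage.
    from moviepy.editor import VideoFileClip, CompositeVideoClip

    base = VideoFileClip("earthshine_original.mp4")            # straight-out-of-camera clip
    overlay = (
        VideoFileClip("earthshine_kaiber.mp4", audio=False)    # AI-enhanced version of the same clip
        .resize(base.size)                                      # match the original's frame size
        .set_opacity(0.35)                                      # lower the opacity for a subtle animated effect
    )

    # Layer the AI clip over the original and render the composite.
    composite = CompositeVideoClip([base, overlay])
    composite.write_videofile("earthshine_composite.mp4", codec="libx264", audio_codec="aac")

Adjusting that opacity value is the scripted equivalent of dragging the opacity slider in an editor until the animated layer sits just under the live footage.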

Conceptually, the final video imagery creates a “me that is not me” as the real and enhanced versions of me fade in and out. My movements are mirrored in the tight and wide shots, and if you watch closely, you’ll see how the animated AI-Generated Me merges with and transforms the Real Me: I’m morphing in and out of versions of myself.

Communicating with The Machine

Here’s the thing—it is not easy to get these darn AI apps to do what you want, so the AI hysteria is unwarranted if you ask me. Natural language prompting, coupled with machine-learning tools like Kaiber AI, is not super-smart yet. For those of you who know what the singularity is, you can relax: it’s not happening—after all, what could go wrong?

I wanted Kaiber to animate my original video clips. I was able to do that by uploading my clips and being less descriptive in the prompt field, using phrases like “woman moving.” I also used the built-in watercolor effect, and I set the “evolve” (wildness) slider to 0. The process involved a lot of time-consuming trial and error, long online render and upscale times, and it’s not free, mind you. Even though there is a preview function, I tested shorter clips first (cheaper!) to get the desired effect before sending longer ones through The Machine. After all this work, I see that Kaiber recently launched a “transform an existing video” function, which is what I wanted to do to begin with—the tech is changing lightning-fast.
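
On that note, trimming a short excerpt before you upload is easy to automate. A minimal sketch, again assuming moviepy 1.x, with placeholder file names and durations:

    # Cut a short test excerpt to send through a paid render queue
    # before committing a full-length clip (assumes moviepy 1.x).
    from moviepy.editor import VideoFileClip

    full = VideoFileClip("earthshine_take3.mp4")
    test = full.subclip(0, 8)                      # first 8 seconds; adjust to taste
    test.write_videofile("earthshine_test_8s.mp4", codec="libx264", audio_codec="aac")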

There are a number of AI apps, and they do lots of other things, including Kaiber’s very cool feature of sending your music alone through the app and letting it generate an entire music video. I experimented with that, and the results were both outstandingly amazing and stupendously disappointing, but good God, man—the potential!

Make Great Music

When asked about AI replacing musicians, Warren Huart and other artists have said something like, “Be more creative.” I couldn’t agree more. Stop making generic music that is easily duplicated by a machine. But that’s a post for another time.

If we learn to use AI skillfully, as its capabilities grow, we’ll have a voice in its direction and implementation. We have access to more tech in our smartphones than the Beatles, Stanley Kubrick, and the early filmmaker genius George Méliès combined. Can we use that power for good? Can we use it creatively? Meaningfully? With great intent and artistry? I don’t know about you, but imma give it all I got.

Check out my new music video, “Earthshine,” with some beautiful AI enhancements.

Misha Penton is a singer, composer, writer, and filmmaker. Her projects blossom in many forms: live performances, audio projects, video works, site-specific installations, visual artworks, lyrics, and writings.
