Youth, Artists Create Audio Visualizers with OpenAI GPT-4 Turbo

We learned how to build audio visualizers using GPT-4 Turbo and JavaScript. It was lots of fun!

A Journey into Musical Innovation and Audio-Reactive Art with OpenAI tools!

WINNIPEG, MANITOBA — Ever closed your eyes during a song and felt the music swirling into shapes and colors? We wanted to capture that feeling, that raw, synesthetic leap, and bring it into the tangible realm. Not just a simple visualizer, but a living, breathing response to sound. We wanted to make music visible.

It began with a question: what if we could coax the very essence of a song into a dance of light and form? We weren’t just coding; we were sculpting with sound, trying to translate the emotional weight of a melody into something you could see, feel, almost touch.

Check out a few of our “experiments” here, with Song 1, Song 2 and Song 3. You can also view the source code to see how to make your own.

Sound into Light: Visualizing Music, One Particle at a Time

This winter, our arts mentorship program had a lot of fun learning about different approaches to using artificial intelligence in the arts. From learning about prompt engineering with ChatGPT to building our very own AI image generators and even a virtual art gallery, it’s been a blast. But one area we’d always wanted to do more with is music, and we really had fun.

We started with a simple idea: each sound frequency becomes a particle, dancing to its own tune. We learned just enough basic code to guide the AI, and then, honestly, let GPT-4 Turbo do the heavy lifting. It was surprisingly fast, freeing us to focus on the art of it. We spent hours exploring different approaches to the visualizations and what they could look like, and it was so much fun!
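
For fellow tinkerers, here is a rough sketch of that core idea in JavaScript, assuming a page with an audio element and a canvas. The element IDs and structure are our own illustration; the code GPT-4 Turbo generated for us differed in the details.

```javascript
// Sketch: each frequency bin from the Web Audio analyser drives one "particle".
// Assumes <audio id="song"> and <canvas id="stage"> exist on the page.
const audio = document.getElementById('song');
const canvas = document.getElementById('stage');
const ctx = canvas.getContext('2d');

const audioCtx = new AudioContext();
const source = audioCtx.createMediaElementSource(audio);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 256;                      // 128 frequency bins = 128 particles
source.connect(analyser);
analyser.connect(audioCtx.destination);

const bins = new Uint8Array(analyser.frequencyBinCount);

function draw() {
  requestAnimationFrame(draw);
  analyser.getByteFrequencyData(bins);       // 0–255 loudness for each frequency

  ctx.fillStyle = 'rgba(0, 0, 0, 0.15)';     // translucent fill leaves soft trails
  ctx.fillRect(0, 0, canvas.width, canvas.height);

  bins.forEach((level, i) => {
    const x = (i / bins.length) * canvas.width;
    const y = canvas.height - (level / 255) * canvas.height;
    ctx.beginPath();
    ctx.arc(x, y, 3 + level / 50, 0, Math.PI * 2);   // louder = bigger particle
    ctx.fillStyle = `hsl(${(i / bins.length) * 360}, 80%, 60%)`;
    ctx.fill();
  });
}

audio.addEventListener('play', () => {
  audioCtx.resume();
  draw();
}, { once: true });
```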

We wanted to show young musicians that they could do this too: that the power to create isn’t locked behind years of coding experience, and that even artists who don’t code can use these kinds of tools.

Instead of wrestling with complex algorithms, we spent most of our time experimenting with color palettes. Deep blues for the bass, like a gentle heartbeat. Greens, flowing like a whispered story under the northern lights. Fiery reds for the high notes, a burst of pure energy. We added ethereal trails, like the lingering echo of a note, and tweaked the particle movements until they felt organic, alive.
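
If you’re curious how that palette idea translates into code, here’s a small sketch. The band boundaries and HSL values are illustrative choices, not the exact numbers from our experiments.

```javascript
// Sketch of the palette logic: low bins lean blue, mids lean green, highs lean red.
// The cut-off points and colours below are illustrative, not our exact values.
function colourFor(binIndex, totalBins, level) {
  const position = binIndex / totalBins;       // 0 = lowest frequency, 1 = highest
  const lightness = 40 + (level / 255) * 30;   // louder notes glow brighter

  if (position < 0.33) return `hsl(220, 80%, ${lightness}%)`;  // deep blues for bass
  if (position < 0.66) return `hsl(140, 70%, ${lightness}%)`;  // flowing greens for mids
  return `hsl(10, 90%, ${lightness}%)`;                        // fiery reds for highs
}

// The "ethereal trails" come from not fully clearing the canvas each frame:
// a low-alpha black rectangle lets the previous frame fade out gradually.
// ctx.fillStyle = 'rgba(0, 0, 0, 0.1)';
// ctx.fillRect(0, 0, canvas.width, canvas.height);
```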

Next Steps:

Imagine a workshop for young musicians: they pick a song, learn a tiny bit of code, and then, with AI, transform their music into a vibrant, interactive visual. They learn about sound frequencies, color theory, and the emotional impact of motion—all through the lens of their own creativity.

This wasn’t just about learning to make an audio visualizer; it was about democratizing artistic expression. We proved that with a little guidance and the right tools, anyone, even artists who don’t code, can turn sound into a visual symphony. We focused on the “feel” and the “look” because AI took care of the “how.” And that’s where the magic truly happened.

Check out a few of our first experiments here, with Song 1, Song 2 and Song 3. We’ll be experimenting more with this new knowledge! And even if you aren’t a coder, you can make one too! Try it out!

This year’s winter programming and arts mentorships are made possible with support from the Manitoba Arts Council and the OpenAI Researcher Access Program. We’re very grateful for their support!