How I used AI to transform myself from a female dance artist to an all-male post-punk band – and what that means for other musicians
When you click on the Spotify profile of Intelligent Band Machine, you will see an image of three young men staring moodily back at the camera. Their profile confirms that they are a “British band”, “influenced by the post-punk scene” and trying to capture the spirit of bands like The Cure “while carving out their own unique sound”. When you listen to their music, you might be reminded of Joy Division’s Ian Curtis.
If you dig a little deeper and read about them on their record label’s page, you will find that Cameron is the lead singer and that his musical tastes were shaped by the concerts he attended at Nottingham’s Rock City nightclub. Tyler, the drummer, was indeed inspired by The Cure, as well as U2 and The Smiths, while guitarist Antonio blends his Italian mother’s love of classic Italian folk songs with his British father’s passion for The Beatles and The Rolling Stones.
What these profiles don’t say is that Intelligent Band Machine is not real, at least not in the human sense. And I should know, because I created them.
I used a range of Generative Artificial Intelligence (GenAI) tools, as well as my skills as a professional songwriter and sound engineer, to make their debut album, Welcome to NTU, and I released it on my dedicated AI record label, XRMeta Records, in May 2025.
You might ask why a self-releasing singer-songwriter and music producer like me would create an artificial band. As well as being a musician, I’m an academic with a background in computer science, carrying out research into how GenAI can be used for music.
I had reservations about these tools and how they might affect me as a musician: I had heard about AI controversies like the “fake” Drake track, and about artists like Grimes embracing GenAI in 2023. But I was also intrigued by the possibilities.
Over 100 million people have tried Suno, an AI music-generation platform that can create songs with vocals and instrumentation from simple text prompts. More than 100 million tracks have been created using the Mubert API, which allows generated music to be streamed on platforms like YouTube, TikTok, Twitch and Instagram. And according to Deezer, 28% of the music delivered to its platform each day is fully AI-generated.
It was time for me to investigate what these tools could do. This is the story of how I experimented with GenAI and was transformed from a dance artist into a post-punk soft rock band.
In my early days of songwriting, one of the first pieces of equipment I bought was a Panasonic RQ-2745, a small, slim, portable cassette recorder that allowed me to record rough drafts of vocals onto tape.
When cheap products like the Sony CFS-W30 boombox began to incorporate double cassette decks, I could overdub songs at home, adding choruses or instruments like flute or guitar. If I wanted a quality recording, I still had to book a recording studio. I became an expert at splicing tape, whether to remove vocal parts from a recording or to fix tape jams.
Cutting and taping became cutting and pasting as I experimented with the very early free digital music sequencers included on a disk from the cover of a PC magazine. I felt liberated when sequencers like Cubase, Pro Tools and Logic allowed high-quality recordings to be produced at home. This, along with the significant reduction in the cost of studio equipment, led to the emergence of the bedroom producer and the proliferation of the 808 sound. This deep, booming bass, produced by the Roland TR-808 drum machine, can be heard in hits like It’s Tricky by Run-DMC, Emergency Room by Rihanna and Drunk in Love by Beyoncé.
Digital distribution and social media then paved the way for self-releasing independent artists like me to communicate directly with fans, sell music, and bypass record labels.
Yet during all of these changes, musicians still needed the skills and knowledge to create their songs. Like many musicians, I honed mine over several years, learning to play the guitar, flute and piano, and developing sound engineering skills. Even when AI-powered tools began to be incorporated into digital audio workstations, a musician’s skill and knowledge were still needed to use them effectively.
Being able to create music from text prompts changed this.
Not since the introduction of music streaming services in the late 1990s has there been such a dramatic shift in music composition and listening technologies. Now anyone can create studio-quality music in minutes, without the extensive training I had and without buying instruments or studio equipment.
I typically produce RnB/neo-soul, nu-jazz and dance music, although I can write songs in many genres. For the experiment, I wanted to try a genre I do not usually produce.
I tested about 60 different GenAI tools and platforms. These included standalone tools that focus on one task, like MIDI (musical instrument digital interface) generation…
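Many of these tools work with symbolic note data rather than finished audio. As a rough illustration of what “MIDI generation” produces, here is a minimal sketch using the open-source Python library mido; the four-note bassline is my own invention for the example, not the output of any of the tools I tested.

```python
# A minimal sketch of writing a MIDI file programmatically with mido
# (pip install mido). MIDI stores notes and timings rather than audio,
# which is the kind of symbolic data MIDI-generation tools output.
from mido import Message, MidiFile, MidiTrack

mid = MidiFile(ticks_per_beat=480)      # resolution: 480 ticks per quarter note
track = MidiTrack()
mid.tracks.append(track)

# A simple four-note bassline, one note per beat (E2, E2, G2, A2).
for note in (40, 40, 43, 45):           # MIDI note numbers
    track.append(Message('note_on', note=note, velocity=80, time=0))
    track.append(Message('note_off', note=note, velocity=0, time=480))

mid.save('bassline.mid')                # opens in any digital audio workstation
```

Because a file like this contains only the score, the notes can be edited, re-voiced or rendered with any instrument in a digital audio workstation, which is what makes MIDI-focused generation tools useful to a producer.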
