Let’s talk about A.I., baby!
By Sandra Ngunjiri
In 2026, checking the credits on a newly discovered song, to make sure it was written, composed, performed, and promoted by an *actual* human being, has become a routine part of my listening and streaming experience. I think it is safe to say that fellow music purists, recording artists, and album-streaming enthusiasts around the globe aren’t exactly beaming at the emergence of AI-generated music either. Many of us have yet to confidently articulate what it truly means for us, personally and collectively, going forward. For this think-piece to be as informative as possible, we must first trace the origins of machine-prompted assistance within the world of music as a whole.
In 1957, a music score titled “The Illiac Suite” became the first ever composed by a computer. Received with a hearty mix of skepticism and curiosity by music consumers of the time, the string quartet piece by Lejaren Hiller and Leonard Isaacson was a novel and provocative offering indeed. Described by critics and fans alike as “beautiful” but equally “unsettling”, the piece premiered to an audience that hailed it as a trend-setting innovation, the 1950s being a time of rapid technological advancement. The broader verdict, however, was that it was a cold, mathematical, and mechanical take on an art form that thrives on its audible human touch and on the personal and communal ingenuity of live studio sessions.
In the 1990s, the highly respected and renowned English singer-songwriter David Bowie, alongside Ty Roberts, developed the “Verbasizer”: a piece of software that shuffled song lyrics around to create new sonic pathways for Bowie. Lyrics were entered into columns on the software and randomly reorganized, generating phrases unlike anything anyone had ever heard before. Bowie enlisted this technology for his 1995 album “Outside”, his 20th studio release, which was regarded as an “experimental success.” The following year, at the 1996 BRIT Awards, Bowie received the award for Outstanding Contribution to Music.
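For the curious, the column-shuffling idea behind the Verbasizer can be approximated in a few lines of code. This is a minimal sketch of the general cut-up technique described above, not a reconstruction of Bowie and Roberts’ actual software: it treats each source lyric as a column of words and draws one word per position from a randomly chosen line.

```python
import random

def verbasize(lines, seed=None):
    """Cut-up style recombination, loosely inspired by the Verbasizer.

    Each input line is split into words; new lines are built by picking,
    for every word position, the word at that position from a randomly
    chosen source line.
    """
    rng = random.Random(seed)  # seedable for repeatable shuffles
    words = [line.split() for line in lines if line.split()]
    length = max(len(w) for w in words)  # longest source line sets the width
    out = []
    for _ in range(len(words)):
        new_line = []
        for pos in range(length):
            # only source lines long enough to supply this position
            src = rng.choice([w for w in words if len(w) > pos])
            new_line.append(src[pos])
        out.append(" ".join(new_line))
    return out

lyrics = [
    "the city screams in neon light",
    "silence falls on empty streets",
    "we dance beneath a paper moon",
]
for line in verbasize(lyrics, seed=7):
    print(line)
```

Every run with a different seed yields a different recombination, which is the point: the machine proposes, and the human curates.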
Fast-forward to the modern day, and generative music models like Suno and Udio have collectively been used by over 105 million people since their respective launches. Based on these figures, a projected 2.4 million users will be logging in daily by August 2026 to create new music or remix existing songs. Suno currently holds a 67% market share of the AI-generated music space, with Udio a close second at a meaty 28%. Paying subscribers to these services have increased significantly in the last year, and web traffic continues to grow, whatever the negative TikTok reviews and podcast discussions on the topic might lead us to believe.
The most notable and debated mainstream use of AI in music came during the Drake and Kendrick Lamar feud, when Drake used AI-generated verses mimicking the voices of Tupac Shakur and Snoop Dogg to taunt his Los Angeles-native opponent on the rebuttal track “Taylor Made Freestyle”. Facing legal action from Shakur’s estate, Drake was pushed to scrub the song from streaming platforms, but a glaring legal gap within the industry had been exposed: can an artist’s vocal likeness be protected by copyright?
R&B artist Kehlani has been vehemently against AI artists like Xania Monet, who recently signed a “multi-million-dollar record deal”, rightfully stating that this “erases songwriters, producers and our humanity.” Other recording artists have taken the “permission over prohibition” route, allowing AI versions of their art to exist so long as they, too, can profit from it and pre-approve any use of their voice and its likeness on musical projects.
The protests, questions, and ethical debates around vocal cloning, potential human erasure, rightful ownership, and legality persist. If AI can make music, who owns the data that makes these kinds of sonic offerings possible? Are we willing to exchange human creativity for digital convenience, or will we meet somewhere in the middle? Do algorithms now get to decide what art is, or is the human mind where the buck stops? We will have to wait, listen, and see.
At Peech Consulting, we are dedicated to empowering creators in an evolving industry. If you are a songwriter or author looking to strengthen the protections around your lyrical intellectual property, contact us at hey@peechconsulting.com. Let’s secure your legacy together.
Take A Bite Out Of The Machine.
This playlist traces the line between human creativity and technology. It shows how one 6-second break built entire genres, long before algorithms entered the studio. Now, as AI reshapes music creation, this playlist asks a clear question.
What happens when the machine stops sampling us and starts replacing us?
References:
Suno Revenue and Usage Statistics (2026)
Suno in 2026: Usage, Revenue, Valuation & Growth Statistics
David Bowie and the “Verbasizer”
From Infringement to Innovation: How UMG’s Udio Settlement Reframes Fair Use and AI in Music
Estate of Tupac Shakur threatens legal action against Drake over AI diss track
Kehlani Speaks Out on Rise of AI Artists: “It’s Erasing Songwriters, Producers, and Our Humanity”
