Artificial intelligence is rightly viewed as a potential threat to music. Models are being trained (illegally, in most cases) on copyrighted music created by flesh-and-blood humans, without permission or compensation. AI compositions are being passed off as the work of real human artists on streaming platforms, siphoning streams and income away from legitimate musicians.
Yet beyond all the legal grievances record labels have against companies producing AI music, the labels themselves are very interested in its cost-saving, career-extending and money-making potential.
Last year, Universal Music released a new Spanish version of Rockin’ Around the Christmas Tree by Brenda Lee, a 1958 hit recorded when she was just 13. Lee is 80 now, so it wasn’t as if they could get her back into the studio. Instead, they used AI to create a perfect replica of her singing the song in Spanish. Lee, who was informed of the project ahead of time, was “blown away” by the results. It’s eerily accurate.
This is just one example of how AI is being used as a tool in the music industry. Here’s another with even more interesting applications.
Yumi Matsutoya is a 71-year-old Japanese artist who worked with a startup called Dreamtonics on her latest album. The company’s voice synthesis software, its Synthesizer V tool, enabled her to duet with her younger self by merging her current voice with that of Yumi Arai, the birth name she recorded under between 1972 and 1976. She went on to become a star through the ’80s and ’90s, recording more than 600 songs.
The song is entitled Call Me Back and is being used to celebrate her 50th anniversary in the music business. The video is kinda Blade Runner.
Masataka Matsutoya, who produced the album, says: “It’s like recording sound beyond time. Anyone who listens to it can hear the old Yumi, yet the results are strikingly new.”
He then gets to the real point of the exercise: “If it was used normally, people would say that Yumi gave up singing and tried to make do with AI for the easy way. It’s like Yumi finally lost her voice and relied on AI. This ultimate technology has no meaning or significance unless it is used from the beginning of the work creation.”
It’s an interesting take on de-aging. As we grow older, our voices change: they get deeper and rougher, and lose some of their range. Anyone who has seen Paul McCartney, Elton John or other heritage artists play live over the last 20 years or so will know that the songs might not sound quite right. That’s because they’re often performed in a lower key, making it easier for older singers to hit all the notes.
Ozzy Osbourne is among the artists who have relied on other methods of augmenting their vocal performances, such as backing tracks and, occasionally, someone behind the curtain hitting the notes he can’t. And then there are groups whose live shows are little more than pantomiming to recordings.
Real-time voice correction has also been around for a while. I’ve stood at many a soundboard watching a computer fix bum notes in milliseconds: the singer hears what’s coming out of their mouth through in-ear monitors, while the audience hears the corrected vocals.
Dreamtonics won’t say how its software works, but it has the potential to open all kinds of doors for AI assistance beyond letting a singer duet with a younger version of themselves. What about an older artist recording an entire album with their younger voice?
Even better, could the technology be adapted for de-aging voices in real time? Maybe. If so, this tech could extend the live performance careers of artists whose voices have grown croaky with age, abuse and illness. Many have had to retire from the road and forgo revenues from playing live because while the rest of their bodies may be up to the task, their voices just can’t cut it anymore. Given the continuing demand for gigs and tours by legacy artists, there could be a lot of money at stake.
The movie business started using non-makeup de-aging techniques as far back as 2006 when Patrick Stewart was made into a younger Professor X. Tom Hanks got a more advanced treatment in 2024’s Here. William Shatner was made into a young Captain Kirk in a 2024 short entitled 765874-Unification. He’ll turn 94 next month.
But just like autotune and other voice-correction tricks, there will be pushback from certain segments of the audience. Some want an authentically human performance, warts and all, and reject such efforts as fakery. Others will embrace it, insisting that when they pay for a concert ticket, they want whatever comes from the stage to sound exactly like the original recordings.
We’re going to see more applications of AI in music, recordings and live performances. We’re only at the very beginning of what will be a major revolution in how music is composed and performed.
© 2025 Global News, a division of Corus Entertainment Inc.