Topline: Why it matters: Content that has been dubbed into the local language is more popular; it allows creative work to find a larger audience and content owners to increase the value of their holdings. For actors, there's potential to generate revenue from their voice work without even being present. But the unlock that AI tools provide still comes with unresolved legal issues, as well as questions about consent and compensation for the actors whose voices help train AI speech models.
State of the AI art: The best AI voice dubbing tools can capture the complete range of an actor's performance with about an hour of training data. That input can also be blended with other data to create a wholly synthetic voice, so that no single actor has their aural "persona" appropriated. Lip-synching tools, which are just now emerging, can affordably match the dubbed words to the actor's mouth movements.
The premier companies in this space, which are working directly with studios, guilds and news organizations, emphasize that they always have permission to use a voice or modify lip movements, and that they pay for the work involved.

Ethical concerns: Of course, not every company in this burgeoning space is upstanding, and the internet is rife with ways to produce unauthorized dubs of a person's voice. And even if all the participants in AI voice work are properly compensated, there remain questions about the actors who previously provided localized performances, as well.
