A new AI voice tool is already being used to create celebrity audio clips

Estimated read time: 2 min

A few days ago, voice AI startup ElevenLabs launched a beta version of its platform that gives users the power to create entirely new synthetic voices for text-to-speech audio or to clone someone's voice. It only took a few days for the internet to start using the latter for nefarious purposes. The company has revealed on Twitter that it is seeing a "growing number of voice cloning misuse cases" and that it is considering how to address the problem by "implementing additional safeguards."

Although ElevenLabs did not specify what it meant by "misuse cases," Motherboard found 4chan posts with clips containing generated voices that sound like celebrities reading or saying something questionable. One clip, for example, reportedly featured a voice that sounded like Emma Watson reading a portion of Mein Kampf. Users have also posted voice clips featuring homophobic, transphobic, violent, and racist sentiments. It's not entirely clear whether all of the clips used ElevenLabs' technology, but a post with a large collection of voice files on 4chan included a link to the startup's platform.

Perhaps this emergence of "deepfake" audio clips should come as no surprise, since we saw a similar phenomenon occur a few years ago. Advances in AI and machine learning led to a surge in deepfake videos, especially deepfake pornography, in which existing pornographic material is altered to use the faces of celebrities. And, yes, people have used Emma Watson's face for some of those videos.

ElevenLabs is currently collecting feedback on how to prevent users from abusing its technology. For now, its ideas include adding extra layers of account verification before enabling voice cloning, such as requiring users to enter payment information or an ID. It is also considering requiring users to verify copyright ownership of the voice they want to clone, for instance by submitting a sample reading a prompted text. Finally, the company may drop its Voice Lab tool altogether and instead require users to submit voice cloning requests that it verifies manually.

All products recommended by Engadget are selected by our editorial team, independent of our parent company. Some of our stories include affiliate links. If you purchase something through one of these links, we may earn an affiliate commission. All prices correct at time of publication.
