AI Models That Can Recognize & Produce Speech for More Than 1000 Languages

Category Machine Learning

tldr #

Meta has built AI models that can recognize and produce speech for more than 1,000 languages—a tenfold increase on what’s currently available. It’s a significant step toward preserving languages that are at risk of disappearing, the company says. The models have been released to the public; compared with models from rival companies, Meta’s were found to have half the error rate while covering 11 times more languages. However, the use of religious texts to train AI models could prove controversial because of the bias it may introduce.


content #

Meta has built AI models that can recognize and produce speech for more than 1,000 languages—a tenfold increase on what’s currently available. It’s a significant step toward preserving languages that are at risk of disappearing, the company says.

Meta is releasing its models to the public via the code hosting service GitHub. It claims that making them open source will help developers working in different languages to build new speech applications—like messaging services that understand everyone, or virtual-reality systems that can be used in any language.
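If the released checkpoints work as described (Meta has also mirrored the MMS models on the Hugging Face Hub), transcribing speech in a new language could look roughly like the sketch below. The model ID, the set_target_lang and load_adapter calls, and the language code are assumptions drawn from the published Transformers integration rather than from this article, so treat them as a starting point and check the official documentation.

```python
# Hedged sketch: running an MMS-style speech recognition checkpoint.
# The checkpoint name "facebook/mms-1b-all" and the set_target_lang /
# load_adapter calls are assumptions based on the Hugging Face Transformers
# MMS integration; verify against the current docs before relying on them.
import numpy as np
import torch
from transformers import AutoProcessor, Wav2Vec2ForCTC

model_id = "facebook/mms-1b-all"              # multilingual ASR checkpoint (assumed ID)
processor = AutoProcessor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Switch the tokenizer vocabulary and adapter weights to a target language,
# e.g. French ("fra" in ISO 639-3, the code scheme MMS is said to use).
processor.tokenizer.set_target_lang("fra")
model.load_adapter("fra")

# Placeholder input: one second of silence; in practice, load real audio
# resampled to 16 kHz mono.
audio = np.zeros(16_000, dtype=np.float32)
inputs = processor(audio, sampling_rate=16_000, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits           # (batch, frames, vocab)

ids = torch.argmax(logits, dim=-1)[0]
print(processor.decode(ids))                  # decoded transcript (empty for silence)
```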


There are around 7,000 languages in the world, but existing speech recognition models cover only about 100 of them comprehensively. This is because these kinds of models tend to require huge amounts of labeled training data, which is available for only a small number of languages, including English, Spanish, and Chinese.

Meta researchers got around this problem by retraining an existing AI model the company developed in 2020, which can learn speech patterns from audio without requiring large amounts of labeled data, such as transcripts. They trained it on two new data sets: one containing audio recordings of the New Testament and its corresponding text, taken from the internet in 1,107 languages, and another containing unlabeled New Testament audio recordings in 3,809 languages. The team processed the speech audio and the text data to improve their quality, then ran an algorithm designed to align the audio recordings with the accompanying text. They repeated this process with a second algorithm trained on the newly aligned data. With this method, the researchers were able to teach the algorithm to learn new languages more easily, even without accompanying text.
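To make the training recipe concrete, here is a minimal, self-contained sketch of the underlying pattern: a speech encoder pretrained on unlabeled audio (wav2vec 2.0 in Meta's case) is reused, and only a thin character-prediction head is trained with CTC loss on the small amount of aligned audio and text produced by the alignment step. Every name, layer size, and the toy batch below is an illustrative stand-in, not Meta's actual code.

```python
# Illustrative sketch (not Meta's code): reuse a label-free pretrained speech
# encoder and train only a small CTC head on a little transcribed audio.
import torch
import torch.nn as nn

class PretrainedSpeechEncoder(nn.Module):
    """Stand-in for a self-supervised encoder such as wav2vec 2.0."""
    def __init__(self, hidden=512):
        super().__init__()
        # A real encoder is a convolutional feature extractor plus a
        # transformer; one strided conv keeps the interface (raw waveform in,
        # frame-level features out) while keeping the sketch short.
        # 400-sample windows with a 320-sample hop mimic 25 ms / 20 ms frames.
        self.conv = nn.Conv1d(1, hidden, kernel_size=400, stride=320)

    def forward(self, wav):                       # wav: (batch, samples)
        feats = self.conv(wav.unsqueeze(1))       # (batch, hidden, frames)
        return feats.transpose(1, 2)              # (batch, frames, hidden)

vocab_size = 32                                   # target-language characters + CTC blank
encoder = PretrainedSpeechEncoder()               # pretrained weights would be loaded here
ctc_head = nn.Linear(512, vocab_size)             # the only part trained from scratch
ctc_loss = nn.CTCLoss(blank=0, zero_infinity=True)
optimizer = torch.optim.AdamW(ctc_head.parameters(), lr=1e-4)

# Toy batch: two one-second utterances at 16 kHz with fake 12-character
# transcripts standing in for the aligned Bible audio/text pairs.
wav = torch.randn(2, 16_000)
targets = torch.randint(1, vocab_size, (2, 12))   # character IDs; 0 is the blank
target_lengths = torch.full((2,), 12)

log_probs = ctc_head(encoder(wav)).log_softmax(-1).transpose(0, 1)  # (frames, batch, vocab)
input_lengths = torch.full((2,), log_probs.size(0))

loss = ctc_loss(log_probs, targets, input_lengths, target_lengths)
loss.backward()
optimizer.step()
```

The design point is that the expensive part, learning general speech representations, happens once without labels; each new language then needs only enough paired data to fit the small head, which is what makes low-resource coverage feasible.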


"We can use what that model learned to then quickly build speech systems with very, very little data," says Michael Auli, a research scientist at Meta who worked on the project."For English, we have lots and lots of good data sets, and we have that for a few more languages, but we just don’t have that for languages that are spoken by, say, 1,000 people." .

The researchers say their models can produce speech in over 1,000 languages and recognize more than 4,000.


They compared the models with those from rival companies, including OpenAI’s Whisper, and claim theirs had half the error rate despite covering 11 times more languages. However, the team warns the model is still at risk of mistranscribing certain words or phrases, which could result in inaccurate or potentially offensive labels. They also acknowledge that their speech recognition models yielded more biased words than other models, albeit only 0.7% more.
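Error-rate comparisons of this kind are typically reported as word (or character) error rate: the number of substitutions, insertions, and deletions needed to turn the system’s output into the reference transcript, divided by the length of the reference. The small function below is a self-contained illustration using the standard edit-distance definition rather than anything from the paper, included only to show what a "half the error rate" claim measures.

```python
# Word error rate (WER): (substitutions + deletions + insertions) divided by
# the number of words in the reference, computed via edit distance over words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance between the two word sequences.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

print(wer("the cat sat on the mat", "the cat sit on mat"))  # 2 errors / 6 words ≈ 0.33
```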


While the scope of the research is impressive, the use of religious texts to train AI models can be controversial, says Chris Emezue, a researcher at Masakhane, an organization working on natural-language processing for African languages, who was not involved in the project.

"The Bible has a lot of bias and misrepresentations," he says.

