Meta's New AI-Powered Translator Can Interpret A Spoken-Only Language In Real Time

Meta Platforms Inc. META has built the first AI-powered speech-to-speech translation system for an unwritten language.

What Happened: On Wednesday, Meta announced the first AI-powered speech-to-speech translation system for Hokkien, a primarily oral language widely spoken within the Chinese diaspora. The language still lacks a standard written form.

Meta stated in a blog post that about 3,500 languages are primarily spoken and lack a widely used writing system. Because conventional speech translation systems depend on written transcriptions, the standard approach doesn't work for primarily oral languages.

See Also: How To Buy Meta (Formerly Facebook) Stock

For Hokkien, Meta developed various methods, including "using speech-to-unit translation to translate input speech to a sequence of acoustic sounds, and generated waveforms from them or rely on text from a related language, in this case Mandarin," according to the blog post.
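The speech-to-unit idea described in the quote can be illustrated with a toy sketch: audio is mapped to a sequence of discrete "acoustic units," and a waveform is then generated back from those units. All names and the nearest-neighbor "codebook" below are hypothetical stand-ins; Meta's actual system uses learned neural encoders and a unit-based vocoder.

```python
# Toy illustration of a speech-to-unit pipeline (hypothetical stand-ins,
# not Meta's implementation): quantize audio to discrete units, then
# reconstruct a waveform from those units.
import math

def encode_to_units(samples, codebook):
    """Map each audio sample to the index of the nearest 'acoustic unit'
    in a small codebook -- a stand-in for a learned speech-to-unit encoder."""
    return [min(range(len(codebook)), key=lambda i: abs(codebook[i] - s))
            for s in samples]

def units_to_waveform(units, codebook):
    """Generate a waveform from unit indices -- a stand-in for a
    unit-based vocoder."""
    return [codebook[u] for u in units]

# A tiny "codebook" of acoustic units and a synthetic input signal.
codebook = [-1.0, -0.5, 0.0, 0.5, 1.0]
speech = [math.sin(2 * math.pi * t / 8) for t in range(16)]

units = encode_to_units(speech, codebook)      # discrete unit sequence
waveform = units_to_waveform(units, codebook)  # synthesized output
print(units)
```

The key property this sketch shows is that the intermediate representation is a sequence of discrete symbols rather than text, which is what lets the approach sidestep the need for a written form of the language.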

Why It's Important: The translation system is part of Meta’s efforts to develop AI to help eliminate language barriers in the metaverse. The company hopes this will eventually allow people to talk to almost anyone with the help of real-time speech translations, across many languages.

Meta isn't the first company to leverage AI to curb language barriers and cultural divides.

Speechmatics, a speech recognition startup, landed $62 million in a Series B funding round in July. The startup is on an ambitious path to leapfrog Alphabet Inc.'s GOOG GOOGL Google and Apple Inc. AAPL. Speechmatics recorded an overall accuracy of 82.8% for African-American voices, compared with Google (68.6%) and Amazon.com Inc. AMZN (68.6%), reported ZDNet.

Read Next: Mark Zuckerberg Aims At Apple's iMessage 'Bubble' As He Talks Up WhatsApp's End-To-End Encryption

Posted In: Artificial Intelligence, Consumer Tech, Translation, News, Tech, Media