Meta’s Zuckerberg unveils AI projects aimed at building metaverse future

Meta announced on February 23 plans to build an artificial intelligence (AI)-powered translation system that works for everyone in the world, including an ambitious universal speech translator, as part of its efforts to build the metaverse.

Meta co-founder and CEO Mark Zuckerberg expects the metaverse, which is essentially a more immersive version of the internet as we know it today, to be the successor to the mobile internet.

“The kinds of experiences that you will have in the metaverse are beyond what is possible today. Instead of just looking at something on a screen, you’re going to actually feel like you are present with another person. That will require advances across a whole range of areas, from new hardware devices to software for building and exploring worlds. The key to unlocking a lot of these advances is AI,” Zuckerberg said at a company event on February 23.

He said that the technology that enables more people to access the internet in their language is going to be especially important when they “begin teleporting across virtual worlds and experiencing things with people from different backgrounds”.

Zuckerberg said the goal with the universal speech translator is instantaneous speech-to-speech translation across all languages, including those that are mostly spoken and don’t have a standard writing system. “The ability to communicate with anyone in any language — that’s a superpower people have dreamed of forever – and AI is going to deliver that within our lifetimes,” he said.

The company is also building a new AI model called ‘No Language Left Behind’ that can learn new languages with less training data than existing models; Meta aims to use it to enable expert-quality translations in hundreds of languages, ranging from Asturian (a language spoken in northwestern Spain) to Luganda (a language spoken in Uganda) to Urdu.

“Five years ago, we could translate across a dozen languages. Three years ago, we were up to 30 languages and this year, we are now aiming for hundreds of languages,” Zuckerberg said at the event.
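Meta did not share implementation details at the event, but the No Language Left Behind programme later produced openly released translation checkpoints. As a rough illustration of what many-to-many translation across high- and low-resource languages looks like in code, the sketch below uses the Hugging Face transformers library with the open NLLB-200 distilled checkpoint; the specific model name, library and FLORES-200 language codes are assumptions drawn from those public releases, not details given in this announcement.

```python
# Minimal sketch (assumption: the openly released
# "facebook/nllb-200-distilled-600M" checkpoint and the Hugging Face
# `transformers` library, neither of which is described in this article).
from transformers import pipeline

# NLLB identifies languages with FLORES-200 codes, e.g. eng_Latn (English),
# lug_Latn (Luganda), ast_Latn (Asturian), urd_Arab (Urdu).
translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",
    tgt_lang="lug_Latn",  # English -> Luganda
)

text = "The metaverse will let people share experiences across languages."
result = translator(text, max_length=200)
print(result[0]["translation_text"])
```

Swapping tgt_lang to ast_Latn or urd_Arab targets Asturian or Urdu from the same checkpoint, which is what translation “in hundreds of languages” means in practice: one shared model covering many language pairs rather than a separate system per pair.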

During the event, Zuckerberg said they are also working on AI research to create a new generation of smart assistants that will help people navigate virtual worlds as well as the physical world with augmented reality. “Since these worlds will be dynamic, and always changing, AI is going to need to be able to understand context, and learn in the way that humans do,” he said.

“When we have glasses on our faces, that will be the first time that an AI system will be able to really see the world from our perspective – see what we see, hear what we hear, and more,” Zuckerberg added. He expects this work will pave the way to build AI assistants that can move between virtual and physical worlds.

Meta also announced a new initiative called Project CAIRaoke, an end-to-end neural model for building on-device assistants that enables people to have more natural and contextual conversations with voice assistants.

“With models created with Project CAIRaoke, people will be able to talk naturally with their conversational assistants, so they can refer back to something from earlier in a conversation, change topics altogether, or mention things that rely on understanding complex, nuanced context. They will also be able to interact with them in new ways, such as by using gestures,” the company said in a blog post.
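Project CAIRaoke itself is not publicly available, so the following is only a schematic sketch, in Python, of the idea described above: rather than interpreting each utterance in isolation, the assistant conditions on the full running dialogue history, which is what lets a user refer back to earlier turns or change topics. The Dialogue class and the respond function are hypothetical placeholders for illustration, not Meta’s model or API.

```python
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class Dialogue:
    """Running multi-turn context for a single end-to-end assistant model."""
    turns: list[tuple[str, str]] = field(default_factory=list)  # (speaker, utterance)

    def add(self, speaker: str, text: str) -> None:
        self.turns.append((speaker, text))

    def context(self) -> str:
        # The whole history, not just the latest utterance, becomes model input,
        # which is what allows references to earlier turns to be resolved.
        return "\n".join(f"{speaker}: {text}" for speaker, text in self.turns)


def respond(dialogue: Dialogue) -> str:
    """Hypothetical stand-in for an end-to-end neural assistant.

    A real model would generate a reply conditioned on dialogue.context();
    here we only show how much context it would be given.
    """
    return f"(reply conditioned on {len(dialogue.turns)} prior turns)"


dialogue = Dialogue()
dialogue.add("user", "Add a picnic table to the beach scene.")
dialogue.add("assistant", respond(dialogue))
dialogue.add("user", "Actually, move it closer to the water.")  # "it" refers back
dialogue.add("assistant", respond(dialogue))
print(dialogue.context())
```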

Meta said that a Project CAIRaoke-based assistant is being used in its video-calling device Portal, and that it aims to integrate the technology into future augmented and virtual reality devices to enable immersive, multimodal interactions with assistants.

Apart from this, Zuckerberg also showcased a new AI concept called ‘Builder Bot’ that enables people to create virtual worlds by describing them with a few voice commands.

As an example, he played a pre-recorded demo clip in which Zuckerberg’s VR avatar creates a beach scene, adding 3D elements such as clouds, islands, trees, a picnic blanket, and a table with a stereo playing tropical music, accompanied by the sound of waves and seagulls.

“As we advance this technology further, you’re gonna be able to create nuanced worlds to explore and share experiences with others with just your voice,” he said at the event. MoneyControl
