Meta’s new learning algorithm could teach AI to multitask


Data2vec is part of a big trend in artificial intelligence toward models that can learn to understand the world in more than one way. “It’s a smart idea,” says Ani Kembhavi, who studies vision and language at the Allen Institute for Artificial Intelligence in Seattle. “This is a promising advance when it comes to generalized systems for learning.”

An important caveat is that although the same learning algorithm can be used for different skills, it can only learn one skill at a time. Once it has learned to recognize images, it must start from scratch to learn to recognize speech. Giving an AI multiple skills at once is difficult, but it’s something the Meta AI team wants to look at next.
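The core recipe behind data2vec, as Meta describes it, is a student network that predicts a teacher network’s latent representations of masked input, where the teacher’s weights are an exponential moving average of the student’s; because the target is a learned representation rather than pixels, words, or audio samples, the same loop applies to any modality. Below is a minimal sketch of that idea using a toy linear encoder and NumPy. It is an illustration of the general technique, not Meta’s actual implementation: the function names, the encoder, and all hyperparameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    # Toy "encoder": a single linear layer standing in for a Transformer.
    return x @ w

def data2vec_step(x, w_student, w_teacher, lr=0.01, tau=0.999, mask_frac=0.5):
    """One self-supervised step: the student predicts the teacher's
    latent representations at masked positions (toy version)."""
    n = x.shape[0]
    mask = rng.random(n) < mask_frac       # positions the student must reconstruct
    target = encode(x, w_teacher)          # teacher sees the full, unmasked input
    x_masked = x.copy()
    x_masked[mask] = 0.0                   # student sees the masked input
    pred = encode(x_masked, w_student)
    err = pred[mask] - target[mask]
    loss = (err ** 2).mean()
    # Hand-computed MSE gradient w.r.t. the student's weights (toy encoder only).
    grad = 2 * x_masked[mask].T @ err / max(err.size, 1)
    w_student = w_student - lr * grad
    # Teacher weights track the student as an exponential moving average.
    w_teacher = tau * w_teacher + (1 - tau) * w_student
    return w_student, w_teacher, loss

# The same loop works on any modality once the input is a float array:
x = rng.normal(size=(64, 16))              # stand-in for image patches or audio frames
w_s = rng.normal(size=(16, 8)) * 0.1
w_t = w_s.copy()
for _ in range(50):
    w_s, w_t, loss = data2vec_step(x, w_s, w_t)
```

Note that the caveat in the article shows up naturally here: the loop is modality-agnostic, but each training run still produces weights for one input type at a time.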

The researchers were surprised to find that their approach actually outperformed existing techniques at recognizing images and speech, and was competitive with leading language models at understanding text.

Mark Zuckerberg is already imagining potential metaverse applications. “All of these will eventually be built into AR glasses with an AI assistant,” he wrote on Facebook today. “It could help you cook dinner, noticing if you’re missing an ingredient and prompting you to turn down the heat, or handle more complex tasks.”

The main takeaway for Auli is that researchers need to get out of their silos. “Hey, you don’t need to focus on one thing,” he says. “If you have a good idea, it can actually help in general.”


