
Meta Unveils New AI Models and Tools to Drive Innovation

Meta, the owner of Facebook, announced on Friday that it was releasing a batch of new AI (Artificial Intelligence) models from its research division, including a “Self-Taught Evaluator” that could reduce the need for human involvement in the AI development process. Meta’s Fundamental AI Research (FAIR) team introduced a series of new AI models and tools aimed at achieving advanced machine intelligence (AMI). Notable releases include Meta Segment Anything Model (SAM) 2.1, an updated model designed for improved image segmentation, and Meta Spirit LM, a multimodal language model that blends text and speech for natural-sounding interactions. Meta claims that Meta Spirit LM is its first open-source multimodal language model that freely mixes text and speech.
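
To give a sense of how a prompted segmentation model like SAM 2.1 is typically used, the sketch below follows the general pattern of the open-source SAM 2 repository's image-prediction example. The package layout, checkpoint identifier, and prompt values are assumptions for illustration and may differ for the 2.1 release.

```python
# Minimal sketch of prompted image segmentation with a SAM 2-style predictor.
# Module names and the checkpoint ID ("facebook/sam2-hiera-large") follow the
# public SAM 2 repository and may differ for the SAM 2.1 release.
import numpy as np
import torch
from PIL import Image
from sam2.sam2_image_predictor import SAM2ImagePredictor

predictor = SAM2ImagePredictor.from_pretrained("facebook/sam2-hiera-large")

image = np.array(Image.open("photo.jpg").convert("RGB"))

with torch.inference_mode():
    predictor.set_image(image)                 # embed the image once
    masks, scores, _ = predictor.predict(      # prompt with a single point
        point_coords=np.array([[500, 375]]),   # (x, y) pixel coordinate
        point_labels=np.array([1]),            # 1 = foreground point
    )

print(masks.shape, scores)                     # candidate masks and their scores
```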



New AI Models and Tools from Meta FAIR

Other innovations include Layer Skip, a method that accelerates generation times for large language models (LLMs) on new data, and SALSA, a tool for testing the security of post-quantum cryptography standards. Meta also released Meta Open Materials 2024, a dataset for AI-driven materials discovery, along with Meta Lingua, a streamlined platform for efficient AI model training.

Meta Open Materials 2024 provides open-source models and data based on 100 million training examples, giving the materials discovery and AI research community an open alternative to proprietary resources.

The Self-Taught Evaluator is a new method for generating synthetic preference data to train reward models without relying on human annotations. Reportedly, Meta’s researchers used entirely AI-generated data to train the evaluator model, eliminating the need for human input at that stage.
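
To make the idea concrete, here is a toy, illustrative sketch of how a “self-taught evaluator” style pipeline might label preference data with no human annotators: candidate responses are generated, a model acting as judge picks the preferred one, and the resulting synthetic preference pairs become reward-model training data. The function names and the stubbed judging logic below are placeholders for illustration, not Meta’s actual implementation.

```python
# Illustrative (hypothetical) self-taught-evaluator loop: build synthetic
# preference pairs without human labels. Stub functions stand in for real models.
import random

def generate_responses(prompt, n=2):
    # Stand-in for sampling n candidate answers from a policy LLM.
    return [f"{prompt} -> draft answer {i}" for i in range(n)]

def judge(prompt, answer_a, answer_b):
    # Stand-in for an LLM-as-judge that produces a verdict; a real judge would
    # reason about helpfulness and correctness, here we just pick at random.
    return "A" if random.random() < 0.5 else "B"

def build_preference_dataset(prompts):
    dataset = []
    for prompt in prompts:
        a, b = generate_responses(prompt)
        verdict = judge(prompt, a, b)
        chosen, rejected = (a, b) if verdict == "A" else (b, a)
        dataset.append({"prompt": prompt, "chosen": chosen, "rejected": rejected})
    return dataset

if __name__ == "__main__":
    prefs = build_preference_dataset(["Summarise the release notes", "Explain SAM 2.1"])
    # `prefs` would then be used to train a reward model or evaluator, iterating
    # as the evaluator improves; no human annotations are involved at this stage.
    print(prefs[0])
```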

“As Mark Zuckerberg noted in a recent open letter, open source AI ‘has more potential than any other modern technology to increase human productivity, creativity, and quality of life,’ all while accelerating economic growth and advancing groundbreaking medical and scientific research,” Meta said on October 18.

Launch of Meta Movie Gen

Earlier, on October 4, Meta introduced Movie Gen, a suite of AI models capable of generating 1080p videos and audio from simple text prompts. These models generate HD videos, personalised content, and precise edits, outperforming similar industry tools, according to Meta. Movie Gen also supports syncing audio to visuals. The tool is still in development, and Meta is collaborating with filmmakers to refine it; it could have future applications in social media and creative content.

“Our first wave of generative AI work started with the Make-A-Scene series of models that enabled the creation of image, audio, video, and 3D animation. With the advent of diffusion models, we had a second wave of work with Llama Image foundation models, which enabled higher quality generation of images and video, as well as image editing. Movie Gen is our third wave, combining all of these modalities and enabling further fine-grained control for the people who use the models in a way that’s never before been possible,” Meta said.

Movie Gen has four key capabilities: video generation, personalised video generation, precise video editing, and audio generation. Meta says that these models are trained on a combination of licensed and publicly available datasets.

Meta says it continues to improve these models, which are designed to enhance creativity in ways people might never have imagined. For instance, users could animate a “day in the life” video for Reels or create a personalised animated birthday greeting for a friend to send via WhatsApp, all using simple text prompts.

Collaboration with Filmmakers for Movie Gen

Then, on October 17, Meta announced that it is collaborating with Blumhouse and other filmmakers as part of a pilot program to test the tool before its public release. According to the company, early feedback suggests that Movie Gen could help creatives quickly explore visual and audio ideas, though it is not intended to replace hands-on filmmaking. Meta plans to use feedback from this program to refine the tool ahead of its full launch.

“While we’re not planning to incorporate Movie Gen models into any public products until next year, Meta feels it’s important to have an open and early dialogue with the creative community about how it can be the most useful tool for creativity and ensure its responsible use,” says Connor Hayes, VP of GenAI at Meta.

“These are going to be powerful tools for directors, and it’s important to engage the creative industry in their development to make sure they’re best suited for the job,” added Jason Blum, founder and CEO of Blumhouse.

Meta is extending the Movie Gen pilot into 2025 to continue developing the models and user interfaces. In addition to collaborating with partners in the entertainment industry, Meta plans to work with digital-first content creators, the company said.
