Meta, the tech giant formerly known as Facebook, has introduced an open-source AI-powered tool named “AudioCraft” that can generate music and audio from text prompts. According to Meta’s release, AudioCraft consists of three models:
- MusicGen: Generates music from text prompts; it was trained on licensed music data.
- AudioGen: Generates audio such as environmental sounds and sound effects from text prompts; it was trained on publicly available sound-effects data.
- EnCodec: A neural audio codec whose improved decoder reduces artifacts in generated audio, leading to higher-quality results.
In short, MusicGen and AudioGen handle text-to-music and text-to-audio generation respectively, while EnCodec underpins both by delivering higher-quality audio output.
“The AudioCraft family of models is capable of producing high-quality audio with long-term consistency, and they’re easy to use. With AudioCraft, we simplify the overall design of generative models for audio compared to prior work in the field — giving people the full recipe to play with the existing models that Meta has been developing over the past several years while also empowering them to push the limits and develop their own models,” says Meta in its official release.
Meta has positioned AudioCraft as a tool for artists and individuals who want to create better sounds and music, or to develop new algorithms that improve the quality of music generation.
Earlier this year, Google introduced its own AI model, “MusicLM”, which can turn text prompts into music. It is an experimental tool offered through AI Test Kitchen, Google’s testing platform. However, these AI tools require technical skill to use to their full potential. They are not intended for everyday use or as a replacement for artists, as they are still in their experimental stages.