Google Launches Music Generation Model to Make Songs 30 Seconds at a Time

Because actually making art is for suckers.
Need an AI-generated soundtrack to go with the AI-generated video you’re planning to send to your AI-generated friends? Google has you covered. The company announced today that Lyria 3, its music generation model, will be available in its Gemini app. The feature is still in beta, but it will roll out over the coming days to all users of the Gemini app, so long as they’re 18 or older. Out of the gate, users will be able to generate songs in English, German, Spanish, French, Hindi, Japanese, Korean, and Portuguese, with more languages planned for the future.

For now, free subscribers will have their music “creations” capped at 30 seconds, so basically the length of an iTunes preview. Google AI Plus, Pro, and Ultra subscribers will get higher limits, though the company didn’t specify what that means. But you can assume you’re probably not going to be able to do your version of “Free Bird.”

Lyria 3 is the latest generative AI model to make it out of the DeepMind lab, and the first version of Lyria to get a wider public release. Previous models were available to musicians via Google’s Music AI Sandbox, a suite of tools that Google launched to figure out how AI could be used in the music creation process. Lyria was also available to some YouTube creators through a feature designed to turn speech into song.

This latest version allegedly improves on several areas where previous models struggled, per Google. First, it now generates its own lyrics, so you can offload the difficult task of figuring out what rhymes with “orange.” It also gives users more control over elements of the song, like style, tempo, and vocals. Finally, Google claims Lyria 3 can create “more realistic and musically complex tracks.”

Users who decide to make Lyria 3 the only instrument they know how to play can either offer a text-based prompt or upload images and videos and ask the model to create a track based on those visual prompts. The app will spit out a 30-second track, complete with album art that’s also AI-generated, courtesy of Google’s Nano Banana model. “The goal of these tracks isn’t to create a musical masterpiece, but rather to give you a fun, unique way to express yourself,” the company said. (Seems like a nice way of saying some of these songs are gonna be bad.)

Per Google, all outputs from Lyria will be embedded with SynthID, the company’s watermark for identifying AI-generated content. While that won’t be immediately visible the way watermarks on images or video are, users will be able to upload an audio clip to Gemini and have the app determine whether a SynthID watermark is present. So even if you don’t use Gemini to spit out some songs, at least you can use it as a slop detector.