AI Daily
Creative AI • Monday, March 16, 2026

AI Is Democratising Creativity. Creatives Are Not Sure How to Feel About It.

By AI Daily Editorial • Monday, March 16, 2026

Google integrated its Lyria 3 music model into the Gemini app in February, letting users generate 30-second tracks from a text description or a photograph. Around the same time, it updated Flow — its AI filmmaking suite — and gave ten independent filmmakers access to Veo 3, Gemini, and its image generation tools to produce short films. The results, by most accounts, were technically impressive. The response from the creative community was predictably more complicated.

Lyria 3 represents a genuine step up in AI music generation. It can produce tracks with lyrics, handles complex compositional structures, and integrates with YouTube's Dream Track feature for creator use. Google DeepMind has published the model details separately, and the technical quality — particularly in texture and arrangement — is substantially better than what was possible eighteen months ago. For a user who wants background music for a video, or a melody to hum along to while thinking through a project, the barrier to generating something usable is now essentially zero.

The indie filmmaker story is where the tension becomes most vivid. TechCrunch's February piece followed several filmmakers who used Google's Flow suite to make films they said they couldn't have made otherwise — projects with visual ambitions that would have required production budgets they didn't have. That's a genuine and meaningful expansion of who gets to tell visual stories. At the same time, the filmmakers interviewed acknowledged something the headline captures bluntly: it's faster, it's cheaper, and it's lonelier. The collaborative process — working with a cinematographer, a composer, a sound designer — is part of what filmmaking has historically been. AI tools compress that into a solo workflow in ways that change not just the economics but the experience of making something.

The music industry's legal response to AI generation tools runs in the background of all this. Warner Music's settlements with Suno and Udio — covered in today's copyright story — represent one model: negotiate licences rather than fight training data claims to a conclusion. But the underlying question of whether AI music generators were trained on copyrighted material without permission, and whether that matters legally, is still unresolved. Lyria 3 was trained on data that Google has not fully disclosed, and the same is true of most generative media models.

What's emerging is a creative landscape with a structural split: established professionals who can use AI tools to increase throughput, and new entrants who can use them to access capabilities previously out of reach. The question of who benefits more — and who is displaced — depends heavily on which part of the market you're looking at. For the most commoditised creative work (stock music, generic imagery, basic video content), AI is already competitive with human labour. For work that depends on distinctive voice, relationships, or cultural context, the picture is less clear. What's certain is that the tools are improving faster than the frameworks for thinking about what they mean.
