AI Daily
Developer Tools • Monday, March 16, 2026

Spotify's Best Developers Haven't Written Code Since December. Now What?

By AI Daily Editorial • Monday, March 16, 2026

Spotify dropped a detail in February that has been bouncing around engineering circles ever since: the company's best developers haven't written a single line of code since December. They're directing AI agents, reviewing outputs, and shipping features — but the act of typing code has become optional. This isn't a fringe experiment. It's a deliberate operating mode at one of the world's largest and most technically sophisticated software companies, and it raises a question the industry is only beginning to grapple with: if expert developers stop writing code, what happens to the next generation of expert developers?

Anthropic's own research on how AI assistance affects coding skill formation gives a measured but pointed answer. When developers use AI to complete tasks quickly — particularly tasks they haven't yet mastered — they learn less than developers who work through the same problems manually. The productivity gain is real and immediate; the skill-formation loss is diffuse and delayed, showing up months or years later, when the AI gets something wrong and the developer lacks the foundation to catch it. This isn't unique to coding — it's a general pattern in how humans learn — but software development is where it's playing out most visibly right now, because AI coding tools are both extremely capable and extremely widely adopted.

The open source ecosystem is seeing this dynamic in a different form. TechCrunch's February investigation found that major open source projects are noticing a decline in the average quality of submitted contributions, with maintainers attributing it to AI tools lowering the barrier to submitting code without fully understanding it. The volume of pull requests is up; the signal-to-noise ratio is down. Maintainers who depend on volunteer labour now find themselves spending more time reviewing and rejecting AI-assisted submissions that almost work but don't quite. That's a hidden cost that rarely shows up in productivity statistics.

Microsoft's coverage of "vibe coding" — a term for building software by describing what you want in natural language rather than writing traditional code — adds another dimension. The tools are real and genuinely useful for building simple applications, and they're enabling people who would never have learned to code to create functional software. That's a genuine democratisation of a previously gatekept skill. The question is whether "vibe coding" and production engineering are on the same continuum or whether they're fundamentally different activities — and whether conflating them creates false expectations about what AI-assisted development can reliably deliver.

Employees at Anthropic report using Claude in roughly 60% of their work and estimate a 50% productivity boost — a striking number, though it comes from self-reporting at the company that makes the tool being measured. The honest picture of AI coding tools in 2026 is that they provide clear, measurable productivity gains for experienced developers working on well-scoped problems, while simultaneously creating new pressures on skill formation, code quality in collaborative environments, and the traditional apprenticeship model through which junior developers have always learned. Whether those pressures resolve into something better or worse than the status quo is not yet clear.
