A year into the vibe coding trend, developers are shipping more code but burning out faster. Here's what broke and how to fix your workflow.

Understanding the real costs of AI-assisted development lets you capture productivity gains without destroying team sustainability or future capability.
Signal analysis
Here at Lead AI Dot Dev, we tracked the emergence and evolution of 'vibe coding' - the practice of letting AI handle implementation while developers focus on intent and direction. When Andrej Karpathy introduced the concept in February 2025, it promised to unlock developer velocity. One year later, the data tells a more complicated story.
The productivity metrics are real. Developers using vibe coding techniques report 40-60% increases in feature velocity. Code commits per developer per week have climbed. Time-to-ship metrics look genuinely impressive on dashboards. But underneath those numbers, something else is happening: developers are exhausted.
The analysis from dev.to reveals the core problem isn't the tooling - it's that teams have mistaken velocity for sustainability. When AI eliminates the friction of implementation, what gets compressed isn't waste. It's rest. Developers now fill the freed-up time with more features, more refactoring, more 'while we're at it' work. The cognitive load didn't decrease; it redistributed.
What changed most dramatically is the texture of developer work. The thinking parts - architecture decisions, testing strategy, edge case handling - now run parallel to coding instead of sequential. This creates constant context switching at a cognitive level, even when developers feel they're working on one thing.
The vibe coding model works when applied selectively. Developers using it for 20-30% of their tasks see genuine benefit - faster iteration on well-defined problems, fewer typos, better initial implementations. But teams that adopted it wholesale - treating every task as a candidate for AI-first development - hit a wall around month eight to ten.
The sustainability crisis emerges from three compounding factors. First, AI-assisted code generation creates a validation burden. Someone still needs to review, test, and reason about every output. That reviewer is now simultaneously the person who set the vibe, meaning they carry all the context. Second, technical debt accrual accelerates. Faster shipping means more accumulated quick decisions. Code that's 'good enough' ships faster and spreads wider before anyone can refactor. Third, the skill atrophy is real. Junior developers spending 60%+ of their time directing AI rather than writing code miss the fundamentals of problem decomposition and debugging that come from working through implementation details.
Teams building on these unsustainable patterns are now facing a reckoning. Some are seeing increased incident rates. Others report that new developers struggle to understand existing codebases because the rationale for implementations got lost in the AI-translation layer. The most telling metric: teams that leaned hardest into vibe coding are seeing higher voluntary turnover rates, especially among mid-level and senior engineers who find the work less engaging.
This isn't a reason to abandon AI-assisted development. It's a reason to be intentional about where and how you use it. The builders finding success with vibe coding share a common pattern: they've created boundaries. They designate certain work as AI-first and other work as hand-coded, based on strategic value and learning requirements.
First, audit which tasks should remain human-driven. Anything touching core business logic, architecture decisions, or learning-critical code paths should have humans in the primary role, with AI as an assist. Save vibe coding for well-bounded, well-understood problems - boilerplate infrastructure, test generation, documentation, repetitive refactoring. This isn't conservative; it's sustainable.
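One way to make that audit concrete is a triage rule in your planning tooling. The sketch below is purely illustrative - the label names, categories, and the default-to-human policy are assumptions for the example, not anything prescribed by the article:

```python
# Hypothetical triage rule: decide whether a task is a candidate for
# AI-first work. Label names and categories are illustrative assumptions.

HUMAN_FIRST = {"core-business-logic", "architecture", "learning-critical"}
AI_FIRST = {"boilerplate", "test-generation", "documentation", "repetitive-refactor"}

def triage(task_labels: set[str]) -> str:
    """Return 'human-first' if any label touches strategic or
    learning-critical work; 'ai-first' only when every label is a
    well-bounded candidate. Default to human-first when unsure."""
    if task_labels & HUMAN_FIRST:
        return "human-first"
    if task_labels and task_labels <= AI_FIRST:
        return "ai-first"
    return "human-first"

print(triage({"boilerplate", "test-generation"}))  # ai-first
print(triage({"architecture", "documentation"}))   # human-first
```

The asymmetry is the point: a single strategic or learning-critical label vetoes AI-first treatment, which mirrors the "selective, not wholesale" boundary the successful teams draw.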
Second, rebuild your review processes. If your code review is still designed for catching syntax errors and obvious bugs, you'll miss the systemic problems that vibe-coded work creates. You need to review for reasoning - did someone think through the edge cases, or did the AI suggestion just look reasonable? That's a different kind of review, and it takes more senior attention. Allocate accordingly.
Third, protect developer time for non-AI work. This sounds obvious, but many teams aren't doing it. Block off a percentage of your sprint for deep work, for learning, for the kinds of problems that can't be vibed. That's where your senior engineers actually grow your organization's capabilities. That's where junior engineers build foundational skills. Without that protection, your 'vibe coding win' becomes your organizational capability loss. The dev.to analysis makes this clear - what looks like a productivity win at month three becomes a sustainability problem at month twelve.
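If it helps to see the sprint math written down, here is a minimal sketch. The 20% share and the 80-hour two-week sprint are hypothetical example numbers, not figures from the article:

```python
# Illustrative arithmetic only: reserving a fixed share of sprint capacity
# for non-AI deep work. The 20% default is a hypothetical example.

def reserve_deep_work(sprint_hours: float, protected_share: float = 0.20) -> dict:
    """Split sprint capacity into a protected deep-work block and the rest."""
    protected = sprint_hours * protected_share
    return {"deep_work": protected, "delivery": sprint_hours - protected}

budget = reserve_deep_work(80)  # one developer, two-week sprint (assumed)
print(budget)  # {'deep_work': 16.0, 'delivery': 64.0}
```

The mechanism matters more than the number: whatever share you pick, it has to be subtracted from delivery capacity up front, or the freed-up AI time simply refills with more features.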
Thank you for listening, Lead AI Dot Dev.
More updates in the same lane:
- Cognition AI has launched Devin 2.2, bringing AI capability and user interface enhancements aimed at developer workflows.
- GitHub Copilot can now resolve merge conflicts on pull requests, streamlining the development process.
- GitHub Copilot will begin using user interactions to improve its AI model, raising data privacy concerns.