Vibe Coding Already Won

Cross-posted on: Substack

Vibe coding already won. If you are still debating whether it’s a good idea, it’s time to reframe the question.

DevOps taught us to ask: what do I need to do in order to release our code to production multiple times a day? Now we need to ask: what do I need to do in order to ensure the production quality of AI-generated code?

The two questions are quite similar. The things that bogged down software release cycles and kept teams from releasing frequently are the same things that concern engineers and managers about AI-generated code. Quality assurance. Performance. Security. Maintainability.

The critical insight is that success in Vibe Coding looks a lot like success in coordinating releases across large teams of engineers. That is to say, the problems introduced by Vibe Coding are mostly the same problems DevOps was made to solve; it's just that the rate of change and the throughput are much higher. The quality of code generated by LLMs is now good enough that the bottleneck sits at the process and sensemaking level. This is exactly the level at which the principles and practices of DevOps brought massive improvements, allowing companies to ship better code, faster, with fewer failures. We can double down on those practices, and perhaps discover new ones, to get similar benefits for generative code pipelines.

If your organization has not made the transformation to a DevOps culture and mindset yet, you must start there. If you have that critical foundation in place, then you can start to leverage it towards the massive productivity boost that can come from appropriate use of agentic coding.

I’ve been reading an excellent book on this topic: Vibe Coding by Gene Kim and Steve Yegge. Gene Kim is also a co-author of The DevOps Handbook, The Phoenix Project, and many other influential works, which makes Vibe Coding the next installment in one of the most definitive and transformative series ever written on shipping good code to production. They also gave talks at the recent AI Engineer conference.

Things that really stand out from these talks with Gene Kim and Steve Yegge:

  1. Anecdotal data from OpenAI and Anthropic shows that Vibe Coding is critical to those teams’ success. OpenAI claims that developers who use AI can be 10 times more productive than those who don’t, even with otherwise equivalent expertise and experience.
  2. It’s not just fire-and-forget. Vibe Coding takes skill, practice, and strategy; you need to spend 2,000 hours learning to do it properly. It’s a new, distinct skill set that has to be developed.
  3. Skills like task decomposition and successive refinement, coordinating multiple agents in parallel, and navigating the high cognitive overhead of current tools (too much output) are important to success.
  4. It’s all about process: organization, prioritization, and appropriate delegation. One critical bottleneck is coordination cost. Agents can generate massive code diffs in parallel, and those diffs then have to be merged successfully.
  5. The Claude Code agentic model is compared to one giant ant doing all the work in a linear fashion, when what is really needed is a swarm of small ants. Why use the big, expensive model for every little tweak and question? We should be using smaller models with more parallelism.
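The swarm idea in points 4 and 5 can be sketched in a few lines: instead of feeding every subtask to one large model sequentially, fan small, well-decomposed tasks out to cheap workers in parallel and collect the results for merging. This is purely illustrative; `small_agent` is a hypothetical stand-in for a call to a small model, not a real API.

```python
from concurrent.futures import ThreadPoolExecutor

def small_agent(task: str) -> str:
    """Hypothetical stand-in for a call to a small, cheap model."""
    return f"patch for: {task}"

def run_swarm(tasks: list[str], workers: int = 8) -> list[str]:
    # Fan out: many small tasks to many small agents in parallel,
    # rather than one big agent working through them linearly.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(small_agent, tasks))

tasks = ["rename config flag", "add null check", "update docstring"]
patches = run_swarm(tasks)
# The patches still have to be merged and pass review gates;
# that coordination step is the real bottleneck.
```

The fan-out is the easy part; as point 4 notes, the cost lives in merging and coordinating what comes back.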

This reframes Vibe Coding as systems engineering, not “talking to a chatbot”.

One of the core issues in the adoption of these practices, just as it was in the adoption of DevOps, is culture. People have done things the same way for many years, and that is hard to change. If you have spent 15-plus years honing your skill as an engineer to write excellent software, then the idea of a software agent writing those lines for you is a hard pill to swallow. There is an element of control that must be given up.

A key realization, and one that requires a perspective shift, is that from the standpoint of the business it has never been specifically about the quality of the code. It has always been about the product. Does the code do what it needs to do? Does it perform well? Is it secure? These are all external metrics for measuring code; fundamentally, they don’t care where the code came from. Humans are quite fallible. To consistently ship high-quality code, we have had to build systems that let us trust the code in spite of our fallibility: systems that catch the issues arising from unconsidered edge cases and plug the security holes that were not identified during implementation and review. We employ tools like static analysis, container security scanning, continuous integration, and automated testing. We have organizational processes like code review, architectural review, and communication with stakeholders. All of these things have to be adjusted to the new paradigm. They need to be sped up, automated, or re-delegated to the most appropriate team member or AI tool. Things need to shift around.

All of this is change, and change makes people nervous. In order to successfully navigate change, you have to understand what your baseline is. You need a solid organizational understanding of the key inputs and outputs. Look at your organizational chart the way you would look at the architectural diagram of a software system. What are the flows of information? What are the boundaries? Where is there tight coupling that can be decoupled? Where can you create an interface that allows the underlying implementation to change while maintaining stability in the overall system? People are not software, but Conway’s Law makes the case that organizational structure is crucial to software outcomes, and that an organization’s agility and ability to change at a structural level will have a huge impact on its ability to adopt the new processes and practices necessary to scale and leverage the new generation of tooling.

Another key realization is that these things don’t have to happen all at once. It can be an iterative process. Set targets and work towards meeting them, then reanalyze and set new ones. What’s working well today may not work well tomorrow. Fast-changing times, fast-changing tools, fast-changing technology. This all requires the ability to continually iterate and adapt to the changing state.

The economics of vibe coding are becoming clear. To remain competitive, you’re going to need to understand how to do this well.