What Vinext teaches about AI-assisted development

Key Takeaways
  • Vinext (Cloudflare's Next.js rewrite) had almost all of its lines written by AI, but the "almost" is the most important part: it required constant human intervention
  • The process started with hours of architectural planning before a single line of code was generated
  • Over 1,700 unit tests and 380 E2E tests created a deterministic safety net around a probabilistic process
  • Automated code review by AI agents reduced the human bottleneck, but final review by a person remained essential to catch "confident" errors
  • The project cost approximately $1,100 in tokens across more than 800 sessions, amplifying the developer's capability by orders of magnitude

If you haven't read the Cloudflare article about Vinext yet, I highly recommend it. It's perhaps the best practical case we have so far of AI-assisted development at scale. A rewrite of Next.js done in a week, where almost all lines of code were written by AI.

But the "almost" is the most important part of that sentence. We still need to know how to code to take the wheel at certain moments. And note: this wasn't a simple CRUD app; it required constant intervention.

Let's break down what actually happened there.

Planning before everything

"The process started with a plan. I spent a few hours discussing with Claude on OpenCode to define the architecture: what to build, in what order, and what abstractions to use. This plan became our north star."

It wasn't just showing up and asking: "Rewrite Next. Make no mistakes."

There were hours of work defining the architecture, build order, and abstractions. He did what he does best: deciding what to build and how to structure it. The AI only came in after the direction was clear.

Determinism as a shield against probability

"Over 1,700 Vitest tests, 380 end-to-end (E2E) tests with Playwright, full TypeScript type checking via tsgo, and linting via oxlint. CI runs all of this on every pull request."

The only thing I can think of here: determinism, determinism, determinism.

LLMs are probabilistic tools. Given the same prompt, they can produce different results. The way to mitigate this is to surround the process with steps that always execute the same way. Each PR went through a battery of deterministic checks. This doesn't eliminate the probabilistic nature of the LLM, but it creates a safety net around it.
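To make the idea concrete, here is a minimal sketch of that safety net (hypothetical names, not Cloudflare's actual pipeline): a probabilistic generator wrapped in a deterministic gate, so a candidate only survives if the same checks pass every time.

```python
import random

def flaky_generator(prompt: str) -> str:
    # Stand-in for an LLM: the same prompt may yield different outputs.
    return random.choice([
        "def add(a, b):\n    return a + b",  # correct
        "def add(a, b):\n    return a - b",  # plausible but wrong
    ])

def deterministic_checks(code: str) -> bool:
    # Stand-in for tests, type checking, and linting:
    # the same code always gets the same verdict.
    namespace: dict = {}
    exec(code, namespace)
    return namespace["add"](2, 3) == 5

def generate_until_green(prompt: str, max_attempts: int = 20):
    # The gate: probabilistic output only ships after deterministic approval.
    for _ in range(max_attempts):
        candidate = flaky_generator(prompt)
        if deterministic_checks(candidate):
            return candidate
    return None  # escalate to a human instead of merging something unverified
```

The checks don't make the generator deterministic; they make its failures visible, which is exactly the role of the 1,700+ tests in the article.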

Insight

"The art of using LLMs in production isn't trusting them blindly, but surrounding them with deterministic processes that reveal when they're wrong."

The challenge that remains: how do we make the rest of the process more deterministic, including code review itself? Tools like SonarQube, semantic linters, and automated review agents are part of the answer.

Automated code review (but not blind)

"We also connected AI agents for code review. When a PR was opened, an agent reviewed it. When the review comments came back, another agent resolved them. The feedback loop was almost entirely automated."

Note that it wasn't the developer doing every code review himself; that would have made him a bottleneck. The feedback loop was almost entirely automated. But, and this is an important "but":

"It didn't work perfectly every time. There were PRs that were simply wrong. The AI would confidently implement something that looked right but didn't match the actual behavior of Next.js. I needed to correct course regularly."

There was no blind trust or merging without review. PRs first went through intensive automated review and automated correction before he accepted them. Another layer of determinism reducing the effects of probability.
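The loop the article describes can be sketched as a simple orchestration, with hypothetical stand-ins for the reviewer agent, the fixer agent, and the human gate:

```python
from typing import Callable, List

def review_loop(
    pr: str,
    reviewer_agent: Callable[[str], List[str]],    # PR -> review comments
    fixer_agent: Callable[[str, List[str]], str],  # PR + comments -> revised PR
    human_review: Callable[[str], bool],           # final, non-automated gate
    max_rounds: int = 3,
) -> bool:
    # Automated rounds: one agent reviews, another resolves the comments.
    for _ in range(max_rounds):
        comments = reviewer_agent(pr)
        if not comments:
            break
        pr = fixer_agent(pr, comments)
    # The person stays at the end of the loop to catch "confident" errors.
    return human_review(pr)
```

The design point is the last line: however many automated rounds run, nothing merges until the human gate returns true.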

The dev was in control

"Architecture decisions, prioritization, knowing when the AI was heading down a dead end: all of this depended on me. When you give AI good direction, context, and guidelines, it can be very productive. But you still need to pilot."

First, he gave good direction and prepared the context and guidelines exhaustively. Only then did the AI do the heavy lifting.

With more than 800 sessions and approximately $1,100 in tokens (roughly $1.40 per session), the project was completed. Even factoring in the developer's salary, that is very affordable for the result. This exceptional person's capability was amplified by orders of magnitude.

Amplified. Not replaced.

The practical lessons

What worked
  • Hours defining what to build before asking AI to build it
  • Tests, CI, linting, type checking as a safety net
  • Automated agents + final review by a person
  • Correcting course when AI was heading down a dead end

What to watch out for
  • "Confident" PRs were simply wrong in some cases
  • Without clear architecture, AI produces code that seems to work but doesn't
  • AI is the most powerful tool we've ever had, but it still needs a pilot

Vinext is not proof that "AI will replace devs". It's proof that an exceptional professional, with the right tools and the right processes, can do the work of an entire team. The difference lies in the professional, not the tool.

Insight

"AI is the new multiplier. But zero times any number is still zero."
