The Visual Refinement Gap: Why AI-Generated Code Needs a New Kind of Tool

AI gets you 80% of the way there. The last 20% — visual polish — is where you actually live. Here's why existing tools don't solve it, and what visual refactoring means.

By Kristian

Something interesting happened in 2025. AI code generation tools got good enough that millions of developers started using them as their primary way to build web applications. Claude Code, Cursor, v0, Bolt, Lovable — the toolchain for going from idea to running app collapsed from weeks to minutes.

And then a new problem appeared. One that didn't exist before.

The 80/20 wall

If you've used any AI coding tool for more than a toy project, you've hit this wall. The AI generates something that is:

  • Structurally sound
  • Functionally correct
  • Visually... almost right

The layout works but the spacing feels off. The components are in the right order but the visual weight is unbalanced. The colors are technically from your theme but the overall impression is "template." Stack Overflow's 2025 Developer Survey found that 66% of developers cite "AI solutions that are almost right, but not quite" as their top frustration with AI coding tools.

This is the visual refinement gap: the distance between what AI generates and what you'd be proud to ship.

Why the gap exists

AI models are excellent at structure. They understand component hierarchies, data flow, routing, and API patterns. They're trained on millions of codebases and they can reproduce complex patterns reliably.

What they struggle with is taste. The visual decisions that make software feel considered:

  • Spacing relationships — Is this gap-4 or gap-6? The answer depends on the visual rhythm of the surrounding elements, which the AI can't see.
  • Typography weight — Should this heading be font-semibold or font-bold? It depends on the contrast with the body text and the overall density of the page.
  • Alignment subtleties — Is this element optically centered, or just mathematically centered? (They're not the same.)
  • Color temperature — The shade of gray for muted text depends on the background color, the primary accent, and the overall mood of the interface.

These decisions are fast for a human with visual feedback. You look at the screen, you know something's off, you adjust it. The bottleneck isn't knowing what to change — it's the mechanics of making the change.

The current workflow is absurd

Here's how most developers currently refine AI-generated layouts:

  1. Generate code with an AI tool
  2. Look at the result in a browser
  3. Notice that the padding on a card is too tight
  4. Open the source file in VS Code
  5. Find the right component (sometimes hundreds of lines of JSX)
  6. Locate the Tailwind classes responsible for padding
  7. Mentally translate p-4 → 16px, decide you want 24px → p-6
  8. Save the file
  9. Wait for HMR to refresh
  10. Check the result
  11. It's still not right. Repeat from step 6.

For a single property. On a single element. Across an entire page, this process takes hours.

Some developers use browser DevTools as a visual editor — inspect the element, tweak the CSS live, see the result instantly. But DevTools changes are ephemeral. You have to manually copy the CSS values back to your source files. And Tailwind classes don't map cleanly to DevTools CSS editing.

Why existing tools don't solve this

Design tools (Figma, Sketch) create mockups, not code. They're upstream of the problem — you still have to translate the design into code and then refine the code to match. The handoff is inherently lossy.

Visual builders (Webflow, Framer) generate code from scratch. They're not designed to edit existing codebases. If you have a Next.js project that an AI generated — with its own routing, state management, and component structure — you can't "open it in Webflow."

AI-powered builders (Bolt, Lovable) are generation tools, not refinement tools. They're great at the first pass but offer limited visual control for iterative adjustment of existing code.

Code editors (Cursor, VS Code) are text-based. They can help you write Tailwind classes faster, but you still have to reason about visual properties as text.

Browser DevTools give you visual editing but no write-back to source. The changes vanish on refresh.

There's a gap in the toolchain: no tool lets you visually refine your existing source code.

What visual refactoring means

Visual refactoring is a new category of tool. The core idea:

  1. You already have code (AI-generated or hand-written)
  2. You open it in a visual editor that renders the running app
  3. You click elements, drag them, adjust styles through direct manipulation
  4. The tool writes clean, minimal changes back to your source files
  5. Your code structure, logic, and formatting are preserved

It's not code generation. It's not design. It's the layer between them — where the code that already exists gets visually refined through direct manipulation.

The opportunity

This isn't a niche problem. Anyone who uses AI to generate web applications — and that's a rapidly growing population — hits the visual refinement wall. The tool that solves this cleanly will become as essential to the AI coding workflow as the AI itself.

We built Decise to be that tool. It's a native macOS app that turns your running Next.js project into a visual canvas. But this post isn't a product pitch — it's about the problem space.

The visual refinement gap is real. The tools that exist today don't solve it. And the opportunity to build something that does is wide open.