Visual Taste Is a Moat AI Doesn't Cross
Generative tools have collapsed the cost of producing plausible design. They have not collapsed the cost of producing good design. The difference is taste, and it remains defensible.
A pattern is emerging in product work. Teams use AI to generate an app. The output is recognizable at a glance as AI-generated. Not because it's wrong — it's often technically fine — but because it has a particular aesthetic flatness. The spacing is mid. The type hierarchy is mid. The color choices are mid. The layout feels like a stock illustration. Everything works; nothing sings.
Meanwhile, a smaller number of teams use the same tools and produce output that looks considered. The same AI, the same stack, and their app feels like someone cared about it. The difference is not the tool. It's the human operating the tool.
That difference is taste. And in a world where anyone can generate plausible design in seconds, taste is the thing that still differentiates shipped work. It's the one scarce input AI hasn't neutralized and is unlikely to neutralize any time soon.
What taste actually is
"Taste" gets tossed around imprecisely. Sometimes it means aesthetic preference. Sometimes it means status. Sometimes it means "I can't explain why this is better."
The useful definition: taste is the ability to detect small deviations from the intended effect.
A designer with taste looks at a page and notices that the spacing between the title and the subtitle is a few pixels too tight. Not because they measured it. Because the balance is off and they can see it. They don't have to know the rule they're applying — they are directly perceiving that something is not quite right.
This is a skill. It's trainable. It's not mystical, and it's not innate. But it requires years of looking at a lot of design and noticing when it works and when it doesn't. There are no shortcuts, and AI has not found one.
Why AI flattens
Generative models are trained to reproduce a distribution, and left undirected they produce outputs near the center of that distribution. If the training data is "competent but unremarkable web design," the outputs are competent but unremarkable web design. If the training data is "award-winning visual craft," the outputs are plausibly-award-winning-looking but not actually awarded.
There's no training distribution for "design that feels right in this specific context." Context matters, and context is particular to the project, the audience, the brand, the moment. AI cannot produce context-sensitive taste because it does not have a specific context — it has a blurred average of all contexts.
The practical consequence: AI-generated design is always a draft. Sometimes a good draft. Never a finished piece. The finishing is where taste shows up, and the finishing is where competitive differentiation now lives.
The specific failures
AI-generated visual work has characteristic tells. A partial list:
Over-rounded corners everywhere. The AI has learned that modern UIs have rounded corners, so it rounds everything uniformly. The result is a lack of hierarchy — when every surface has the same radius, no surface feels like the primary one.
Gradients that don't serve. Gradients are aesthetically current, so the AI includes them. But gradients should direct attention and build depth. AI gradients are usually just... there. They don't relate to what the user is looking at.
Flat typographic hierarchy. Everything is either 16px or 24px. The gradation that makes a page feel organized — the subtle size shifts that differentiate page title from section title from body from caption — is compressed or absent.
Generic icon choices. When in doubt, the AI picks the most obvious icon. A document icon for "documents." A gear for "settings." This is fine until you notice that every app produces the same icon set, and the resulting interface feels like stock photography.
Inconsistent spacing. The AI knows about spacing as a concept. It doesn't necessarily apply a single scale. So a page has 16px between some elements and 18px between others and 12px between others, and nothing sits on a rhythm.
Centered layouts by default. Centered layouts are safe, so the AI defaults to them. A tasteful designer would center some things and not others, based on the specific content and the visual hierarchy they want. The AI centers most things because centering has a lower failure rate on average.
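The spacing and hierarchy failures above share a fix: commit to a single scale and derive every value from it. Here is a minimal sketch in TypeScript — the step values and the "major third" ratio are illustrative choices, not taken from any particular design system:

```typescript
// One spacing scale: every gap on the page must be one of these steps.
// 4px base; values are illustrative, not prescriptive.
const space = [0, 4, 8, 12, 16, 24, 32, 48, 64] as const;

// A modular type scale: each level multiplies the base by a ratio,
// so page title, section title, body, and caption stay distinguishable.
const typeRatio = 1.25; // a "major third" ratio; an assumption, not a rule
const baseSize = 16;
const fontSize = (level: number): number =>
  Math.round(baseSize * Math.pow(typeRatio, level));

const caption = fontSize(-1); // 13
const body = fontSize(0);     // 16
const section = fontSize(2);  // 25
const title = fontSize(4);    // 39

// The point of the exercise: a stray 18px gap can't sneak in,
// because 18 appears on neither scale.
console.log({ spacing: space.slice(1, 5), caption, body, section, title });
```

The values themselves matter less than the discipline: when every gap and every font size is drawn from a named scale, the "16px here, 18px there" drift that marks AI-generated layouts becomes structurally impossible.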
Where this leaves human designers
The role of the designer, in the AI era, is not to produce plausible design. Anyone can do that. The role is to finish the plausible design — to take the draft output and push it past "competent" into "considered."
This is often seen as a diminishment. It is, in fact, a consolidation of value into the part of design that is hardest to automate. The grunt work of producing a first draft — spacing out a page, picking a set of components, choosing a color — is now cheap. The craft work of making the result actually good is, by contrast, a more concentrated application of skill than it was when a designer had to also produce the draft.
The analogy I keep returning to: photography. When cameras became cheap and automatic, most people assumed professional photography was doomed. In fact, the opposite happened. The value of professional photography shifted from "someone who can operate the equipment" to "someone who can see and compose well." The equipment became a commodity; the eye did not. Professional photographers today command higher real hourly rates than they did when the cost of the equipment was the entry barrier.
Design is going through the same transition. The equipment — the ability to produce a plausible draft — has become cheap. What's scarce is the eye, and the eye is newly valuable because its scarcity is more concentrated.
How to build taste
If taste is trainable, and if it matters more now than before, how do you actually build it?
Look at a lot of work, critically. Every day. Not with Instagram-style scrolling, but with attention. Open five well-designed apps and five poorly designed ones, and try to articulate what's different. The articulation is the training.
Copy deliberately. Not to claim the work, but to internalize the decisions. Reimplement a screen from a product you admire. You will discover choices you didn't consciously notice. Copying is how craftspeople have trained for centuries; it is not diminished by being an old technique.
Keep a reference library. When you see something that works, capture it. Label it. Know why it works. A reference library is a structured memory; structured memories become intuition over time.
Ship things and watch them age. Nothing trains taste like looking at your own work six months later. You will see what your past self did not. That gap is the growth.
Read designers, not just about design. Writing about design tends to be abstract. Designers talking about specific projects, specific decisions, specific tradeoffs — that's where the teachable craft lives.
The strategic point
For teams building products: taste is now a competitive advantage that compounds. A team with a strong designer will produce work that a team without one cannot match, even with identical tools. The gap is growing, not shrinking, because the rising AI floor lifts everyone's baseline, but the ceiling is only reachable by people who can see.
For individual designers and developers: the taste-based parts of your work are the parts that do not get cheaper. Investing in them is investing in durable skill. The parts that do get cheaper — generating a first draft of a layout — are worth less than they used to be, not because the skill is less valuable, but because AI has made it less scarce.
The safe career move is the one that has probably always been the right move: get better at noticing. The work that remains, after generative tools have taken everything they can, is the work of people who can tell that something is almost-right and not right, and who know what to do about it.
Taste is a moat. AI has not crossed it. Build on the side of the moat that taste defends.