
Reading Code You Didn't Write Is the New Default

AI tools have shifted the work. Most of a developer's day is now archaeology, not authorship. Here is how to get good at it.

By Kristian

For most of software's history, a developer's job was described in one word: write. You wrote code. You wrote tests. You wrote documentation. Reading code existed, sure, but it was a supporting activity — something you did in service of the writing.

That balance has quietly inverted. In a world where AI can produce a credible first draft of almost anything, the scarcer skill is no longer composition. It's comprehension. The developer who can read a thousand-line diff and know what's load-bearing, what's noise, and what's quietly broken — that developer is now more valuable than the one who can type fast.

We don't train for this. We don't talk about it. Most developers treat reading as a frustrating prelude to the "real work" of writing. That framing is obsolete.

What reading actually is

Reading code is not linear. You don't start at line one and proceed. You don't even read most of the code. A skilled reader is triaging: deciding which parts of the codebase matter for the question at hand, and ignoring the rest.

Good readers have a mental taxonomy:

  • Orientation material. File and folder names, export lists, README, tests. Answers: where am I?
  • Contract material. Type signatures, interface definitions, schema files. Answers: what does this thing promise?
  • Logic material. The actual function bodies and control flow. Answers: what does this thing do?
  • Incidental material. Formatting, convenience wrappers, one-off utilities. Answers: nothing useful. Skip.

Inexperienced readers treat all four categories the same — they read everything with equal care and burn hours. Experienced readers spend 80% of their time in the first two categories and only dive into logic when they've narrowed the question enough that the logic is the answer.

Reading code is mostly the art of not reading code.

Why AI-generated codebases reward readers

Before AI, a typical repository was authored by a small team with shared habits. You read a file and you could tell it was written by Priya, because Priya favors early returns and explicit variable names. That consistency made reading fast — the structure of a file telegraphed its intent.

AI-generated codebases don't have this property. They are stylistically averaged. Every file reads like every other file, in a way that is technically fine but cognitively flat. The visual cues that tell you "this file is complicated, slow down" or "this file is a thin wrapper, speed up" are all smoothed to the mean.

This makes reading harder in a new way. You can't rely on pattern-matching on stylistic tells. You have to rely on the actual content. The readers who thrive in AI-heavy codebases are the ones who were already skeptical of pattern-matching — who formed their mental model by asking "what does this do?" rather than "what does this look like?"

The orientation move

The single highest-leverage skill in reading unfamiliar code is the orientation move: the first ten minutes with a codebase you've never seen.

A good orientation move looks like this:

  1. Open the root README. Read it fully. Note the stated goals.
  2. Open package.json (or Cargo.toml, or pyproject.toml). Read the scripts and dependencies. The dependencies are the codebase's vocabulary.
  3. Open the entry point. For a Next.js app, that's app/layout.tsx and app/page.tsx. For a CLI, it's the bin/ entry. For a service, it's wherever the HTTP server is bound.
  4. Follow one golden path. Pick the most common user flow and trace it from entry to exit. Don't detour. Don't look at edge cases.
  5. Now, and only now, read a test. Tests tell you which behavior the authors consider worth preserving. They reveal intent better than comments do.

Ten minutes of this gives you a working map. You don't need to understand everything — you need to know where things are, what they're called, and which direction to look when a specific question appears.

Most developers skip this step. They dive straight into the file that contains the bug they're chasing, and they spend an hour bewildered because they don't know how the file relates to anything else. The orientation move is the cure.

Good reading tools

Reading well is also a tooling question. Some obvious wins, often under-used:

Go-to-definition and find-references. Not optional. If your editor doesn't do these fast and correctly, fix the editor before touching the code.

Quick file navigation. Cmd+P or equivalent. If you are reading a codebase, you will open fifty files. Typing paths is a waste of time.

Symbol search across the project. Different from text search. Symbol search knows that handleClick is a function and handleClick in a comment is not.
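The difference is easy to see with plain grep on a hypothetical file. Real symbol search comes from the language server (for example, "Go to Symbol in Workspace" in VS Code), not from a smarter regex — the second grep below just mimics its effect.

```shell
# Text search vs symbol search, mimicked with grep on an invented file.
set -e
cat > demo.ts <<'EOF'
// handleClick is wired to the submit button
export function handleClick() {}
EOF

# Plain text search: two hits, one of them only a comment.
grep -n 'handleClick' demo.ts

# Crude stand-in for symbol search: only the definition site matches.
# (Real symbol search asks the language server, not a regex.)
grep -n 'function handleClick' demo.ts
rm demo.ts
```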

Git blame with context. Not for finding who to blame — for finding the commit that introduced a line and reading its message. Commit messages are often the only real documentation a codebase has.
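One concrete form of this, assuming a reasonably recent git: `git log -L` walks the history of a specific line range, which surfaces the introducing commit directly instead of hopping through `git blame` output. The throwaway repo and commit message below are invented so the commands run end to end.

```shell
# Finding the commit that introduced a line range with `git log -L`.
# The demo repo and its commit message are invented for illustration.
set -e
repo=$(mktemp -d) && cd "$repo"
git init -q
printf 'export function parse(s) {\n  return JSON.parse(s);\n}\n' > util.js
git add util.js
git -c user.name=demo -c user.email=demo@example.com \
    commit -qm "add parse helper so callers stop inlining JSON.parse"

# History of lines 1-3 of util.js; the first line is the introducing commit.
git log -L 1,3:util.js --oneline | head -1
rm -rf "$repo"
```

The commit message printed here is exactly the documentation the paragraph above is talking about: it says why the line exists, not just what it does.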

AI code assistants used for reading, not writing. Paste a function, ask "what does this do and why might it exist?" The answer is usually right, and it saves you the cost of building the mental model from scratch. Just verify anything load-bearing before you trust it.

Signs of a reader

How do you tell if someone is a strong reader? A few tells:

  • They ask "where does this come from?" before they ask "how do we fix this?"
  • They read commit messages. All of them. Without being prompted.
  • They notice dead code. An experienced reader spots unused imports and orphaned functions the way a proofreader spots typos.
  • They can describe the shape of a codebase they've only spent an hour in. Not the details — the shape. "It's an API with a thin HTTP layer, a big domain module, and about fifteen adapters for external services."
  • They don't panic when handed a ten-thousand-line file. They know the file is mostly boring and the interesting parts are findable.

The work that's left

There's a common worry that AI code generation will erode developer skill. I think the opposite is true — the skills that remain scarce are the ones AI cannot readily do. Writing a CRUD endpoint from scratch is something any modern tool can do. Reading an unfamiliar codebase and deciding which of three plausible approaches actually fits the existing architecture — that's cognitive work that does not meaningfully automate.

The developers who get noticed in the next few years will not be the fastest typists. They will be the clearest readers. They will walk into a repository and, within a couple of days, understand it better than people who have been working in it for a year.

Reading used to be the supporting activity. It is now the main one. Train accordingly.