I've been interviewing engineers for fifteen years. In the last eighteen months, something shifted. Candidates are better prepared, their take-home projects are more polished, and their code samples are cleaner. On the surface, the talent pool looks stronger than ever. Below the surface, it's harder than ever to tell who actually understands what they built.
The new baseline
AI coding tools have raised the floor. A junior engineer with Copilot can produce code that looks like a senior engineer's output. That's genuinely good — it means more people can be productive faster. But it also means the signals I used to rely on in interviews have degraded. Clean code is no longer a reliable proxy for deep understanding.
What still matters
Taste. The ability to look at a working solution and say "this works, but it's not right" — and then explain why. That's always been the difference between a good engineer and a great one, and AI hasn't changed it. If anything, it's amplified it: the engineer who can shape AI output into something elegant is more valuable than ever.
What we test for now
We've added a "code review" stage to our interview. We give candidates a working pull request — written by AI, with deliberate issues — and ask them to review it. The issues range from subtle (a race condition that only matters under load) to philosophical (an abstraction that's technically correct but architecturally wrong). The best candidates catch both kinds.
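To make the "subtle" end of that spectrum concrete, here is a minimal sketch of the kind of bug I mean — not from our actual interview, just a hypothetical example: a read-modify-write on shared state that looks fine in review and passes light testing, but silently loses updates under concurrent load.

```python
import threading

class Counter:
    """Naive counter. Looks correct, and is correct single-threaded."""
    def __init__(self):
        self.value = 0

    def increment(self):
        # BUG: `self.value += 1` is a read, an add, then a write.
        # Another thread can interleave between the read and the write,
        # so increments get lost -- but only under real concurrency.
        self.value += 1

class SafeCounter:
    """Same API; the read-modify-write is guarded by a lock."""
    def __init__(self):
        self.value = 0
        self._lock = threading.Lock()

    def increment(self):
        with self._lock:
            self.value += 1

def hammer(counter, threads=8, iters=10_000):
    """Drive the counter from several threads, as a load test would."""
    def work():
        for _ in range(iters):
            counter.increment()
    workers = [threading.Thread(target=work) for _ in range(threads)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()
    return counter.value, threads * iters
```

A unit test that calls `increment()` in a loop from one thread passes for both classes; only something like `hammer()` exposes the difference. That gap between "tests pass" and "correct" is exactly what we want candidates to notice.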
The uncomfortable truth
Some of our best hires in the last year have been people who use AI tools less than their peers. Not because they're Luddites — because they're fast enough without them, and they've developed an instinct for when AI output is subtly wrong. That instinct is built on years of writing code by hand, and I don't know how to shortcut it.