Historically, software engineering interviews have been dominated by questions that candidates solve with pre-memorized data structures and algorithms. These "LeetCode interviews" are marketed as the gold standard for assessing a candidate's "ability to code." But that's hard to believe when the format is so far removed from the actual job of a software engineer.
Most engineers I worked with at Google came to the same conclusion: LeetCode tests are mostly a proxy for an IQ test. They assess a candidate's ability to study a subject and then pattern-match what they've learned during the interview. That is a necessary trait for a successful engineer (being smart and able to learn), but it isn't sufficient. I knew plenty of high-IQ people at Google who weren't particularly productive or innovative. They got in because they aced LeetCode problems, but they were never screened on their ability to do great work.
With the rise of AI coding, the job of a software engineer is now even further removed from LeetCoding. AI can solve these problems instantly. Even if you encountered a situation where you had to add something like an LRU cache to your app's backend, you would have AI do it. You wouldn't write it yourself. I suspect candidates will become increasingly frustrated with companies that require them to code these solutions by hand. This is especially true for talented senior developers who already spend their free time building better software with AI. It's like showing up to a secretary job in 1995 and being asked if you know how to service a typewriter. You should be spending your time learning how to use a computer.
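To make the point concrete: the LRU cache mentioned above is exactly the kind of well-known, self-contained component an AI can produce in seconds. A minimal sketch in Python (class and method names here are illustrative, not from any particular codebase):

```python
from collections import OrderedDict


class LRUCache:
    """Least-recently-used cache: evicts the oldest-accessed entry when full."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._store = OrderedDict()

    def get(self, key):
        if key not in self._store:
            return None
        # Mark as most recently used by moving it to the end.
        self._store.move_to_end(key)
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            # Evict the least recently used entry (front of the dict).
            self._store.popitem(last=False)
```

This is the sort of textbook exercise interviews have leaned on for years, and precisely the sort of thing you'd now delegate to a model rather than type out by hand.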
Can't we come up with an interview format that assesses both IQ and an engineer's ability to actually do the job?
I think we can.
The Take-Home Test
Take-home assessments have been getting bad press. They've been called "free work" or disrespectful of candidates' time. In the past, I would have agreed. Giving 50 candidates a six-hour project with the intention of hiring one developer is wasteful at best and unethical at worst. It also discourages great candidates who value their time from moving forward.
Some companies try to solve this by paying candidates. PostHog, for example, pays candidates $1,000 to work with them for a day. I tried this at my past startup. While I didn't pay as much, I got great feedback from candidates who were used to receiving nothing for their effort.
Another option is to shorten the take-home test. Give candidates something fun they can build in two hours. This respects their time and gives them a sense of the problems your company works on. Before AI, it was hard to design a task that was both short and high-signal. Now it's much easier. It's possible to build a relatively complex web application in two hours.
This also provides a useful signal: whether the candidate knows how to work with AI. The task should be impossible to complete in the time limit without AI, and very difficult if the candidate doesn't know how to use AI well.
The "Technical" Interview
I call it a "technical" interview because it should still include cultural questions that reveal how a candidate works and thinks.
Instead of LeetCode-style questions, the interview begins by having the candidate open Cursor and select a fast, less intelligent model (I borrowed this idea from Brendan Falk). An early GPT model or Composer 1 works well. We start in Cursor because I want to see how the candidate navigates a codebase and reads real code. We use a faster, less capable model because I want them to move quickly and verify their work.
Together, we build a small application or service. Along the way, I ask about their architectural decisions. I want to hear their thoughts before they turn to the model for answers. After about 30 minutes, we should have a small working app. By then I have a much better understanding of how quickly they can build, think, and reason.
In the final 10-15 minutes, I ask about their side projects, projects they've led at work, and how they make decisions. I'm trying to understand their level of agency, which I believe is the most important factor for success on my teams.
Interviewers and candidates are currently in an arms race. A quick search for "AI interview assistant" or "AI cheating detector" returns hundreds of tools. There are companies helping candidates cheat on interviews, and others claiming to detect that cheating. This is a road to nowhere.
Companies need to take a different approach. The days of virtual LeetCode interviews are over. Large companies are already moving back to in-person interviews to maintain control over the process. They want to keep asking LeetCode questions.
Maybe it's just me, but I'd rather work with someone who can build an entire backend in a day using Claude agents than someone who can solve Word Ladder by hand.