One of the first interview questions I learned to give, as a new grad engineer a couple months out of college, was asking candidates how to detect a cycle in a linked list. I was taught that the “right solution” was essentially implementing Floyd’s cycle detection algorithm.
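For readers who haven’t run into it, a minimal sketch of that “right solution”—the classic tortoise-and-hare technique—might look something like this (the `Node` class here is just illustrative scaffolding):

```python
class Node:
    """A bare-bones singly linked list node, for illustration only."""
    def __init__(self, value):
        self.value = value
        self.next = None

def has_cycle(head):
    """Floyd's cycle detection: advance a slow pointer one step and a
    fast pointer two steps per iteration. If the list has a cycle, the
    fast pointer eventually laps the slow one and they meet; if not,
    the fast pointer falls off the end of the list."""
    slow = fast = head
    while fast is not None and fast.next is not None:
        slow = slow.next
        fast = fast.next.next
        if slow is fast:
            return True
    return False
```

The trick runs in O(n) time with O(1) extra space, which is exactly the kind of non-obvious property that made it feel like a clever thing to ask about.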
For a while, I actually did think it was a clever interview question, one that tested intelligence as well as programming acumen.
Of course, nowadays, interview questions like that have fallen by the wayside, right next to the heap of interview questions that companies like Google employed a decade ago but have since cut from their hiring process. The rationale is simple and straightforward: the data showed they had little predictive ability for actual job performance. There’s also the added benefit of breaking the cycle of esoteric intelligence testing, which had always seemed to be more about sating the ego of the interviewer than actually evaluating the candidate.
Over the years, I’ve evolved my thoughts on how to evaluate engineers1, and had the opportunity to try out different types of interviews. I eventually realized that the distance in levels—how far apart interviewer and candidate operated within an engineering organization—was inversely correlated with the quality of the evaluation; that is, you don’t get much information having a new grad engineer interview a VP candidate, or a Senior Director interview an intern. It also makes sense for a hiring manager to interview candidates who will report to them.
Which means that as I took on more senior roles myself, I ended up, on average, interfacing with more senior hires. And much like how there’s a step-level change in skills when going from one major rung of the career ladder to the next, there’s also a different set of interview questions and expectations corresponding to the candidate’s seniority. It’s not just asking a principal engineer the typical algorithms technical question—calibrated for recent CS grads—and expecting them to answer it faster; the value of someone with that much experience is being able to build systems and thrive in situations completely unknown to an entry-level engineer.
In that vein, I’ve begun to look for what I call “textbook answers” out of candidates. Given the famous difficulty of interviewing and getting into the top tech companies, there’s an entire cottage industry built around getting people past the grueling interview process, the job-equivalent of SAT prep programs. The process has morphed from one that was supposed to predict on-the-job performance into an abstract test that can be studied for and aced—like coming across Floyd’s cycle detection algorithm in Cracking the Coding Interview.
The conversation becomes a lot more interesting if we’re able to go beyond the textbook answer. For one, it’s a lot closer to the reality of professional software development: plans go awry, theories don’t apply cleanly to messy situations, unseen and unintended consequences arise. It also demonstrates some amount of depth and intellectual curiosity, the ability to adapt to unfamiliar situations and develop mental models. It is indicative of lived experience, understanding the limits of common knowledge and where that has to be augmented with earned wisdom.
With this approach, the key is to push far enough beyond the well-trodden path, but to avoid novelty for its own sake or, worse, contrarian takes as lazy signifiers of intelligence. Ideally, the discussion of how a textbook answer falls short explores the nooks and crannies: explaining why the common approach is generally valid, while accounting for the circumstances or differences that merit tweaks. For instance, I often ask Scrummasters about the pluses and minuses of Agile—I’m not necessarily looking for the textbook canonical comparison to Waterfall-style development, but rather, the times when Scrum has failed them and what they did about it.
Admittedly, this technique doesn’t really work as well with technical coding exercises, particularly ones that hinge on specific algorithms or data structures. There’s a much more constrained universe of solutions for the vast majority of coding interview questions—particularly ones that can fit within the scope of an interview—and it’s not reasonable to expect candidates to invent novel computer science under the pressure of getting a new job. This squares with our industry’s standard practice of applying leetcode-style coding questions to evaluate technical ability, and the best correlation coming from candidates a couple years removed from college or coding academies; the entire point is to find engineers who can produce textbook answers.
And eventually, engineering managers.↩