Interviewing for Signals

I’ve been helping out with interviews for my employers since my first gig out of college, some 15 years ago. In that timeframe, the process to get a job as a software engineer hasn’t changed too drastically; there’s still a strong sense of testing for technical ability that’s many degrees removed from actual day-to-day engineering work.

Whereas the early 2000s were rife with pointless riddles—which did as much to stroke the interviewer’s ego as to actually evaluate talent1—nowadays the standard seems to be toy coding questions—a la leetcode—which mostly test the ability to study and regurgitate a bunch of formulaic algorithms. Which means, in some sense, interviews have sucked for a good long time.

To be fair, there have been incremental improvements in how interviews are conducted for engineering roles. At the very least, there have been formal studies on the efficacy of “hard” interview questions, and those results have discouraged people from continuing the practice. The dreaded and much-ridiculed whiteboard interview is at least being questioned, and alternatives like take-home projects and shared coding environments have advanced how coding ability is evaluated2.

There is, however, much less material on how to interview, either online or anecdotally, even within companies themselves. I suspect it’s a combination of the uniqueness of each organization’s interview processes and expectations, along with the subjective nature of candidate evaluation. There may be a “right” answer to a coding problem or a whiteboard systems design question, but the evaluation will inevitably be tilted—sometimes dramatically—by the individual interviewer and their internalized grading rubric.

So I guess this is my tactical rubric.

It actually makes some sense if derived from first principles; namely, that interviews are meant to be a microcosm of the work itself. It follows, then, that the best way to evaluate a candidate’s effectiveness is to simulate as much of the work environment as possible, with constraints on the length of the evaluation3 and thus on the problems that can be posed. That is, interviewers end up asking toy problems that take 30–40 minutes to answer and trying to determine whether the performance is representative of on-the-job behavior.

Given this setup, my aim in an interview is to gather as many signals as possible from the candidate. Signals, in this context, are necessarily reductive nuggets of behavior that correlate with traits for success in the role; they’re bits and pieces of detail that speak to underlying strengths. Since interviews are supposed to be distillations of the work itself, signals are the analog of a full-blown evaluation of the skills needed for that work4.

Obviously there’s a wide range of behavioral signals, but there are a good number of technical ones as well. Some of the ones I like:

  • Fluency with some development environment: IDEs/editors, debuggers, terminal, etc.
  • Ability to listen, clarify and dive into a particular detail if needed
  • Understanding of adjacent “layers” of the technology/product stack (e.g., front-end engs figuring out APIs, back-end engs learning about how their code compiles, designers knowing what’s possible with CSS layouts)
  • Riffing on new information, drawing parallels to one’s own domains of expertise to bootstrap into new areas
  • Willingness to step back and connect a line from a high-level requirement to the technical implementation (and questioning its necessity in the process)
  • Appreciation for business and technological developments, particularly outside of one’s own domain

This is not meant to be a checklist or any sort of scoring system, nor is it a discrete set of hiring-manager questions to ask a candidate. Rather, these are more nuanced signals, gleaned as part of the overall conversation—ideally unprompted by the interviewer—that boost an interview performance. Their presence doesn’t rescue an otherwise poor technical/behavioral interview, but given multiple candidates whose feedback is similarly strong, I’m inclined to go for the candidate who exhibits more of these sorts of behaviors.

A similar system can be applied to the core interview as well; in fact, experienced interviewers tend to apply a level of leniency and subjectivity that homes in on precisely these types of signals. I’ve read my fair share of interview feedback noting that while the candidate technically failed to code up a working solution, they got close enough to an answer, and interacted positively and showed enough of the traits above, that they were very close to passing and should be offered the job contingent on similar performance in the other interviews. Indeed, that squares well with the actual work: there are hardly ever any “right” answers to real-world problems, and minor weaknesses that can be overcome in a collaborative work environment (e.g., in code and design reviews) prove insignificant, especially when the candidate offers strengths that augment the team.


  1. This was a time before Google was known for an astronomically high hiring bar, so naturally these types of questions were attributed to Microsoft.

  2. We did pairing interviews fairly early on at Square; I wrote about it!

  3. Some companies have pioneered the idea of paying someone to work for 1–2 weeks on a meatier problem as the interview. Two issues: most employed candidates can’t take that much time off for an interview elsewhere, and the disparate nature of the problem sets makes it difficult to calibrate across candidates.

  4. To be fair, that evaluation usually happens on the job in the form of performance reviews, but it’s telling that even there it’s extremely difficult to evaluate accurately, even when the full set of work and outputs is readily accessible.
