An interesting case of bias
One of the subjects that seems to raise people’s hackles is how to conduct a technical interview. I recall being subjected to personal attacks because I happen to like asking senior candidates a design problem as part of the process.
I’m not going to argue the point here. But I have observed a very, very common theme to the arguments against solving puzzles, writing code, and designing ‘toy’ systems in an interview (let’s call all of these activities “juggling”): in almost every comment I’ve read objecting to asking a candidate to juggle, the poster admits that they dislike juggling, or are poor at it under the time and pressure constraints of an interview.
The argument against testing is cloaked in terms of “Oh, I’m a terrific software architect if you want a three tier, MVC business rules-driven application where I have days or weeks to suss out requirements and fully investigate the options. But my performance designing a distributed email server, or a game, or anything else in just a few hours does not reflect my performance under actual job conditions.”
The premise is that since the candidate believes they are good, and also believes they would do poorly on the test, the test must be invalid.
I’m not saying they’re wrong. I just find it interesting how few people claim the opposite: that they are incredibly skilled at ‘juggling’ in technical interviews, yet do not think the results are particularly significant.
Update: Do you dislike juggling but would like to work for the company anyway? Take control of your interview.