The Skeptical Methodologist

Software, Rants and Management

When Counting from 100 to 1, Interview Candidates will do Precisely as Well as You Think They Should

Can you write a program that prints the numbers from 100 down to 1? Apparently, some are claiming such a program can be as useful as Fizz Buzz in evaluating interview candidates. Some people can’t solve this incredibly simple problem…

Wait, bait-and-switch time. I only told you about the easy part of the problem, not the hard part. Now that you’ve already clicked on my article, I’ll go ahead and fill you in on the ‘tricky’ constraint that any solution must start with:

for(int i = 0; …

This isn’t a programming challenge; it’s now a brain teaser. Why? Because you’ve taken away the obvious answer and, for no particularly good reason, added an extra constraint. Brain teasers aren’t bad; they’re just tests of insight, not expertise. And insight is notoriously difficult to summon when you’re under pressure in an interview.
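For what it’s worth, the ‘insight’ the constraint is fishing for boils down to a single subtraction. Here is a minimal sketch in C (my wording of the problem, assuming the constraint only fixes the loop header):

#include <stdio.h>

int main(void)
{
    /* The index is forced to count up, so the only "insight" required
       is to print 100 - i instead of i itself. */
    for(int i = 0; i < 100; i++) {
        printf("%d\n", 100 - i);
    }
    return 0;
}

Once you see the 100 - i trick it’s trivial; the point is that spotting it under interview pressure is a different skill from writing the loop.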

The main issue I have with this line of thinking isn’t the reemergence of brain teasers; it’s the author’s implication that programmers need to knuckle down on the hard practice of programming and put their egos aside. It seems far more likely that the author needs to knuckle down on the hard practice of Industrial Psychology and put his or her ego (I couldn’t check gender since the page was failing to load under traffic) aside.

Despite the warnings that 22 is nowhere near a large enough sample to yield a significant result, the author goes ahead and draws conclusions anyway. If the rest of their book is written with this level of rigor and that appeals to you, I’d also recommend my own book, which I put together in a few weeks after learning the graph function in Excel.
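To put a rough number on that (my illustrative arithmetic, not anything from the author’s data): with only 22 interviews, the uncertainty on any estimated rate swamps the kind of effect being claimed. A back-of-the-envelope sketch in C, assuming a simple normal-approximation confidence interval for a proportion:

#include <math.h>
#include <stdio.h>

int main(void)
{
    /* Rough illustration only: assumed numbers, not the author's data.
       With n = 22 and a worst-case proportion of 0.5, a normal-approximation
       95% confidence interval is roughly +/- 21 percentage points wide. */
    const double n = 22.0;
    const double p = 0.5;  /* assumed split, chosen to show the worst case */
    const double half_width = 1.96 * sqrt(p * (1.0 - p) / n);

    printf("95%% CI half-width at n = 22: +/- %.0f percentage points\n",
           half_width * 100.0);
    return 0;
}

In other words, any ‘rate’ estimated from 22 candidates comes with roughly a twenty-point margin of error before we even get to the design problems below.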

But the ‘hard’ statistics aren’t even the worst part of drawing conclusions from this ‘study’; the ‘soft’ part is where the author utterly failed.

“One data point that I obtained for the book (but didn’t quite include in the book because it was too programmer centric) was based on 22 job interviews for programming positions I conducted for one of my clients over a period of two months.”

The author claims two questions were asked to test the hypothesis that what they very scientifically call ‘whining’ predicts what they claim to be programming ability. Did you spot the flaw?

Unless I’m reading the blog wrong (and I could be), the author him- or herself asked both questions, with the hypothesis in mind, most likely in the order implied: whining first, then programming ability. This removes what, you know, experts in statistics and survey design would call ‘blinding’. It means the author’s own implicit bias going into each interview could have skewed the result. To sum up, the author could very well have badgered, during the programming test, every candidate who whined for more than a few minutes, or could have stayed silent. With the study designed as it was, we wouldn’t know the difference.

What’s a much better conclusion from this statistically insignificant result? Candidates are going to do precisely as well as you think they ought to. Specifically, they’ll do exactly as well as you want them to on brain-teaser-type problems that require insight. This is why you need structured, repeatable tests that measure, insofar as possible, expertise rather than insight. Insight is important, but it is practically impossible to measure under the pressure of an interview, when the candidate is analyzing your every subconscious twitch to see whether they’re getting the job or not.

April 10, 2015