In his book Thinking, Fast and Slow, Daniel Kahneman returns several times to a formative experience he had in the Israeli army. Assigned to a unit responsible for matching fresh recruits with appropriate postings, he soon discovered that the interviews he and his colleagues conducted with these recruits were, approximately, useless. The interviews yielded confident impressions that over time turned out to be worthless for predicting the actual future fit and performance of recruits in their assigned units. So Kahneman was charged with changing the system. He retained the interviews, but developed an objective checklist of features and characteristics that were then processed through an algorithm. The algorithm performed well in predicting future success, certainly far better than the subjective judgments reached in interviews.
Algorithms and their cousins, checklists, are hot now. Through the work of Kahneman and others, we have increasingly come to recognize our own human limits. We turn out to be predisposed, probably biologically, to overvalue our own, frequently biased, judgments. In some cases, as in a critical surgery or on a crippled spaceship, life and death may literally hang in the balance of a human judgment. Checklists increase the possibility of better judgment calls, and these are calls we would all prefer to get right.
In my own professional world, we never make judgments of life and death. We do, though, make judgments that have significant impacts on people’s lives, as well as on the climate of institutions and our own professional advancement. We admit graduate students, hire faculty, and vote on their tenure and promotion. These are not life and death decisions, but they are not unimportant either.
As I read Kahneman’s book, which deals explicitly with issues of hiring, I could not help but think of how we hire faculty. I have now taught at several institutions in which I’ve participated in these decisions, and the process at each was similar. We individually read the materials and recommendation letters and reach preliminary judgments about: fit; interest and viability of research; scholarly achievement (or potential for it); potential to contribute to our curriculum; and “collegiality.” We then have a very unsystematic discussion of the candidates to arrive at a shortlist. Candidates visit for a day or two, we chat with them and hear them speak, and then we have another unsystematic discussion that (usually) yields a decision.
Does this process work? On the one hand, there are very few hires in which I have been involved that in retrospect I regret. On the other hand, the unsystematic way we make these decisions, and what come across in our meetings as clear biases based on what Kahneman calls the “halo effect” (i.e., because we like somebody for one reason we assign them higher ratings in other areas), make me uneasy. A spectacular lecture or charming interview might unduly help a weaker candidate, or vice versa. The question this raises is whether academic hiring, like the other kinds of hiring Kahneman discusses, would produce better decisions based on checklists and algorithms.
Kahneman discusses multiple cases in which algorithm-driven hiring was implemented, and each met with fierce resistance from those who controlled the hiring, even as it produced better overall results. Intuitively, I too cringe at the notion of hiring a colleague based on a checklist. But I wonder whether it would make me cringe more than the messy, often subjective, decision-making process we already practice. And I wonder, ultimately, whether it would work out for the best, and how we might track and know that.
This rumination does in any case lead to (what is for me) an interesting consideration of the still-hypothetical checklist itself: What would it look like, and what algorithm would process it? I might pursue this more in another post, but in the meantime I would be interested in hearing from others whether the entire idea of academic hiring in this manner is crazy.
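For the curious, here is one minimal sketch of what such an algorithm could look like, loosely in the spirit of Kahneman’s recommendation that each trait be rated independently before any overall impression is formed. The criteria are the ones our discussions already use; the weights and ratings are entirely hypothetical illustrations, not a proposal.

```python
# A minimal sketch of a checklist-scoring algorithm. Each candidate is rated
# 1-5 on each criterion independently (to blunt the halo effect), and the
# ratings are combined with fixed weights. Criteria names and weights below
# are hypothetical.

WEIGHTS = {
    "fit": 1.0,
    "research_viability": 1.5,
    "scholarly_achievement": 1.5,
    "curricular_contribution": 1.0,
    "collegiality": 0.5,
}

def score(ratings: dict) -> float:
    """Weighted sum of independent 1-5 ratings, one per criterion."""
    for criterion, rating in ratings.items():
        if criterion not in WEIGHTS:
            raise ValueError(f"unknown criterion: {criterion}")
        if not 1 <= rating <= 5:
            raise ValueError(f"rating out of range for {criterion}")
    return sum(WEIGHTS[c] * ratings.get(c, 0) for c in WEIGHTS)

def rank(candidates: dict) -> list:
    """Order candidate names by descending total score."""
    return sorted(candidates, key=lambda name: score(candidates[name]),
                  reverse=True)
```

The key design point, following Kahneman, is not the particular weights but the sequencing: committee members would record the per-criterion ratings before discussing candidates as wholes, so that a dazzling job talk cannot silently inflate every other score.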