
Bring the noise

Violating admissions criteria can help ensure they are effective

Our academic institutions and their programs of study are systems like any other: they accept inputs and transform them into outputs. In this case, the inputs are applicants, and the outputs are people who either graduate with some overall grade, drop out, or are kicked out.

Admissions criteria purport to filter out those who are deemed a priori not to have the capacity to complete a program of study. The criteria might seem reasonable on their face, but do they actually select correctly?

How did admissions criteria arise? Faced with limited capacity, academic institutions developed criteria to select the applicants to whom places in programs of study would be awarded. These criteria, however, are arbitrary. In almost all cases, admission to post-secondary education (PSE) programs depends most heavily on final secondary school grades. Yet study after study re-confirms that secondary school grades and standardized tests such as the SAT are little better than a flip of a coin at predicting PSE success.

There are two very important implications, then, of using grades and test scores to regulate admissions. First, students with low test scores might be prejudicially barred from PSE. Second, since better standardized test scores, and to a lesser extent school grades, have repeatedly been shown to correlate with socio-economic status, using them as selection criteria shifts this little-better-than-random selection away from the general pool of applicants and toward the wealthy. The same socio-economic differentiation likely applies to admission criteria such as interviews, where the well-coached and “put together” outperform their less polished, “DIY” counterparts.

How, then, might administrators come to sensible and less arbitrary admissions decisions? The answer might be fairly simple: intentionally and systematically violate each admissions criterion for a percentage of admitted applicants. The idea is to see whether those who don’t meet a criterion might succeed were they nevertheless admitted. There are two possible outcomes. An admissions criterion might turn out to be virtually useless. Alternatively, we might better quantify how individual criteria should factor into admission decisions – that is, how good or bad a predictor of “success” the criterion is, and what impact it should have on applicant ranking.
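To see how such an experiment could be analyzed, here is a minimal sketch in Python. Everything in it is invented for illustration: the applicant pool, the assumed (deliberately weak) link between test score and graduation odds, the cutoff of 75, and the 10 per cent violation slice are all assumptions, not real data or anyone's actual admissions policy.

```python
import random

random.seed(1)

# Hypothetical applicant pool: each applicant has a test score and a latent
# probability of graduating that is only weakly tied to that score
# (an assumption made for this sketch, not a claim about real cohorts).
applicants = [{"score": random.gauss(70, 10)} for _ in range(10_000)]
for a in applicants:
    a["p_graduate"] = min(max(0.5 + 0.004 * (a["score"] - 70), 0.0), 1.0)

CUTOFF = 75  # the admissions criterion under scrutiny (hypothetical)

passes = [a for a in applicants if a["score"] >= CUTOFF]
fails = [a for a in applicants if a["score"] < CUTOFF]

# Admit everyone who meets the criterion, plus a random 10% slice of those
# who do not -- the deliberate "violation" of the criterion.
violation_sample = random.sample(fails, k=len(fails) // 10)

def graduation_rate(group):
    """Simulate outcomes and return the fraction of the group that graduates."""
    outcomes = [random.random() < a["p_graduate"] for a in group]
    return sum(outcomes) / len(outcomes)

print(f"criterion-passers:        {graduation_rate(passes):.2%}")
print(f"admitted despite failing: {graduation_rate(violation_sample):.2%}")
# If the two rates are close, the cutoff adds little predictive value;
# a large gap quantifies how heavily it should weigh in applicant ranking.
```

The point of the sketch is only the comparison at the end: the gap between the two graduation rates is a direct measure of how much the criterion actually predicts.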

This idea was tested in the sixties and seventies at Williams College in Massachusetts. Over a ten-year double-blind experiment, ten per cent of admissions were drawn from applicants who failed to meet the college’s grade and test score criteria. The result: 71 per cent of those admitted in flagrante still graduated, down only 14 per cent from the college’s overall average. Seems like they were very much onto something.

As it turns out, this principle is used in an engineering field called system identification: when faced with a “black box” – a system whose internal workings you can’t figure out – you can still develop accurate predictions of how it will respond to inputs by injecting it with random inputs (a.k.a. “noise”) and seeing how it responds. In fact, applying this technique to systems thought to be well characterized sometimes elicits responses you had no idea were possible.
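Here is a small, purely illustrative sketch of that trick in Python with NumPy: a “black box” (secretly a short filter, with coefficients and noise level invented for the example) is driven with white noise, and its response characteristics are recovered from the input–output cross-correlation alone, without ever opening the box.

```python
import numpy as np

rng = np.random.default_rng(0)

# A "black box": we can feed it inputs and observe outputs, but we pretend
# not to see inside. Here it is secretly a short filter plus measurement noise.
true_impulse_response = np.array([0.0, 0.8, 0.5, 0.2, 0.05])

def black_box(u):
    clean = np.convolve(u, true_impulse_response)[: len(u)]
    return clean + 0.01 * rng.standard_normal(len(u))

# Inject white noise and record the response.
u = rng.standard_normal(20_000)
y = black_box(u)

# For a white-noise input, the cross-correlation between input and output is
# proportional to the impulse response -- the classic identification trick.
lags = len(true_impulse_response)
estimate = np.array(
    [np.dot(y[k:], u[: len(u) - k]) for k in range(lags)]
) / np.dot(u, u)

print("true:     ", np.round(true_impulse_response, 3))
print("estimated:", np.round(estimate, 3))
```

Feeding the box carefully chosen “typical” inputs would never reveal this much; it is precisely the randomness of the probing that makes the estimate unbiased.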

Why not treat our PSE system with similar analytical rigour? One might argue that it isn’t fair to exclude a few people from their “rightful” spot under current admissions criteria in the name of social experimentation. But if the criteria are flawed, then they may have been unfairly excluding others from a prized opportunity for decades – and that is exactly what the evidence points to.