The New York Times recently ran an excellent piece by Daniel Kahneman in the Sunday Magazine.
The piece deals with the “illusion of validity” – the human tendency to have confidence in judgments that have little or no statistical validity.
Kahneman uses confidence in mutual funds as a way to illustrate this tendency:
“Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year.”
I read this piece as having some pretty serious implications for work to improve college completion.
Specifically, I think we can apply this phenomenon to college placement assessments. These tests – which have been shown to have little or no predictive validity – are consistently used in high-stakes ways in our nation’s community colleges. Entering students are often unaware that these assessments even exist, which puts them at a deficit when they sit down to take the test. This wouldn’t be so damaging if colleges considered other measures of “college readiness” (e.g. high school GPA, high school test scores) when making their placement decisions. But in most cases they don’t. They tend to make their decisions based on the test alone. As Susan Headden details in her recent article in Washington Monthly College Guide, this sends mixed signals to students.
“Most Americans think of the SAT as the ultimate high-stakes college admissions test, but the Accuplacer has more real claim to the title…When students apply to selective colleges, they’re evaluated based on high school transcripts, extracurricular pursuits, teacher recommendations, and other factors alongside their SAT scores. In open admissions colleges, placement tests typically trump everything else. If you bomb the SAT, the worst thing that can happen is you can’t go to the college of your choice. If you bomb the Accuplacer, you effectively can’t go to college at all.”
We know that when students enter developmental education their odds of making it out are incredibly low (see for example here and here), so if we want to improve completion rates, our goal should be to place as few students as possible into these courses. It would follow, then, that college placement tests with low predictive validity ought not be used in such a high-stakes fashion.
So why does this behavior persist?
There are at least two clear culprits. The first is the instruments themselves. In measuring academic ability alone, these tests fail to account for the non-academic factors (i.e. tenacity, grit) that research suggests share a relationship with student success (see David Yeager’s summary of “productive persistence” here). The second is the cost associated with getting more data. Forthcoming research by Judith Scott-Clayton has demonstrated that using a test score and high school GPA together in the placement process is more useful than using a test score alone. But it costs money to collect these data, and this resource constraint makes it rare for community colleges to collect things like high school transcripts.
While we could think of ways to solve these problems (better assessment instruments, use of more data in the placement decision process), I think Kahneman’s piece suggests a more fundamental flaw in how we think about “readiness” in the first place. Until we confront the “illusion of validity” that exists in any assessment process, we’re not likely to uncover lessons that can lead to breakthrough solutions.