Over the past three-plus years, I’ve been working with community college leaders across the country to improve student success in developmental (or remedial) courses. For nearly all of this time, we’ve been grappling together with some variation of the same central question: which practices work? How do we bring them to scale across the college and across the state?
On our quest, my team and I have been turning over every possible stone in search of the best model and path to scale. We’ve seen nearly every model imaginable: from learning communities to student success courses; technology-enabled modular and accelerated approaches; models that use intensive “wrap-around” services; models that reinvent the curriculum; and models that simply put students in college-level courses and provide additional support (sometimes called “mainstreaming”).
Knowing that we’re on this pursuit, college leaders often ask: what’s working? What are the models that we should bring to scale?
My response has varied from time to time, usually informed by a recent study I had read or an exciting model I had just seen in action. In all cases, I was careful to add caveats to limit the risk of over-generalization: this is a limited sample; it was only piloted with a few course sections; it may not be right for all students in all situations. But I always gave an answer. I felt a need to share what I was learning, even if it was imperfect.
More recently, my response has changed. Which model “works?” All of them. Our job should not be to choose any one or two universal approaches, but to match the right students with the right intervention for the right length of time.
If we continue to search for the unassailably better, infinitely scalable model, my fear is that we will come up empty-handed. Our work should not be oriented around finding and scaling individual interventions, but around supporting colleges to actively differentiate their approach for individual students based on their individual needs and likelihood of success. Scaling a process of “differentiated intervention” – wherein we diagnose a student’s needs through a data-rich process (ideally including noncognitive variables of college readiness) and “match” her with the intervention that is most likely to meet those needs – has the potential to help college leaders move beyond asking “what” they should do and enter into (infinitely more interesting) questions like “for whom?,” “for which length of time?” and “under what conditions?”
Though this has snuck up on me as a “stupi-phany,” I’m lucky that community colleges (and some innovators in the K-12 sector) have been there all along: they already segment their entering student population (into “college ready” or one of an array of “developmental” courses). They already offer an array of instructional approaches that serve specific student needs.
Our quest should be to help these leaders (a) understand student “readiness” in more nuanced ways and (b) make the best possible referral decisions with the information they have available. Shifting to this new focus places new demands on a different set of tools: improved diagnostic assessments, deeper analysis of student pathways, and decision-support tools for institutional leaders and faculty.
New question: If Netflix can recommend a movie I might like and Orbitz can find me the best flight, can’t colleges recommend an instructional strategy that is most likely to help me complete?
Hypothesis: Of course they can, but they need (a) help understanding student needs in more nuanced ways, (b) a portfolio of high-quality interventions, (c) a sophisticated system for matching students to interventions, and (d) robust feedback loops that can help refine this “matching” process over time.
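To make the hypothesis concrete, here is a minimal sketch of what a “matching” system with a feedback loop might look like. Everything in it is hypothetical: the intervention names are drawn loosely from the models mentioned above, and the readiness signals (a placement score plus a noncognitive “motivation” measure) and thresholds are invented for illustration, not taken from any real college’s data or rules.

```python
# Hypothetical sketch of "differentiated intervention" matching.
# Intervention names, readiness signals, and thresholds are invented
# for illustration; a real system would be built from institutional data.

def match_intervention(student):
    """Recommend an intervention from a hypothetical portfolio,
    using a placement score plus a noncognitive readiness signal."""
    if student["placement_score"] >= 70 and student["motivation"] >= 0.6:
        return "mainstreaming"           # college-level course + extra support
    if student["motivation"] < 0.4:
        return "student_success_course"  # build noncognitive skills first
    if student["placement_score"] >= 50:
        return "accelerated_modular"     # targeted, technology-enabled review
    return "learning_community"          # intensive wrap-around support

def update_outcomes(outcomes, intervention, completed):
    """Feedback loop: track completions per intervention so the
    matching rules can be refined over time."""
    done, total = outcomes.get(intervention, (0, 0))
    outcomes[intervention] = (done + int(completed), total + 1)
    return outcomes

# Usage: refer one (hypothetical) student, then record her outcome.
student = {"placement_score": 72, "motivation": 0.8}
referral = match_intervention(student)
outcomes = update_outcomes({}, referral, completed=True)
```

The point of the sketch is the shape of the system, not the rules themselves: the referral logic is explicit and inspectable, and the outcome data it accumulates is exactly what a college would need to keep asking “for whom?” and “under what conditions?”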