
A Major Study of Online Learning at Public Universities – The ITHAKA Study

Earlier this week, ITHAKA released its findings from a randomized study, conducted at six public universities, of Carnegie Mellon University's Open Learning Initiative (OLI) statistics course as delivered in a blended format. Their research found no credible differences in course completion or student learning outcomes between the students who took the face-to-face course and those who took the blended course. A link to the full report is here.  While the Gates Foundation didn't sponsor or fund the research, the results of this study have important implications for the technology-enabled learning and innovation work that we fund broadly via our Next Generation Models team and the Next Generation Learning Challenges partnership.  While the results of this study aren't necessarily novel, they do emphasize once again that intelligent learning technologies and digital cognitive tutors are at an early stage of development. And while there are many caveats, which even the researchers point out (i.e., this is a study of only one OLI course), this research is of tremendous importance to those of us who care about evidence-based policy and change.

Some highlights:

  • The cost and difficulty of executing this type of randomized experiment in higher education are high. While this experiment involved six campuses, complex institutional research requirements and the voluntary nature of student participation meant that ITHAKA was only able to assemble a sample of 605 students (total student enrollment in the course across all six campuses was over 3,000 students).  That sample is nevertheless large enough to support meaningful statistical comparisons.  In comparison, only 5 studies cited in the oft-cited DOE meta-analysis on online learning had sample sizes over 400 students.
  • ITHAKA worked with six public universities to create “control” and “treatment” groups at each institution to compare and measure student learning outcomes between an intro statistics course taught in the traditional face-to-face format vs. a course taught with OLI (coupled with a once-a-week face-to-face meeting).  The institutions that participated in the study were: CUNY Baruch College and CUNY City College, SUNY Albany, SUNY Institute of Technology, UM Baltimore County, and Towson University (also part of the University of Maryland system). While these institutions aren't a representative sample of all of American higher ed, they do serve populations that include both large commuting and residential student bodies, large numbers of minority students, and many students from low socioeconomic status families.  No community colleges were able to participate in the study despite ITHAKA’s attempts to include them.
  • Of great importance: the study found “no statistically significant differences in learning outcomes between students in the traditional- and hybrid-format sections…we can be quite confident that the ‘average’ effects were in fact close to zero.” Even when the researchers evaluated a variety of other potentially significant factors across subgroups (i.e., student data on race/ethnicity, gender, parental education level, college GPA, and number of hours worked for pay), they still found that “on average, students learned just as much in the hybrid formats as they would have had they instead taken the course in the traditional format.” Nor did they see large differences in outcomes across the various institutions. The research did reveal, however, that students who took the blended course spent an average of 25% less time completing it – similar learning outcomes to the face-to-face course in 25% less time. (See the sketch just after this list.)
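For readers who want to see what a finding of "effects close to zero" looks like mechanically, here is a minimal sketch of the kind of comparison involved. The data, group sizes, score scale, and variable names are all illustrative assumptions, not figures from the ITHAKA report:

```python
# Hypothetical sketch of the core comparison described above: post-test scores
# for students assigned to traditional vs. hybrid (OLI) sections.
# All values are simulated for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
traditional = rng.normal(loc=70, scale=10, size=300)  # simulated post-test scores
hybrid = rng.normal(loc=70, scale=10, size=305)       # same true mean: effect near zero

# Welch's t-test: does the mean outcome differ between formats?
t_stat, p_value = stats.ttest_ind(hybrid, traditional, equal_var=False)

# A simple effect size (Cohen's d) helps interpret "close to zero"
pooled_sd = np.sqrt((traditional.var(ddof=1) + hybrid.var(ddof=1)) / 2)
cohens_d = (hybrid.mean() - traditional.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.3f}")
```

When the true effect is near zero, you would expect a small t-statistic, a large p-value, and a Cohen's d near zero – which is essentially the pattern the ITHAKA team reports.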

Assessment, aggregated demand and the slow revolution

Yesterday, I spent the day learning more about the work of the Smarter Balanced Assessment Consortia (SBAC).

From their website:

The Smarter Balanced Assessment Consortium (Smarter Balanced) is a state-led consortium working to develop next-generation assessments that accurately measure student progress toward college- and career-readiness. Smarter Balanced is one of two multistate consortia awarded funding from the U.S. Department of Education in 2010 to develop an assessment system aligned to the Common Core State Standards (CCSS) by the 2014-15 school year.

Combined with the other consortium — Partnership for Assessment of Readiness for College and Careers (PARCC) — this represents a $330 million investment in next-generation assessments.

Over at Getting Past Go, Bruce Vandal has some thoughts on how this will impact postsecondary education.  While you’re there, check out the guest post from PARCC’s Allison Jones.

“The [PARCC and SBAC] assessment will determine the extent to which students have mastered the core competencies in Common Core State Standards identified by postsecondary education faculty as key to success in entry-level, credit-bearing courses in English and mathematics.  It will then signal placement into “General Education types” of English (101) and College Algebra if the student is college ready.”

This project will impact students all across this country and fundamentally change how we measure “college readiness,” but my fear is that postsecondary educators are not currently engaged at the level that they need to be.  Both consortia have robust plans in place for engagement, but more engagement can always help.

Pulse check: Completion by Design Texas

The PS team with leaders of the Texas CBD cadre

I just wrapped a day of very productive conversations with the Texas Completion by Design (CBD) cadre alongside my colleagues Suzanne Walsh and Bree Obrecht and our Director Hilary Pennington.  We were thrilled to be invited into this dialogue with representatives from each of the participating Texas CBD cadre colleges (Lone Star, Dallas, Alamo, South Texas, and El Paso) and the state policy lead, Educate Texas.  We were joined later in the meeting by Mike Collins (Jobs for the Future) – who is leading the state policy work across all four CBD states – and by Dr. Richard Rhodes (Austin Community College) who has agreed to chair the statewide advisory council for the initiative.

There is a lot to say about today’s meeting, but here are my top-level reactions:

  • Despite budgetary woes, TX colleges are thinking BIG.  Leaders in the room shared their initial thinking around pathway redesign, and it became clear that this will not be another set of pilot projects that struggle to scale.  College leaders have internalized the goal of significant change for most students at their institutions, and their thinking is grounded in a shared belief that changes must impact a significant number of students.
  • There are clear opportunities to do more work together.  We spent a good deal of time identifying places where the colleges might work more closely together.  These ranged from building the evidence base around priorities like dual enrollment, to developing “spec sheets” for the kinds of instructional and information technology solutions that could benefit the entire cadre, to pooling efforts around talent recruitment and development.  It was reassuring to see these colleges beginning to take stock of their collective assets and search for opportunities to do more together.  This model of collective intelligence is at the heart of the initiative.
  • Better real-time intelligence about student course-taking can dispel myths, unlock solutions, and enable action.  It was clear that this is a data-informed group of leaders.  Not surprisingly, there was strong demand for higher quality, more timely data about student course-taking patterns.  Leadership described “home-grown” solutions that they have seen (Valencia, Sinclair), but they are thinking bigger than this.  Could the TX cadre work with the vendor community to develop a shared asset that would allow for more customized, real-time data on student course-taking and attendance patterns?  The demand exists, but it is still unclear what this would look like.
  • The policy agenda is coming into focus.  Mike Collins (JFF) and Melissa Henderson (Educate Texas) facilitated a session with the CBD college leadership in which they worked to prioritize common areas of interest with regard to state policy.  The group has work to do to finalize this list, but it seems that they will continue dialogue around four key areas: (1) transfer policy, (2) performance funding, (3) articulating program of study requirements, and (4) data quality/use.  This is a broad agenda, and the group acknowledged that the key will be sequencing and communicating early and often with key stakeholders in the state.

Dr. Richard Rhodes (Austin Community College) and Michael Collins (JFF) discuss progress in Texas

While I’m upbeat about progress in Texas to date, I also realize that we did not have time to discuss some important outstanding questions: What is the magnitude of improvement (in retention and completion rates) that these colleges can reasonably accomplish over the next 2-4 years?  How do we move forward with an improvement agenda when there is still a fundamental disconnect in how colleges, members of the legislature, and the general public measure success?  How do we activate the broad base of college completion stakeholders in TX to ensure that the agenda has presence beyond this group of colleges and has the potential to truly transform postsecondary education in the state of Texas?  Given that we are unlikely to see new money coming into the system in the near future, how do colleges prioritize their efforts in a way that doesn’t sacrifice access or quality?

These questions are clearly bigger than a morning meeting and I know that they are on the minds of the TX cadre leadership.  Our hope is that we can begin to tackle these together between now and April when the cadre implementation plans are submitted.

Dispatch from the Lone Star State

I just got off the road from a couple of meetings in Texas — one of several states doing some truly innovative things in the college completion movement.  Here’s a summary of what I learned:

Texas Higher Education Coordinating Board.  My colleague Elizabeth Gonzalez and I spent an hour meeting with folks from the Coordinating Board.  We were joined by two of our partners working with us on the Completion by Design project – Melissa Henderson (Educate Texas) and Amy Welch (Lone Star College).  We covered a lot of ground and I found it to be really valuable.  Specific areas of interest included:

  • Diagnostic assessment.  THECB just announced an RFP for a statewide diagnostic assessment that will be more closely aligned with high school curricular standards (known as STAR).  This follows a trend we’ve seen in other states (VA, FL, NC), but – given the size of Texas – it represents the biggest opportunity in terms of impact on large numbers of students.  Their plan is to select an instrument this summer, pilot this fall, and move to statewide implementation in fall of 2013.  THECB staff are very knowledgeable on this topic and demonstrated incredible thoughtfulness in their RFP design.  In addition to being diagnostic and aligned with high school learning standards, they have built in plans to recalibrate the assessment on a regular basis because they know that this should be a continuous improvement endeavor rather than a “one time fix.”  Their north star is improving alignment around definitions of “college readiness,” which should help smooth out the transition from high school to college.  This seems to be the right goal and – if implemented well – could create real opportunity for large scale change.
  • Performance funding.  The Coordinating Board is currently considering various models of performance funding (including WA’s “momentum funding”).  Though they are unsure what this might look like in TX, it was clear that there is an appetite to rethink how institutions are funded.  This is being driven in part by budgetary realities and in part by a desire – on the part of the Coordinating Board and some college-level leaders – to better recognize colleges that do a good job of helping students persist and complete.  They are facing a couple of key design questions at the moment (should the model use base funding or incentive funding?  How can the model avoid unintended consequences like “creaming” and grade inflation?), which they will continue to troubleshoot.


  • Vertical alignment teams.  I was amazed to learn that THECB and TEA have been collaborating on improving instructional and programmatic coherence across K-20 through faculty-led “vertical alignment teams.”  These teams (which are connected in some ways to the Pathways project) are working to make sure that expectations for student success are clearly spelled out at each step of the way.  This has been a tricky enough dance at the K12 level, and I tip my hat to them for bringing this conversation to the higher education sector.  At a local level, our Communities Learning in Partnership (CLIP) and Partners for Postsecondary Success (PPS) district and community college partners have begun similar pilot efforts through faculty-led professional learning communities. Almost all sites are addressing curriculum alignment for math and English.  Furthermore, as our work in Completion by Design has demonstrated, many students do not know what courses they need to take to get the credential they want.  Projects like this one are a great starting point for answering some essential questions: (a) what courses are required to complete which programs, and (b) what competencies are inside those courses?

Elizabeth and Melissa before our meeting with the Coordinating Board

UT Austin – Center for Community College Student Engagement (CCCSE).  I also had the good fortune of spending a day with Kay McClenney and her team at CCCSE as part of a Technical Advisory Board.  This team is sitting on an unreal amount of data collected through their four survey instruments (two targeted at students, one at faculty, and one at college leaders).  The task at hand for the advisory board was to help their team think through the best applications and topics for future inquiry.  The conversation was productive, but not without its challenges.  The CCCSE team is on the cusp of unearthing what I think are three important pieces of knowledge:

  • A better taxonomy of institutional practice.  With their new institutional survey instrument, CCCSE has been able to decode the component parts of some common community college interventions (e.g. learning communities, student success courses, etc.).  Practitioners have known for a long time that there is tremendous variation across colleges in the composition of these interventions, but they have never been able to describe precisely how these practices vary.  Thanks to CCCSE’s work, we’re close to uncovering a more precise taxonomy, which includes things like study skills, learning style assessments, and tutoring.  This can help cut across the “fads” and nicknames of community college practices to get to the heart of the matter.  Colleges can call these things what they will – CCCSE is interested in learning about the component parts and services that students are actually receiving.  Their team is still struggling with how best to empirically validate the specific components that they are asking colleges to identify, which will be a critical step.  To quote one participant: “the hardest part of any factor analysis is naming the factors.”
  • The relationship between institutional practice and student engagement.  The institutional survey will give CCCSE additional explanatory power when it comes to understanding what colleges can do to better engage their students.  By matching institutional survey responses with student responses about their relative levels of engagement, they will be able to identify the specific institutional practices that are most closely related to student engagement (and, importantly, those that are not).


  • The relationship between student engagement and student outcomes.  Once these two pieces of the puzzle have been solved (no easy task!), CCCSE will be able to take on what their team sees as the biggest opportunity of all: uncovering the relationship between student engagement and actual student outcomes (e.g. course completion rates, term-to-term persistence, etc.).  I was surprised to learn that – beyond logical assumptions – we don’t really have any empirical evidence of a relationship between engagement and outcomes.  It’s not clear yet when this will happen or at which colleges, but their goal is clear: they want to match student survey data to transcript-level student unit record data to better connect the dots between engagement and outcomes.  This has the potential to be a huge value-add to the research community and to feed into a tool that provides feedback to college leaders.

A conversation about a conversation….


Last week I attended a symposium at MIT on the topic of quality in online learning.  In addition to visiting this amazing local museum and hearing the talents of MIT’s oldest vocal ensemble, I had a chance to participate in a whirlwind 48-hour idea and design fest devoted to quality in online learning. Jointly organized by our colleagues in the College Ready team and MIT’s Office of Educational Technology and Innovation, the meeting brought together about 80 individuals (grantees and non-grantees, including researchers and faculty, representatives from online learning solutions and tools, and representatives from grantee school districts and CMOs). The stated goal of the symposium was to “launch a national conversation on the quality of online learning in K-12 education.”

The meeting agenda, list of participants, and presentations have been posted online. In addition (if you’re interested), you can also view the Twitter stream from the full two days. The meeting was organized around a half day of presentations. Panels included a number of CRW grantees (including Arizona State University and Creative Commons) as well as representatives from industry and the broader online learning sector (Bror Saxberg from Kaplan and Judy Codding from the Pearson Foundation). The panels focused on three broad topics:

  • Immersive Gaming / Focused Tutoring
  • Quality Assessment: Assessing Course Quality
  • Scaling Quality

The symposium also brought together a panel consisting of four high school students and two teachers from Highline School District and Bellevue School District, as well as a team of researchers led by Dr. Phillip Bell from the University of Washington’s College of Education. The student/faculty panel discussed in detail their use of Educurious in the classroom. A core highlight from this session was learning first-hand how the resource enables students to tap into a network of accomplished and seasoned professionals (as thought partners and information resources) over the course of completing particular assignments and research problems. It was also really interesting to hear how teachers felt “relieved” to have a resource like Educurious to depend on: it reduced the pressure they felt to know everything and, more importantly, empowered them to connect students to other external resources that could provide the expertise and information they needed. The students’ excitement and engagement in their learning and the process was also palpable and very cool.

In a talk titled “The Opportunity and Challenge of Pervasive Quality,” Jim Shelton pushed us all to create more durable and lasting learning innovations that benefit our students, faster. Mr. Shelton first spoke about two recurring challenges regarding innovation in education:

  • A tendency to think learning will happen because of innovations in hardware (i.e., magical device + student = improved student learning and mastery)
  • A tendency to focus on the perfect rather than the good – we tend to focus on “what would be great” and swirl in circles rather than focusing on “what can be great”

While he acknowledged the existence of weighty academic debates regarding the attributes and characteristics of a high-quality online learning experience, Mr. Shelton (ever the pragmatist) asked us to focus on the most important factor: find, test, build, and scale learning experiences and environments that first and foremost deliver better student learning outcomes and mastery. Online learning should be about learning – not bells and whistles.

Learning. Plain and simple.

P.S. I expect that in a couple of months we will see the outcomes and learning from the workshop delivered back to the public for further discussion.

P.P.S. A special shout out to colleagues in our College Ready team for organizing such a great symposium.


Weak solutions to the wrong problem: why we should stop reforming developmental education

I am increasingly convinced that most attempts to reform developmental education are misguided and ultimately destined to produce marginal or no improvements.

This is not born of growing pessimism or an attempt to diminish the critical contribution of developmental education innovators all across the country.  It is rather a deepening of my appreciation that framing the problem matters.

Yesterday, I sat in on a call with some of the country’s leading voices in the community college reform movement – folks from Complete College America, Jobs for the Future, the Community College Research Center, the Education Commission of the States, and the Charles A. Dana Center.  Our task at the top of the call seemed simple enough: to distill the key 5-7 principles of effective community college developmental education systems.  Our motivation was to develop this list as a common answer to the question that we are all constantly bombarded with: “how do we fix the problem of developmental education?”

Developmental education is one of the most notorious bottlenecks in the P-20 pipeline (the “Bermuda triangle” as some would have it — or, more forcefully, the place “where many dreams die”), and our respective organizations have invested a lot of time, money, and energy in increasing student success in these courses.  State and institutional leaders in the field are under increasing pressure to produce more graduates — and to do so at a higher standard of quality with fewer resources.  Add to this the rising attention that developmental education has been receiving on the national ed reform platform, and it seemed only logical that these national experts should be able to distill – at a high level – the common design principles of systems that help low-income, first-generation students succeed.  Given our shared work and points of view, this should not have been a difficult task.  Or so we thought.

As we got into the discussion we quickly discovered that solving the developmental education crisis is a false premise.  Pursuing this path would only serve to prop up the current, ineffective and inefficient system that we have today and would ultimately produce, as one participant put it, more “weak solutions to the wrong problem.”  What we discovered is that as long as developmental education remains a problem that those students have and that those (mostly adjunct) faculty members need to fix, we will not make meaningful headway on actual outcome improvement.

The sad reality is that even students who enter a community college and do not need developmental education fail at alarmingly high rates (a recent CCRC analysis using data from Completion by Design colleges puts the 5-year completion figure for these students below 50%, compared to less than 20% for students with high remedial need.  This includes all credentials, and yes – the analysis even uses NSC data to account for students who transfer to another college).

So what is the path forward?  If solving the developmental education crisis is the wrong premise, what’s the right one?  This is emergent thinking for our group, but I think we’re on our way.

At the foundation, we’ve been talking about “early momentum to a credential” as a way to frame the task at hand less around those courses and more around helping students make measurable progress toward the ultimate goal of credential completion in a reasonable amount of time.  CCRC’s recent work emphasizes enrollment in a program of study as the new standard for success in the first year of community college (and other work by Judith Scott-Clayton importantly questions whether we should in fact consider community colleges “open access” institutions when so many students are forced to take non-college courses upon entry).  ECS, Jobs for the Future, and Complete College America frequently describe the need to “blow up” developmental education and help colleges reconsider the questionable distinctions they make between “developmental” and “college ready” students.  In partnership with the Carnegie Foundation for the Advancement of Teaching, the Dana Center is questioning the curricular integrity and applicability of pre-college programs altogether.

Amidst these points of view is a path forward.  I’m optimistic that – with continued hard work and collaboration – we will find it.

Where are the community college “sense-makers”?

Community college leaders are under tremendous pressure to improve the rates at which students complete credentials – and to do so in less time, with fewer resources.  The mission of the community college further dictates that they must accomplish this goal without sacrificing quality or access.  Further pressure is applied when viewing the work of the community college through a social justice or economic competitiveness lens, which maintains that the country needs improved completion – in less time, with fewer resources, without sacrificing quality or access – while simultaneously eliminating gaps in outcomes between low-income students and their more affluent peers.

Put differently, the completion agenda makes “access for all” look easy.

Fortunately, community colleges are, by and large, up to the challenge.  For the last three years, my team has been working with completion-focused college leaders across the country who are moving mountains to redesign student pathways focused on improved completion (see for example the Completion by Design project).  Through these conversations, we’ve begun to uncover patterns in the challenges that community colleges are up against.  Among other things, college leaders often reference redesigning developmental education, integrating technology to improve instruction and operations, and providing adequate amounts of effective faculty development as key areas of concern.  While I am sometimes discouraged by the scope of this “list of worry,” I am encouraged to realize that these problems tend to cluster into a manageable group.  And, more importantly, the problems are solvable.

My team is doing what we can to support the colleges that we can, but our resources are limited, and the philanthropic sector is (depending on who you ask) designed to provide resources to support upfront innovation — not ongoing operation.  While I believe in the work that we are doing, in the back of my mind I can’t help but think that there’s a more fundamental conversation that we need to engage: what is the sustainable model for supporting community colleges’ improvement efforts over the long haul?

There are typically two answers to this question: (1) government or (2) the market.

On the government side, I can see few reasons for hope in the near term.  State budgets are declining, and belts are tightening around higher education spending at nearly unprecedented levels (along with healthcare and K12 education).  As a result, state higher education agencies – which represent one potential avenue for the sustainable provision of support to colleges in their state – are facing reductions in force, which greatly limit their ability to help colleges do things like report and analyze data and tap into the national knowledge base of “best practice” models (in NC, 19 positions were recently cut from the state community college system office).

This leaves the market.  Here we find a wide range of for-profit and nonprofit service providers that offer a range of services to community colleges.  Generally speaking, the relative health of this market is a big unknown to us.  We don’t have, for example, any systematic source of information that can tell us:

  • The specific areas (e.g. data use) in which community colleges are currently seeking support from outside service providers
  • The availability and uptake of services in these areas
  • The relative quality and affordability of services in these areas

Without this knowledge it is hard to know if community colleges will be able to meet the demands of the completion challenge.

So where are the “sense makers” that can help community colleges as they work to address key barriers to improved completion?  Over the last few months, we’ve been partnering with a team from FSG Social Impact Consultants to begin to wrap our heads around the current landscape of service providers in the community college sector.  Their team administered an original survey to all community colleges in the country and spoke one-on-one to over 40 leaders from community colleges and support organizations from across the country.  I’ll share some highlights from their findings here over the next month or so.  You can bet that some of this will surprise you.

Differentiated intervention: a recommender approach to supporting “early momentum”

Over the past three-plus years, I’ve been working with community college leaders across the country to improve student success in developmental (or remedial) courses.  For nearly all of this time, we’ve been grappling together with some variation of the same central questions: which practices work?  How do we bring them to scale across the college and across the state?

On our quest, my team and I have been turning over every possible stone in search of the best model and path to scale.  We’ve seen nearly every model imaginable: from learning communities to student success courses; technology-enabled modular and accelerated approaches; models that use intensive “wrap around” services; models that reinvent the curriculum; and models that simply put the students in college level courses and provide additional support (sometimes called “mainstreaming”).

Knowing that we’re on this pursuit, college leaders often ask: what’s working?  What are the models that we should bring to scale?

My response has varied from time to time, usually informed by a recent study that I had read or an exciting model that I had just seen in action.  In all cases, I was careful to use caveats that would limit the risk of over-generalization – this is a limited sample; it was only piloted with a few course sections; it may not be right for all students in all situations – but I always gave an answer.  I felt a need to share what I was learning, even if it was imperfect.

More recently, my response has changed.  Which model “works?”  All of them.  Our job should not be to choose any one or two universal approaches, but to match the right students with the right intervention for the right length of time.

If we continue to search for the unassailably better, infinitely scalable model my fear is that we will come up empty-handed.  Our work should not be oriented around finding and scaling individual interventions, but to supporting colleges to actively differentiate their approach for individual students based on their individual needs and likelihood of success.  Scaling a process of “differentiated intervention” – wherein we diagnose student need through a data rich process (ideally including noncognitive variables of college readiness) and “match” her with the intervention that is most likely to meet her needs – has the potential to help college leaders move beyond asking “what” they should do and enter into (infinitely more interesting) questions like “for whom?,” “for which length of time?” and “under what conditions?”

Though this has snuck up on me as a “stupi-phany,” I’m lucky that community colleges (and some innovators in the K12 sector) have been there all along: they already segment their entering student population (into “college ready” or one of an array of “developmental” courses).  They already offer an array of instructional approaches that serve specific student needs.

Our quest should be to help these leaders (a) understand student “readiness” in more nuanced ways and (b) help them make the best possible referral decision with the information that they have available.  Shifting to this new focus puts new demand on a different set of tools – improved diagnostic assessments, deeper analysis of student pathways, and decision support tools for institutional leaders and faculty.

New question: If Netflix can recommend a movie I might like and Orbitz can find me the best flight, can’t colleges recommend an instructional strategy that is most likely to help me complete?

Hypothesis: Of course they can, but they need (a) help understanding student needs in more nuanced ways, (b) a portfolio of high quality interventions, (c) a sophisticated system for matching students to interventions, and (d) robust feedback loops that can help refine this “matching” process over time.
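To make the hypothesis concrete, here is a minimal sketch of what a “matching” engine might look like. It is purely illustrative – the feature names, intervention labels, and simulated history are all assumptions, not a description of any existing system: train a simple outcome model per intervention on historical data, then rank interventions for a new student by predicted completion probability.

```python
# Illustrative sketch: recommend the developmental-ed intervention with the
# highest predicted completion probability for a student, given historical
# outcomes. Feature names, interventions, and data are hypothetical.
from dataclasses import dataclass
import numpy as np
from sklearn.linear_model import LogisticRegression

@dataclass
class Student:
    placement_score: float
    hs_gpa: float
    noncognitive_index: float  # e.g., a "productive persistence" measure

    def features(self) -> np.ndarray:
        return np.array([self.placement_score, self.hs_gpa, self.noncognitive_index])

def train_per_intervention_models(history):
    """history: {intervention: (X, completed)} from past cohorts."""
    models = {}
    for intervention, (X, y) in history.items():
        models[intervention] = LogisticRegression(max_iter=1000).fit(X, y)
    return models

def recommend(models, student: Student):
    """Return interventions ranked by predicted completion probability."""
    x = student.features().reshape(1, -1)
    scored = {name: m.predict_proba(x)[0, 1] for name, m in models.items()}
    return sorted(scored.items(), key=lambda kv: kv[1], reverse=True)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    history = {}
    for name in ["accelerated", "corequisite_support", "learning_community"]:
        X = np.column_stack([
            rng.normal(60, 15, 200),    # placement score
            rng.normal(2.8, 0.5, 200),  # high school GPA
            rng.normal(0.5, 0.2, 200),  # noncognitive index
        ])
        y = (0.01 * X[:, 0] + 0.5 * X[:, 1] + X[:, 2] + rng.normal(0, 0.5, 200) > 2.5).astype(int)
        history[name] = (X, y)
    models = train_per_intervention_models(history)
    print(recommend(models, Student(placement_score=45.0, hs_gpa=2.8, noncognitive_index=0.6)))
```

The hard work, of course, lives in (a) and (d): building diagnostic measures worth feeding into a model like this, and closing the feedback loop so the match gets better over time.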

College placement and the “illusion of validity”

The New York Times recently ran an excellent piece by Daniel Kahneman in the Sunday Magazine.

The piece deals with the “illusion of validity” – the human tendency to have confidence in judgments that have little or no statistical validity.

Kahneman uses confidence in mutual funds as a way to illustrate this tendency:

“Mutual funds are run by highly experienced and hard-working professionals who buy and sell stocks to achieve the best possible results for their clients. Nevertheless, the evidence from more than 50 years of research is conclusive: for a large majority of fund managers, the selection of stocks is more like rolling dice than like playing poker. At least two out of every three mutual funds underperform the overall market in any given year.”

I read this piece as having some pretty serious implications for work to improve college completion.

Specifically, I think we can apply this phenomenon to college placement assessments.  These tests – which have been shown to have little or no predictive validity – are consistently used in high-stakes ways in our nation’s community colleges.  Entering students are often unaware that these assessments even exist, which puts them at a deficit when they sit down to take the test.  This wouldn’t be so damaging if colleges considered other measures of “college readiness” (e.g. high school GPA, high school test scores) when making their placement decisions.  But in most cases they don’t.  They tend to make their decisions based on the test alone. As Susan Headden details in her recent article in the Washington Monthly College Guide, this sends mixed signals to students.

“Most Americans think of the SAT as the ultimate high-stakes college admissions test, but the Accuplacer has more real claim to the title…When students apply to selective colleges, they’re evaluated based on high school transcripts, extracurricular pursuits, teacher recommendations, and other factors alongside their SAT scores. In open admissions colleges, placement tests typically trump everything else. If you bomb the SAT, the worst thing that can happen is you can’t go to the college of your choice. If you bomb the Accuplacer, you effectively can’t go to college at all.”

We know that when students enter developmental education, their odds of making it out are incredibly low (see for example here and here), so if we want to improve completion rates, our goal should be to place as few students as possible into these courses.  It would follow, then, that college placement tests with low predictive validity ought not to be used in such a high-stakes fashion.

So why does this behavior persist?

There are at least two clear culprits.  The first is the instruments themselves.  In measuring academic ability alone, these tests fail to account for the non-academic factors (e.g. tenacity, grit) that research suggests share a relationship with student success (see David Yeager’s summary of “productive persistence” here).  The second is the cost associated with getting more data.  Forthcoming research by Judith Scott-Clayton has demonstrated that using a test score and high school GPA together in the placement process is more useful than a test score alone.  But it costs money to collect these data, and this resource constraint makes it rare for community colleges to collect things like high school transcripts.
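As a purely illustrative sketch (simulated data and made-up coefficients, not Scott-Clayton’s analysis), here is how one might check whether adding high school GPA to a placement test score improves prediction of success in a college-level course:

```python
# Illustrative only: compare the predictive value of a placement test score
# alone vs. test score + high school GPA. Data is simulated for the sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000
test_score = rng.normal(60, 15, n)
hs_gpa = rng.normal(2.8, 0.5, n)
# Assume (for illustration) that success depends on both signals, with noise
success = (0.02 * test_score + 1.0 * hs_gpa + rng.normal(0, 1, n) > 4.0).astype(int)

X_test_only = test_score.reshape(-1, 1)
X_both = np.column_stack([test_score, hs_gpa])

for label, X in [("test score only", X_test_only), ("test score + HS GPA", X_both)]:
    X_tr, X_te, y_tr, y_te = train_test_split(X, success, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{label}: AUC = {auc:.3f}")
```

In simulated data where both signals matter, the two-variable model shows the higher AUC – the same directional finding described above, without any claim about actual magnitudes.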

While we could think of ways to solve these problems (better assessment instruments, use of more data in the placement decision), I think Kahneman’s piece suggests a more fundamental flaw in how we think about “readiness” in the first place.  Until we can confront the “illusion of validity” that exists in any assessment process, we’re not likely to uncover lessons that can lead to breakthrough solutions.

Looking under the hood of student engagement with CCCSE

TRIP REPORT: Center for Community College Student Engagement (CCCSE)

I recently visited with Kay McClenney and her team at the Center for Community College Student Engagement (CCCSE). 

Some things you might not know about CCCSE:

  • Located at UT-Austin in the Community College Leadership Program
  • Employs 28 people all working on survey administration and/or technical assistance to help colleges digest and respond to survey findings
  • Administers four surveys each year (2 student surveys, 1 faculty survey, and 1 for college leaders)
  • Has surveyed over 2M students in 819 colleges in 49 states since 2002
  • Makes “IR in a box” services available to all participating colleges

We are working with CCCSE to help them enhance their annual surveys by adding a special item module that will help us (a) better understand which specific practices individual colleges are deploying to try and improve student engagement and (b) better understand the link between student engagement and improvements in student outcomes.
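A minimal sketch of what piece (b) could look like in practice – with hypothetical column names and toy data, not CCCSE’s actual files or pipeline – is simply a join of survey responses to unit-record outcomes, followed by descriptive analysis:

```python
# Hypothetical sketch: match student survey responses to unit-record outcomes
# and look at the engagement-to-outcomes relationship. Column names and
# values are illustrative assumptions only.
import pandas as pd

survey = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "engagement_score": [3.2, 4.1, 2.5, 3.8],  # composite from survey items
    "reported_practices": ["learning_community", "none", "success_course", "tutoring"],
})

unit_records = pd.DataFrame({
    "student_id": [101, 102, 103, 104],
    "credits_attempted": [12, 15, 9, 12],
    "credits_completed": [9, 15, 3, 12],
    "persisted_next_term": [1, 1, 0, 1],
})

merged = survey.merge(unit_records, on="student_id", how="inner")
merged["completion_ratio"] = merged["credits_completed"] / merged["credits_attempted"]

# Simple descriptive cut: does engagement track course completion and persistence?
print(merged[["engagement_score", "completion_ratio", "persisted_next_term"]]
      .corr(method="spearman"))
```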

Some things you might not know about this project:

  • Jointly funded with the Lumina Foundation
  • Designed to help organize and map diffusion of the various practices that colleges are employing to improve student engagement
  • Will formulate a view of “high impact” practices from a composite of special item modules embedded in all four CCCSE surveys
  • Will include a matching of survey responses to student unit record data in a subset of colleges (still TBD)
  • Includes a technical advisory panel that will fine tune survey design

The meeting was a great opportunity to get better acquainted with CCCSE’s organizational structure, mission, and vision.  Over the long term, they want to build out their institutes (perhaps taking a regional approach) to serve as annual opportunities for community college leaders who are serious about using data to redesign the student’s first-year experience (i.e. help them gain “early momentum”).

The meeting left me thinking about all of the intersections that CCCSE’s work might have with the Completion by Design project.

Other items of note:

  • Byron McClenney joined us for the last hour and gave an update on a powerful project that he is leading for the Houston Endowment, designed to improve the transition from high school to college in the Houston area.  The Institute for Evidence Based Change (the national Cal-PASS model) is providing data support for this work.
  • Both Kay and Byron encouraged us to engage trustees as agents of change in the completion movement.  The unique positioning of trustees creates opportunities to reach “down” to presidents seeking support to pursue completion agendas and “up” into the state policy infrastructure.

For more about CCCSE, check out the 2010 CCCSE National Report, The Heart of Student Success: Teaching, Learning, and College Completion.
