Everyone in higher education knows that the best way to measure student success is first-year retention. Or is it?
First-year retention (FYR) is a federally defined metric measuring the percentage of first-time, full-time students who return to the same institution for a second fall. It’s meant to allow for apples-to-apples comparisons between different institutions or to show longitudinal progress at a single institution. You might also see it used to compare subgroups of students.
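The calculation behind FYR is simple cohort math. Here's a minimal sketch; the field names (`first_time_full_time`, `returned_second_fall`) are illustrative placeholders, not an official IPEDS schema.

```python
# Minimal sketch of a first-year retention (FYR) calculation.
# Field names are illustrative, not an official reporting schema.

def first_year_retention(students):
    """Percent of first-time, full-time students who return for a second fall."""
    cohort = [s for s in students if s["first_time_full_time"]]
    if not cohort:
        return 0.0
    returned = sum(1 for s in cohort if s["returned_second_fall"])
    return 100.0 * returned / len(cohort)

fall_cohort = [
    {"first_time_full_time": True,  "returned_second_fall": True},
    {"first_time_full_time": True,  "returned_second_fall": False},
    {"first_time_full_time": True,  "returned_second_fall": True},
    {"first_time_full_time": False, "returned_second_fall": True},  # transfer: excluded from FYR
]
print(round(first_year_retention(fall_cohort), 1))  # 2 of 3 cohort members returned -> 66.7
```

Note that the transfer student in the example is simply invisible to the metric, which is exactly the blind spot the alternatives below address.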
FYR has become the gold standard for tracking and measuring student success, but it comes with some key weaknesses. FYR shows you just a narrow slice of students at one point in time, so you’ll want to complement it with other metrics that look at more students or that follow a single cohort over a longer span of time, up to graduation and beyond. Here are some of the most common alternatives to FYR.
Metrics that include more students
Because FYR only tracks first-time, full-time students, it gives you no information about the other 83% of students who are sophomores, juniors, seniors, transfers, or part-time starters.
To get a broader picture of student success at your institution, here are two alternatives to FYR you can use:
Next-term persistence is increasingly preferred by two-year schools and some four-year schools that serve large numbers of transfer and part-time students who would otherwise be missed by FYR. Most often, this metric simply tracks what percentage of all students return for another term (excluding those who graduate). Like FYR, it can be applied to whole institutions or to subgroups.
Next-term registration is similar to persistence, but updates in real time as students register for courses. This is an important distinction for institutions that closely track the effectiveness of their outreach efforts aimed at clearing registration barriers. Registrations can also be used to predict upcoming retention and graduation rates by comparing current numbers to the same day or week in prior years.
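Both metrics above are straightforward to compute. A hypothetical sketch of each, with invented field names and enrollment numbers, might look like this:

```python
# Illustrative sketch of next-term persistence and a same-week registration
# comparison. All field names and numbers are hypothetical.

def next_term_persistence(students):
    """Percent of continuing-eligible students (graduates excluded) who return."""
    eligible = [s for s in students if not s["graduated"]]
    if not eligible:
        return 0.0
    returned = sum(1 for s in eligible if s["returned_next_term"])
    return 100.0 * returned / len(eligible)

students = [
    {"graduated": False, "returned_next_term": True},
    {"graduated": False, "returned_next_term": True},
    {"graduated": False, "returned_next_term": False},
    {"graduated": True,  "returned_next_term": False},  # graduate: excluded
]
print(round(next_term_persistence(students), 1))  # 2 of 3 eligible returned -> 66.7

# Registration pacing: compare this week's registration count to the same
# week in prior years to flag whether enrollment is tracking ahead or behind.
same_week_prior_years = {2022: 4180, 2023: 4310}
current = 4150
baseline = sum(same_week_prior_years.values()) / len(same_week_prior_years)
print(f"{100.0 * current / baseline - 100:+.1f}% vs. prior-year pace")
```

The key definitional choice is excluding graduates from the denominator; otherwise a school with strong completion rates would look like it has a persistence problem.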
Metrics that follow one cohort over a longer time
Another weakness of the first-year retention rate is that it doesn’t do a great job of predicting the graduation rate of the same cohort. An EAB data analysis found a positive but very weak correlation between FYR improvement and improvement in the six-year graduation rate at four-year institutions.
On top of that, the six-year graduation rate (or three-year rate at two-year schools) has weaknesses of its own. It measures the same first-time, full-time cohort as the FYR metric, so it misses a lot of students. It also doesn’t tell us much about time to degree. Here are some alternatives to consider instead:
Two-year/four-year graduation rate tracks the percent of first-time, full-time students who complete in “normal” time, and thus is often used by schools with traditional-aged and full-time student bodies. Comparing these rates against the three-year/six-year graduation rate for the same cohort can indicate how good a school is at graduating students on time. This comparison loses power at schools with large numbers of transfer and part-time enrollees.
Time to degree is often used to provide more nuance to a discussion of on-time graduation. It recognizes that a school can make important improvements to graduation timelines without seeing much movement in the on-time graduation metric. This metric is widely used, especially at schools that serve a large part-time population.
Credits to degree is closely related to time to degree, but specifically looks at degree efficiency. Students who take more than the minimum number of credits (typically 120 credits for most programs at four-year schools) paid for courses that they didn’t ultimately need. Schools often look at this metric when they are exploring ways to make college more affordable.
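The excess-credit arithmetic behind the credits-to-degree metric can be sketched in a few lines. The 120-credit minimum matches the typical four-year requirement mentioned above; the graduate records here are invented for illustration.

```python
# Rough sketch of excess-credit analysis for the credits-to-degree metric.
# MIN_CREDITS reflects a typical four-year program; graduate data is invented.

MIN_CREDITS = 120

graduates = [135, 120, 128, 142, 121]  # credits earned at degree completion

excess = [max(0, c - MIN_CREDITS) for c in graduates]
avg_credits = sum(graduates) / len(graduates)
avg_excess = sum(excess) / len(excess)
print(f"average credits to degree: {avg_credits:.1f}")      # 129.2
print(f"average excess credits paid for: {avg_excess:.1f}")  # 9.2
```

Multiplying average excess credits by the per-credit tuition rate turns this directly into a dollar figure for the affordability conversations the metric supports.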
Degree conferrals is another graduation-related metric. It is often used as an inclusive graduation metric that captures all populations of students. Degree conferrals can also be used to estimate the impact that an institution is having on the workforce by way of producing newly minted graduates.
Metrics that show postgraduate outcomes
You can measure how many graduates you produce, but those numbers won’t tell you how good your school really is at preparing your graduates for their careers. Postgraduate outcomes are an increasingly important part of the student success conversation.
Unfortunately, there is little consensus on the best way to measure this. Analyses of graduates’ earnings are biased by geographies, academic fields, and the prestige of the school. Alumni satisfaction surveys or giving rates may hint at quality of outcome, but also are influenced by a whole host of other confounding variables.
Underemployment of recent grads is perhaps the best metric to assess the immediate value of a college degree. A degree should help a graduate find a better job than they could have gotten with just a high school diploma. If it doesn’t, then this is a good indication that the degree is somehow out of alignment with the needs of the job market, either because the mix of degrees doesn’t match employer demand or because students aren’t learning the skills they need to gain an advantage.
Underemployment can be measured in several ways. We are fans of the definition used by the New York Fed: If a graduate aged 27 or younger is working a job in which the majority of occupants do not have a degree, then that graduate is underemployed. This provides a nice snapshot of the value young alumni are getting from their college diploma.
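The New York Fed-style rule described above can be expressed as a simple flag. The occupation-level degree shares below are invented for illustration; in practice they come from labor market data.

```python
# Sketch of the underemployment flag described above: a graduate aged 27 or
# younger is underemployed if most workers in their occupation lack a degree.
# Occupation degree shares are hypothetical, for illustration only.

OCC_DEGREE_SHARE = {"software developer": 0.78, "retail sales": 0.22}

def is_underemployed(age, occupation):
    if age > 27:
        return False  # the metric only covers recent grads aged 27 or younger
    # underemployed if the majority of workers in the occupation lack a degree
    return OCC_DEGREE_SHARE[occupation] < 0.5

print(is_underemployed(24, "retail sales"))        # True
print(is_underemployed(24, "software developer"))  # False
```

Aggregating this flag across an alumni cohort yields the institution-level underemployment rate.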
Which of these metrics are right for your school? That depends on what you’re hoping to achieve. For more information about choosing student success goals, read our blog post on how to design measurable interventions to improve student success.