
How to measure student success

March 15, 2019

“Measuring student success” is now a top search query on EAB’s website.

It's not hard to see why: Student success leaders want to link measurable retention gains and GPA improvements to specific interventions and initiatives. They want to know what works and what doesn't so they can refine and improve over time. Positive results also help justify their departments' work and make the case for additional resources.

But in the overall picture of a university's student success activities, it can be difficult to isolate the impact of individual initiatives. If your graduation rate improves, how much of the gain came from the advising campaigns you ran, and how much from a revised admissions requirement? How do you separate your office's impact from that of the outreach sent by the first-year experience office? Amid the flurry of student success initiatives taking place on campus, each one's contribution blurs into the whole.

While there are always some external factors you can’t control for, you can design student success interventions strategically to make it possible to attribute certain outcomes to them. Last fall, I co-led a session at EAB’s student success summit, CONNECTED, to teach just that. Here are four steps to get started:

1. Identify a student population you want to focus on

While some student success initiatives reach the entire student body, identifying a specific student population should be your first step toward better measurement. Narrowing the scope of your efforts makes it more likely that your intervention will be meaningful and makes its impact easier to isolate.

Be mindful of sample size: make sure your pilot population is large enough to draw meaningful conclusions. Free online sample size calculators are available from a variety of websites, or you can run the numbers yourself, as in the sketch below.
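As a hedged illustration, here is one way to estimate the students needed per group using statsmodels' power calculations. The baseline and target persistence rates are hypothetical placeholders, not figures from this post.

```python
# A minimal sketch of a sample size check, assuming a two-proportion
# comparison (e.g., persistence rate of an intervention group vs. a
# comparison group). The rates below are hypothetical placeholders.
from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

baseline_rate = 0.70  # hypothetical: current persistence rate
target_rate = 0.78    # hypothetical: rate you hope the intervention achieves

# Cohen's h effect size for the difference between two proportions
effect = proportion_effectsize(target_rate, baseline_rate)

# Students needed per group for 80% power at a 5% significance level
n_per_group = NormalIndPower().solve_power(
    effect_size=effect, alpha=0.05, power=0.80, alternative="two-sided"
)
print(f"Roughly {n_per_group:.0f} students per group")
```

If the required sample is larger than your pilot population, that's a signal to broaden the focus population, lengthen the pilot, or temper the conclusions you draw.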

East Tennessee State University (ETSU) approaches this by asking each college to select a focus population for targeted campaigns: "Which students do you want to reach? Where do you want to see an impact?" For example, the College of Business and Technology decided to focus on sophomores in good academic standing who had not yet declared a major. The college sent an email campaign through EAB's student success management system, Navigate, asking these students to meet with their advisors to develop a graduation plan. After the campaign, the college saw a 10.5% persistence improvement for second- to third-year students, preserving $13,000 in tuition revenue.

$13,000

in tuition revenue preserved at ETSU after a campaign drove a 10.5% persistence improvement
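The dollar figure follows from simple arithmetic: additional students retained times net tuition per student. ETSU's actual cohort size and tuition aren't given in this post, so the inputs in this back-of-envelope sketch are illustrative placeholders chosen only to show how a figure like $13,000 could arise.

```python
# Hypothetical back-of-envelope for translating a persistence gain into
# preserved tuition revenue. Cohort size and tuition are placeholders,
# not ETSU data.
cohort_size = 40             # hypothetical: students in the focus population
persistence_lift = 0.105     # the 10.5% improvement reported above
net_tuition_per_term = 3100  # hypothetical: average net tuition per student

additional_students_retained = cohort_size * persistence_lift
revenue_preserved = additional_students_retained * net_tuition_per_term
print(f"{additional_students_retained:.1f} additional students retained")
print(f"~${revenue_preserved:,.0f} in tuition revenue preserved")
```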

Navigate users can create watch lists for the student population they define, and administrators and staff can analyze progress and outcomes for all watch lists within the Intervention Effectiveness tool.

2. Develop a Theory of Change

Theory of Change is a methodology that asks the designers of an intervention (such as a student success initiative) to think through why the intervention will bring about the desired results. To develop a theory of change, you map backwards from your long-term goal (for example, shorten time-to-degree), determine the necessary preconditions for that result (for example, students attempt more credits per term), and describe how your intervention will produce that change (launch a 15-to-Finish campaign to persuade students to take higher credit loads).

I would argue that this is the most important step in designing your intervention. Without a theory of change, positive (or negative) outcomes can easily be misattributed to your intervention: there must be a plausible causal chain from the intervention to the outcome before you take credit for it. For example, just because a student receives a tutoring intervention does not mean they'll refile their FAFSA and remain able to pay for school.
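One lightweight way to keep a theory of change explicit is to write it down in a structured form. Here is a minimal sketch; the TheoryOfChange class and its field values are illustrative, drawn from the 15-to-Finish example above, not an EAB artifact.

```python
# A minimal sketch of a theory of change as a data structure, mapping
# backwards from goal to intervention. All values are illustrative.
from dataclasses import dataclass

@dataclass
class TheoryOfChange:
    long_term_goal: str
    preconditions: list[str]  # what must be true for the goal to follow
    intervention: str
    rationale: str            # why the intervention produces the preconditions

fifteen_to_finish = TheoryOfChange(
    long_term_goal="Shorten time-to-degree",
    preconditions=["Students attempt more credits per term"],
    intervention="Launch a 15-to-Finish messaging campaign",
    rationale="Students who understand the cost of light course loads "
              "will register for 15+ credits per term",
)
```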

Download the related infographic

ETSU’s leadership ensures that each department articulates this theory of change by submitting a statement of purpose with their proposed campaign. The College of Business and Technology realized that sophomores who remained undeclared into their third year wouldn’t be able to select upper-level courses that fulfill major requirements, ultimately delaying their path to graduation. They determined that the intervention needed to “identify and meet with at-risk pre-business majors (those with a GPA equal to 2.0-2.4 and 30-75 credits) to develop an academic plan.”

When I introduced this concept during the CONNECTED workshop, the number one thing I heard from our participants was “I wish we’d thought about doing this years ago.”

3. Assign process and outcome metrics

Based on the theory of change you've articulated for your student population, you can determine which intermediate process metrics and ultimate outcome metrics you'll measure. The metrics should be SMART: Specific, Measurable, Actionable, Relevant, and Timely.

A process metric is one you can measure in real time, or at least during the term, that aligns with an intermediate outcome of your intervention strategy. Process metrics help you determine whether your intervention is contributing to your overall student success goals, which you measure with outcome metrics. Because outcome metrics tend to update only at the end of the term (e.g., registration rate, GPA) or the school year (graduation rate), interim checks on your process metrics let you course-correct.

Process Metrics

  • Number of advising appointments scheduled
  • Change in appointment no-show rates
  • Number of tutoring appointments scheduled
  • Number of appointment summary reports filed
  • Percent of faculty who submitted early alerts
  • Number of degree plans submitted
  • Changes in course attendance
  • Number of resumes submitted to the career center for review
  • Percent response to advisor emails
  • Percent of case referrals closed

Outcome Metrics

  • Percent of students who re-registered
  • Number of FAFSAs submitted
  • Changes in GPA
  • Change in credits attempted/earned
  • D grade, failure, and withdrawal (DFW) rates
  • Term-to-term persistence
  • Graduation rate
  • Time-to-degree
  • Percent of students who came off academic probation
  • Reduction in achievement gap
  • Post-graduation gainful employment

Make sure the process metrics you establish align with your theory of change. One of our member schools was using faculty participation in its early alert program as a process metric and excitedly shared with us that 75% of faculty had submitted early alerts. But we noticed that most of the alerts were submitted well after midterms, too late for the intervention (connecting students with tutoring to improve their midterm grades) to take hold. We helped the school revise the process metric to "percentage of faculty who submit early alerts before the sixth week of classes." That way, if only 10% of faculty have submitted alerts as the deadline approaches, staff can reach out and boost the response rate while there is still time to act. The sketch below shows how such a metric might be computed.
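To make the revised metric concrete, here is a minimal sketch of one way to compute it; the term dates, faculty IDs, and submission records are all hypothetical.

```python
# A minimal sketch of the revised process metric: percentage of faculty
# who submitted early alerts before the sixth week of classes. All
# records below are hypothetical.
from datetime import date, timedelta

term_start = date(2019, 1, 14)                     # hypothetical term start
week_six_cutoff = term_start + timedelta(weeks=5)  # start of week six

# (faculty_id, date the alert was submitted) -- hypothetical submissions
alerts = [
    ("fac-101", date(2019, 2, 1)),
    ("fac-102", date(2019, 3, 20)),  # after midterms: too late to act
    ("fac-103", date(2019, 2, 10)),
]
all_faculty = {"fac-101", "fac-102", "fac-103", "fac-104"}

on_time = {fid for fid, submitted in alerts if submitted < week_six_cutoff}
pct_on_time = 100 * len(on_time) / len(all_faculty)
print(f"{pct_on_time:.0f}% of faculty submitted early alerts before week six")
```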

4. Assess your results and iterate

The ultimate goal of impact assessment is to determine whether your intervention benefited your students. For a statistically sound analysis, I recommend comparing the results you've tracked for your focus population in one of the following ways:

  • Against a control group. Compare to a similar population that isn't receiving the intervention. For ETSU's pre-business campaign, we compared those students' sophomore-to-junior persistence rate to that of other rising third-year students at the university. (A sketch of this kind of comparison follows the list.)

  • Against a past group of students who met the same criteria as your intervention group. Comparing your population's outcomes with those of a similar historical population, an approach called trending analysis, shows how much your specific population improved. This works only if you have process and outcome metrics for the historical population.

  • Against itself. Without a comparison population, you can track the outcomes of your current population to set a baseline for future comparisons. For example, if you conduct a 15-to-Finish campaign with students, you can compare their attempted and earned credits in the term of the intervention to the following term, when you expect them to attempt and earn more credits.
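For the control-group option, a two-proportion z-test is one straightforward way to check whether a difference in persistence rates is statistically meaningful. Here is a minimal sketch using statsmodels; the counts are hypothetical placeholders, not ETSU data.

```python
# A minimal sketch of a control-group comparison using a two-proportion
# z-test. Counts are hypothetical, not ETSU data.
from statsmodels.stats.proportion import proportions_ztest

persisted = [34, 150]  # students who re-enrolled: [intervention, control]
totals = [40, 220]     # group sizes:              [intervention, control]

z_stat, p_value = proportions_ztest(count=persisted, nobs=totals)
print(f"intervention: {persisted[0] / totals[0]:.1%} persisted")
print(f"control:      {persisted[1] / totals[1]:.1%} persisted")
print(f"z = {z_stat:.2f}, p = {p_value:.3f}")
```

A small p-value (conventionally below 0.05) suggests the gap between the two groups is unlikely to be chance alone, though it can't rule out the external factors noted earlier.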

Although I’ve named this as part of step four in this blog, you should ideally designate your comparison group at the same time that you establish the intervention population in step one. The Intervention Effectiveness tool in Navigate allows you to select one of these three different comparison options when analyzing the outcomes of your intervention.

Once you get the results of your analysis, you can use that data to inform the next iteration of your student success campaign. ETSU prompts each unit to reflect on the efficacy of their custom campaign with the following post-campaign survey:

  • Please revise your purpose statement if inaccurate or incomplete.
  • In your opinion, how useful was this campaign in supporting student success?
  • What might you do differently in future campaigns?

The answers to these questions will shed light on what works and what doesn't, and they'll help you justify additional investment when you're ready to scale your efforts. Conversely, an intervention that fails to produce the desired results offers a valuable opportunity: you can eliminate ineffective work and refocus those resources where they do more good, or rethink the intervention and relaunch it with a new focus or technique.

More resources to measure student success

The adage “it takes a village” is often applied to challenging but worthy missions, and no initiative on your campus is more important than ensuring that students succeed. Read these representative results from EAB members to learn how diverse stakeholders are making an impact.

EAB researchers went directly to over 200 students and asked how they conceive of success. Their answers surprised, delighted, and moved us—and helped us focus our research and development agenda for the upcoming year.
