Research Report

Five Myths About Academic Program Portfolio Review

David Attis, Managing Director, Research

Declines in enrollment and revenues, in some cases exacerbated by COVID-19 and in others a perennial issue, have forced many colleges and universities to take a hard look at their academic cost structure. One increasingly common approach is to review the relative performance of academic programs with the goal of determining which programs are likely to drive future enrollment growth and which may no longer be a priority for the institution. Such program prioritization conversations inevitably create tensions across campus as faculty worry that their department, their major, or even their jobs may be impacted. While there is no single right way to make these difficult decisions, there are a few common mistakes that should be avoided if you decide to undertake such a process.

Myth #1: Cutting programs is a good way to close the current budget deficit

Because academic costs are a significant portion of most college and university budgets, many people assume that cutting academic programs is a quick way to reduce costs. The process of closing a program, however, is a long one. Multiple approvals are needed, students and faculty must be given appropriate notice, and oftentimes current majors must be “taught out” over the course of two or more years. Any savings will not be realized for several budget cycles.

Second, while the largest savings from program closures come from terminating full-time instructors in the program, this rarely happens in practice. Often faculty are moved to other programs or offered financial incentives to retire early. Some instructors may be retained to teach courses required by non-majors even after the major is eliminated. Typically, the only short-term savings are the reduction in stipend and course releases for the program head or department chair and potentially some administrative support staff. While short-term savings are limited, portfolio realignment can direct future hiring in ways that match resource allocation to institutional priorities. The benefits are likely to be experienced in the future as resources are freed up for investment in strategic priorities.

Myth #2: It’s easy to measure the “value” of an academic program

Oftentimes, program portfolio reviews focus on a handful of simple metrics like number of majors, cost per credit hour, or contribution margin (revenues minus costs). While these metrics are an important starting point, they miss many other critical factors, such as total student credit hours taught, student credit hours taught to non-majors, graduation rates, enrollment of under-represented students, and graduate school and career placement outcomes. Programs with many majors may be serving students poorly, while programs with few majors may be essential to the success of many students outside the major.

Perhaps most challenging to measure is alignment with mission. At all institutions, some programs are more closely aligned with the overall mission than others, but quantifying these differences can be challenging. Recognizing that there are important non-quantitative inputs to the process (which require engaging with a broader range of stakeholders) is essential. Yet while the data cannot paint the full picture, they can provide a clear sense of the consequences of any given decision. How many and which students would be affected if this program were discontinued? What resources would be freed up by discontinuing this program, and how could they be better used?


Myth #3: We can’t afford to take the time to build trust in the data

Creating a robust program prioritization rubric is relatively straightforward. There is a limited set of common metrics, and on most campuses the data are relatively accessible. Explaining those data to faculty, however, is challenging. Most faculty have not been exposed to the details of how data are tracked at universities, and many will interpret the inevitable inconsistencies in institutional data as a sign that none of it can be trusted.

While engaging faculty (a representative group, not necessarily all faculty) in understanding how the data was produced and explaining apparent inconsistencies can slow down the process, it is critical for building confidence in the method and results.

Myth #4: Quantification can eliminate the role of institutional politics

Senior administrators often hope that a data-informed decision framework can provide an objective solution to a politically charged situation. With so much at stake, however, groups across campus will rally to defend their programs. No matter how clear the data, they will argue that “you can’t be a university (or a liberal arts college) without my program” or that the process was rushed or did not include appropriate representation. In some cases, faculty have won a reprieve for their program despite clear quantitative evidence that it serves very few students.

A key lesson here is that the data are necessary but not sufficient. They must be part of an intentional process that engages faculty leaders. But without data, the loudest voices win. Instead of doling out resources based on personal relationships or seniority, data can level the playing field—for faculty with less political capital, and for departments that are improving student outcomes or advancing other aspects of the institutional mission.

Myth #5: If you make the difficult decisions now, you won’t have to revisit them for many years

Some academic leaders think of program prioritization like ripping off a Band-Aid—a painful but quick process that avoids lingering on issues for a long period of time. Moreover, the prioritization process can be so traumatic that institutions that have gone through it often feel they cannot revisit it for at least another 7 to 10 years.

However, the key to long-term success is a simpler and less contentious annual program health check focused on regular improvement. Intentional metrics and frequent touchpoints help departmental and faculty leaders understand how their regular responsibilities and decisions drive institutional change.
