Assessing academic programs regularly and rigorously is a good way for institutions to ensure they are supporting programs effectively, investing in ones with growth potential, and identifying problems early so leadership can make appropriate decisions about the program portfolio. Despite its importance, academic program review is something many institutions do not do regularly, in part because the process is often fraught with difficulties.
These include difficulty accessing needed data, lack of data standardization, lack of trust in existing data, difficulty determining what aspects of program performance to measure, on-campus tensions about how the results of a program assessment would be used, and concerns about how the decision-making process would fit into shared governance arrangements.
Over the last year, I have been part of the team running the Financial Sustainability Collaborative (FSC), which allows institutions to assess multiple drivers of academic costs and revenue at their institutions. We have worked with over 100 schools to help structure their academic program assessment process, and that experience has provided us with some observations about the common challenges confronting schools undertaking academic program assessment.
Josh Moody's January 5, 2022, Inside Higher Ed article, "Pulling the Plug on Philosophy," covers the University of Nebraska at Kearney's proposal to cut its philosophy major and highlights many of the common pitfalls of program assessment. Deciding to cut a program is always a difficult, but sometimes necessary, decision. Whatever the University of Nebraska at Kearney ultimately does with its philosophy program, diving deeper into some of the points raised in this article can provide insight into the kinds of pushback institutions are likely to encounter when they assess their academic programs.
A program is struggling because of factors outside the program's control
A program’s faculty members may contend that their program is struggling due to factors outside their control such as changes in general education requirements (as in this case) or things like insufficient program-specific marketing, advising that steers students away from a particular program, national trends in student interest, or lack of resources dedicated to the program’s success.
Since many factors outside the control of program directors and faculty influence every program’s success, assessments that are perceived as attempts to hold programs accountable for those factors will likely encounter stiff opposition. Faculty who believe they or their program are being penalized for factors outside their control may doubt the fairness and validity of a program assessment, and may feel they are not being set up for success.
The difficulty in identifying how factors outside a program’s control influence its performance is one reason it is so hard to make decisions about a program’s future, and why the outcomes of a data-driven program assessment cannot, in isolation, dictate a specific course of action. Qualitative factors (e.g., alignment with mission, community engagement) should be considered to promote a more holistic and accurate assessment of all the ways a program brings value to the institution. Additionally, those involved in a program assessment should consider how the factors outside a program’s control may influence its long-term success, and appropriately set goals that take into account the challenges faced by individual programs.
The results of a quantitative program assessment reveal some, but not all, elements of program value. The data and insights gathered by a program assessment are valuable for diagnosing the causes of underperformance, setting performance goals, and developing plans for long-term program viability, all of which should happen in the context of shared governance arrangements with input from those who know the program best.
The data used to assess the program is inaccurate or improperly defined
The Inside Higher Ed article noted that there was disagreement about the number of majors in the philosophy program (three according to the administration, eight according to the program director who counted double majors).
This kind of definitional question is easy to overlook during the program assessment process, but can turn into a major problem if different groups have different definitions for the same data point. Defining the data to be used as precisely as possible early on can minimize these kinds of disagreements later and promote a sense that the program evaluation process has been transparent and fair.
A program may be underperforming, but it's vital to a school's mission
Higher education institutions must balance the need to be financially sustainable with their commitment to serving their mission. Often, these priorities are in tension, as we frequently see when institutions undertake an assessment of their programs and realize some of the programs integral to their identity and mission are not performing well when assessed purely on raw numbers.
In these cases, institutions may find more success in meeting both their financial and mission-based goals by supporting underperforming programs so they perform as well as possible, while accepting that eliminating those programs is not an option and that they may never be financially self-sustaining. Alternatively, an institution may ultimately decide that a program is not as central to its mission as previously believed, and make the difficult decision to cut the program.
Making it clear from the beginning of the program assessment process that alignment with mission will be considered and that one possible outcome for underperforming programs is increased support to promote better performance can help smooth the program assessment process.
Decisions about individual programs should also not be made in a vacuum. Programs are part of a larger portfolio, which will always include some programs with above-average performance and some that perform below average on any given metric.
Expecting every program to succeed against a single set of metrics regardless of conditions specific to that field is not realistic, and ultimately will not help institutions achieve the ideal outcome of a program assessment process: a balanced portfolio. A balanced portfolio needs high-performing programs to help the institution meet its financial and enrollment goals, but it also has room for lower-performing programs that support less quantifiable, but not necessarily less important, institutional goals.