The interest in measuring research productivity has grown as colleges and universities seek to expand their research awards. Many administrators want to understand their faculty’s successes and how they can support researchers who need a boost.
However, many faculty are concerned about the impact productivity measurements could have on their academic freedom. Here are three of their common concerns.
1. My department is unique
One common concern is the variability across departments, even within a single college. A typical college of arts and sciences houses disciplines ranging from physics to English literature and everything in between. Faculty worry that metrics cannot be made consistent across such distinct departments. For example, how many articles and books in English literature are equivalent to those in physics? How do you compare a multimillion-dollar grant in chemistry to a Fulbright fellowship in history?
The comparison is easier between closely related fields, such as physics and astronomy. However, if departments set their own criteria rather than having them set at the college level, is it fair for astronomy faculty who publish in the same journals as their physics colleagues to be required to produce more publications and grant funding to meet departmental standards?
Proponents of metrics would say there is currently no consistency and these variabilities across departments already exist, so some standard would be better than none. But in either case, it is easy to understand the faculty argument that being a professor in one field is different from being one in another.
2. Quality over quantity
Another common concern is that faculty may begin to focus on gaming the system rather than producing meaningful, high-quality work. In a system where faculty are measured for productivity, numbers could surpass meaning. Although citation records are one way to measure impact, they may not fully capture quality. Faculty worry that research activity standards could reward those who take the contents of one publication and spread them across five articles. This can be seen as a waste of time not only for faculty, but also for the journal editors, reviewers, and readers who now have to find the same information across multiple publications.
Unless productivity measurements have a robust way to account for quality, the designation of "research active" could devolve into a game of maximizing output to satisfy the assessment system, which does not always correlate with scholarly gain.
3. What about the toll on students?
A third common concern is the impact that measuring research productivity may have on teaching. Many faculty enter academia not only to conduct research but also to mentor students. Some faculty view a merit system focused on research as a deterrent to dedicating time and effort to their students. If productivity and successes in the lab are measured, why not those in the classroom?
Additional pressure to publish and secure grants may incentivize faculty to "buy out" of more of their teaching, leaving the experts in the field far from the students who want to learn from them. Some of these students may even have joined the department assuming the greatest scholars would be at the helm of their classrooms.
The debate around measuring research productivity will undoubtedly continue. While a few disparate departments have found success implementing measurement systems, faculty clearly remain concerned about this approach and its impact on their scholarship. As institutions weigh whether and how to measure research activity, they need to take faculty perspectives and concerns into account.