This study outlines top-of-mind considerations and strategies for identifying core performance metrics for each administrative unit, as well as methods to set principled action triggers.
Executive summary
Many chief business officers cite a lack of credible data as an impediment to their top priorities, including cost-savings initiatives, process improvement efforts, and resource allocation. Yet surveys on institutional data management in higher education uniformly show that colleges and universities are tracking more data than ever before, largely due to increased regulatory requirements.
An overwhelming array of metric options
The first challenge in leveraging data to drive change is picking the handful of core metrics that best measure unit performance.
Given the countless ways to measure performance, unit leaders often struggle to choose the metrics that truly evaluate operational effectiveness.
Units that track the wrong metrics may waste time on insignificant issues or miss an emerging problem. Worse yet, units that track all possible metrics rather than a manageable set of core measures often fail to extract actionable information, leading to diluted improvement efforts.
Analysis paralysis
While choosing core metrics is the first step in leveraging data to enhance unit performance, data alone does not compel corrective action from unit leaders or senior executives. Without a formal system of red flags, unit leaders often fail to act on negative trends.
Many leaders explain away performance gaps and assume better days are ahead, while others succumb to analysis paralysis—continuing to analyze and re-analyze data while the situation deteriorates.
Consideration 1: Applying a Reality Check
The first step in identifying unit core performance metrics is to set aside any measures that are only infrequently updated, based on untrustworthy data sources, or potentially confusing to unit leaders and staff. Four pragmatic screens to quickly eliminate such metrics are provided below. The first two screens—accessibility of data and frequency of tracking—serve as a litmus test to confirm the availability of data at regular intervals. The remaining two screens—reliability of data and communicability of concept—test data quality and metric relevance.
Suggested Pragmatic Screens
- Accessibility of Data: The information system must be able to generate data on the metric; it is unrealistic to expect manual data collection and analysis in a timely manner for each metric.
- Frequency of Tracking: Metrics elevated to the unit dashboard should be monitored at regular intervals (e.g., monthly or quarterly); infrequent (e.g., annual) data updates hamper the ability to impact performance in real time.
- Reliability of Data: Underlying data sources must be trustworthy; metrics based on questionable data invite debate over accuracy rather than corrective action.
- Communicability of Concept: The definition and rationale for each metric should be easy to understand and replicate; a lack of understanding about a metric's drivers and relevance hinders a manager's ability to inflect performance.
Consideration 2: Mapping to Strategic Objectives
The second filtering step is to ensure that chosen measures directly link to unit strategic objectives. Without this strategic filter, chosen metrics may not reflect unit priorities and could even promote counterproductive initiatives. While seemingly straightforward, many institutions mistakenly focus on metrics that track progress on specific initiatives related to strategic objectives, rather than progress on the objectives themselves.
For example, metrics that measure compliance with a new procurement policy are not as valuable as metrics that track on-contract spend. To determine the subset of metrics best linked to larger strategic objectives, unit leaders should map each candidate metric to the strategic objective it supports using a metric strategy map.
Consideration 3: Confirming Metrics Benchmarks
The goal of the third filtering step is to identify metrics with credible, objective benchmarks. While past performance is always a reasonable basis for comparison, this insular approach can make performance appear better or worse than it really is. Metrics that allow for comparison to top performers merit special consideration for selection as core metrics.
Unfortunately, credible external benchmarks are hard to come by: definitional discrepancies, differences in accounting practices, and demographic factors often invalidate potential comparisons. As an alternative, internal benchmarks are often more credible and readily available.
Consideration 4: Swapping Lagging for Leading Indicators
The fourth filtering step is to assess the remaining metrics on their ability to predict emerging challenges or opportunities and stimulate proactive rather than reactive action. Namely, where feasible, leaders should push lagging metrics “upstream” to identify leading indicators. Unfortunately, it is impossible to sort indicators into separate leading and lagging pick-lists, as categorization is largely dependent upon the rationale for tracking metrics.
For example, HR leaders typically consider vacancy rate a lagging indicator of insufficient recruitment efforts. However, vacancy rate is also a leading indicator of a possible spike in payroll expenses due to future increased reliance on overtime or temporary labor.
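The vacancy-rate example above can be made concrete with a toy calculation. All figures below (headcount, salary, premium multiplier) are illustrative assumptions, not data from the study; the point is only to show how a nominally lagging indicator translates into a forward-looking payroll estimate.

```python
# Toy illustration (all figures hypothetical): reading vacancy rate as a
# leading indicator of payroll expense rather than a lagging one.
positions = 200                      # total budgeted positions
vacancy_rate = 0.08                  # 8% of positions currently unfilled
avg_monthly_salary = 5_000           # cost of a filled position per month
coverage_premium = 1.5               # overtime/temp labor costs ~1.5x salary

vacant = positions * vacancy_rate    # 16 vacant positions

# The work still gets done, but via overtime or temporary labor at a premium,
# so each vacant seat carries an incremental cost above normal salary:
extra_monthly_cost = vacant * avg_monthly_salary * (coverage_premium - 1.0)
# Projects 40,000 in extra payroll expense per month if vacancies persist.
```

Read this way, a rising vacancy rate signals a future payroll spike months before the overtime charges appear in financial reports.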
Questions for Identifying Leading Metrics
For each core metric, brainstorm potential leading metrics, considering the questions below.
- What are the key drivers of the core metric?
- Which metrics make up the formula for the core metric?
- Which metrics have a defensible link to the challenge the original metric was intended to monitor?
- What processes drive success or failure in the core metric?
- Is there a leading metric for the leading metric—a metric even further upstream?
Consideration 5: Accounting for Unit-Specific Imperatives
The fifth consideration encourages leaders to place a heightened focus on short-term, acute challenges not captured by other selected metrics. Units should reserve one to three core metric slots for time-bound, “hot-seat” metrics—indicators representing acute challenges that managers can meaningfully impact in a fixed time period, ideally less than 12 months.
Dedicated slots for such measures not only guarantee a focus on critical priorities, but also make unit dashboards dynamic documents that evolve and keep staff attention.
Consideration 6: Ensuring Balance of Metric Categories
The final step in the process of identifying core unit metrics is to ensure an equitable distribution of metrics across all unit capabilities or strategic objectives. Without such a distribution, units run the risk of overlooking emerging problems within underrepresented unit areas.
To ensure a proper metric balance, leaders must first sort the tentative list of 8-12 core metrics identified through the selection process into a comprehensive set of categories. Then, units should analyze the distribution of metrics across categories to identify over- and under-represented groups, and make deliberate trade-offs between metrics to achieve balance.
The complete publication details three metric categorization schemes, based on function, perspective, and principles of operation.
Setting principled action triggers to compel action
Rigorous metric selection alone does not ensure that dashboards and performance reports compel corrective action when performance lags. In fact, the impact of well-selected core metrics is often dramatically undermined by the failure to stipulate associated “action triggers”—thresholds that signal underperformance on core metrics and mandate a response or action.
The first step in establishing principled action triggers is to match each core measure to the most appropriate trigger type—fixed or relative. As their names suggest, fixed triggers maintain constant threshold levels, while relative triggers self-adjust based on targets, performance trends, and related metrics.
In general, fixed triggers are easier to communicate and therefore manage against, but they are not always applicable for administrative unit metrics. The remainder of this section details how to choose and apply fixed and relative triggers.
Units that monitor data without establishing thresholds that signal the need for corrective action often overanalyze or explain away negative trends while the situation worsens.
Fixed triggers
Fixed triggers are most applicable for core metrics with truly non-negotiable targets, such as compliance with regulatory requirements. Where current performance on a core metric is satisfactory, a fixed trigger can guard against significant performance declines that would likely cause units to miss non-negotiable targets without corrective action.
Relative action triggers
Rather than fixed targets, relative action triggers are based on meaningful performance declines on core metrics. More specifically, relative triggers consider current performance relative to the target, past performance, and/or related metrics to differentiate normal performance fluctuations from concerning trends that warrant action.
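A minimal sketch of the two trigger types follows. The function names, the 5% decline margin, and the three-period window are illustrative assumptions, not the study's specification; the point is the structural difference between a constant threshold and one evaluated against target and trend.

```python
def fixed_trigger(value, threshold, higher_is_better=True):
    """Fixed trigger: flag action when a metric crosses a constant,
    non-negotiable threshold (e.g., a regulatory compliance rate)."""
    return value < threshold if higher_is_better else value > threshold

def relative_trigger(values, target, max_decline=0.05, window=3):
    """Relative trigger: flag action only when current performance trails
    both the target and the recent trend by a meaningful margin (5% here),
    separating normal fluctuation from a concerning decline."""
    current = values[-1]
    recent = values[-(window + 1):-1]        # readings preceding the current one
    trailing_avg = sum(recent) / len(recent)
    below_target = current < target * (1 - max_decline)
    below_trend = current < trailing_avg * (1 - max_decline)
    return below_target and below_trend

# A compliance rate below a non-negotiable 100% target always demands action:
fixed_trigger(0.98, 1.0)                                   # True

# A drop to 0.80 after a stable ~0.91 trend against a 0.90 target fires;
# a dip to 0.91 in the same series does not:
relative_trigger([0.90, 0.91, 0.92, 0.80], target=0.90)    # True
relative_trigger([0.90, 0.91, 0.92, 0.91], target=0.90)    # False
```

Requiring the decline to breach both target and trend is one way to keep a relative trigger from firing on ordinary noise; the exact margin should be tuned to each metric's volatility.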
Specialty action triggers
Though used less often, specialty action triggers serve specific purposes. Static action triggers, for example, guard against performance plateaus: units committed to continuous improvement in specific areas can effectively use "lack of improvement" on key metrics as a trigger for action. However, it is vital to clearly communicate the rationale underlying static action triggers, as well as the executive commitment to enforce this type of trigger. Absent this transparency, static action triggers risk being perceived as unprincipled and subsequently ignored.
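A static trigger can be sketched the same way as the fixed and relative triggers. The four-period window and zero-improvement floor below are assumed values for illustration.

```python
def static_trigger(values, min_improvement=0.0, window=4):
    """Static trigger: flag a plateau when a metric committed to continuous
    improvement has not improved over the last `window` readings."""
    if len(values) < window:
        return False                 # not enough history to judge a plateau
    return values[-1] - values[-window] <= min_improvement

# Four flat quarters on a metric targeted for continuous improvement fires;
# a steadily improving series does not:
static_trigger([0.85, 0.85, 0.84, 0.85])   # True
static_trigger([0.80, 0.85, 0.90, 0.95])   # False
```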
Compendium of Unit Performance Metrics
Depending on the administrative unit in question, members can apply the six considerations in two ways. Units that begin with a shorter list of potential metrics (e.g., 30-50 metrics) may be able to take a less rigorous, more flexible approach to narrow down to 8-12 core measures; leaders of these units can skip steps as they see fit and think through the considerations independently. Units facing a longer list of candidate metrics should work through all six considerations in sequence.
Explore the compendium for metric categories and metrics across a variety of topics, such as accounts payable, career services, finance, and human resources.