More and more facilities departments are turning to annual customer satisfaction surveys to measure how their institutions view facilities performance. However, even campuses with well-designed surveys and high response rates struggle to translate this data into operational improvements. By benchmarking internally and analyzing customer satisfaction results alongside other sources of data, institutions can use their surveys to inform better facilities decision making.
Use internal benchmarks to measure facilities performance
Many institutions have attempted to use external benchmarks to judge performance, inform reorganization decisions, or advocate for additional resources. Even when institutions have access to customer service data from other campuses, these comparisons are often unhelpful, as most institutions use unique surveys with incomparable questions. Furthermore, customers across various institutions will have different preferences or standards for facilities performance.
Internal benchmarking of customer satisfaction data is far more useful. Annual analysis of both aggregate and unit-level scores clearly demonstrates whether customer service improves or falters. It also allows facilities leaders to focus improvement efforts on the lowest-performing units.
Rigorous statistical analysis can help facilities draw meaningful conclusions about performance trends. For example, the University of Alaska Fairbanks (UAF) not only compares the performance of different facilities units over time, but also uses statistical analysis to determine whether year-to-year changes in satisfaction are statistically significant at the 95% confidence level. Especially when response rates are low, an apparent change in performance from one year to the next might not reflect an actual change in how customers feel about facilities services.
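The source does not say which test UAF applies; one common choice for comparing two years of survey ratings is a Welch-style two-sample test. The sketch below (pure Python standard library, with hypothetical ratings) approximates the 95% critical value with z = 1.96, which is reasonable for samples of roughly 30 or more responses:

```python
import math
from statistics import mean, variance

def significant_change(prev_scores, curr_scores, z_crit=1.96):
    """Return (is_significant, t) for the year-over-year change in mean
    satisfaction, using a Welch-style two-sample test. The 95% critical
    value is approximated with z = 1.96, which is reasonable for survey
    samples of roughly 30+ responses."""
    se = math.sqrt(variance(prev_scores) / len(prev_scores)
                   + variance(curr_scores) / len(curr_scores))
    t = (mean(curr_scores) - mean(prev_scores)) / se
    return abs(t) > z_crit, t

# Hypothetical 1-5 ratings for one facilities unit in two survey years
last_year = [3, 4, 3, 4, 3, 4, 3, 4, 3, 4]
this_year = [4, 5, 4, 5, 4, 5, 4, 5, 4, 5]
is_sig, t_stat = significant_change(last_year, this_year)
```

If the test does not reach significance, the safer interpretation is that the unit's performance is unchanged, not that it improved or declined.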
A step beyond benchmarking
By comparing internal benchmarks across functions or sub-functions, institutions can identify both underperforming and excelling departments and build appropriate improvement plans. UAF uses regression analysis to determine which factors have the biggest influence on customer satisfaction. This analysis has helped the university isolate specific causes of lower-than-expected customer satisfaction scores.
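The source does not specify UAF's model, but an ordinary least-squares fit is the typical way to weigh several candidate drivers against an overall satisfaction score. A minimal pure-Python sketch (factor names and data are hypothetical) solves the normal equations directly:

```python
def ols_coefficients(X, y):
    """Fit y = b0 + b1*x1 + b2*x2 + ... by ordinary least squares,
    solving the normal equations with Gaussian elimination."""
    rows = [[1.0] + list(x) for x in X]          # prepend intercept column
    k = len(rows[0])
    xtx = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Gaussian elimination with partial pivoting
    for col in range(k):
        pivot = max(range(col, k), key=lambda r: abs(xtx[r][col]))
        xtx[col], xtx[pivot] = xtx[pivot], xtx[col]
        xty[col], xty[pivot] = xty[pivot], xty[col]
        for r in range(col + 1, k):
            f = xtx[r][col] / xtx[col][col]
            for c in range(col, k):
                xtx[r][c] -= f * xtx[col][c]
            xty[r] -= f * xty[col]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):
        beta[r] = (xty[r] - sum(xtx[r][c] * beta[c]
                                for c in range(r + 1, k))) / xtx[r][r]
    return beta

# Hypothetical inputs per respondent: (communication rating, days to
# close their last work order), against an overall satisfaction score.
factors = [(4, 2), (3, 5), (5, 1), (2, 7), (4, 3), (3, 4)]
satisfaction = [4.3, 3.1, 4.8, 2.2, 4.0, 3.4]
coeffs = ols_coefficients(factors, satisfaction)
```

Comparing the fitted coefficients (on standardized inputs, in practice) shows which factor moves satisfaction the most and where improvement effort is best spent.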
For example, UAF had received consistently low custodial scores. Custodians were cleaning offices and common spaces at night to minimize disruption to faculty, staff, and students. Based on the data, UAF decided to flip custodians’ schedules so they cleaned during the day to increase their visibility and create relationships with customers. The facilities leader points to this increased interaction between custodians and customers as one of the reasons for improved custodial scores. In addition, customers can now raise any concerns with the custodians directly rather than by phone or computer.
Meanwhile, another institution sorts customer satisfaction survey responses by building, and then analyzes results alongside work order and building condition information. By identifying correlations among customer satisfaction, work orders, and building condition, facilities can see where deferred maintenance is affecting the customer experience and take this information into account when making renewal prioritization decisions. Other institutions have suggested including facility condition index data in this analysis to identify the most critical buildings from a customer perspective.
Ready to build your survey?
The Facilities Forum’s brief, Getting the Most Out of Facilities Customer Satisfaction Surveys, outlines ten lessons to improve survey design, deployment, and analysis. It offers a complete guide to build and launch a survey and then process and share survey results. Download the brief here.