
Do you have the right kind of small classes?

Three steps to set appropriate max caps and avoid losing instructional capacity to unintentionally small classes
April 3, 2026, By Catherine Flowers, Partner Experience Senior Associate

Many institutions pride themselves on their small class sizes. But there’s a difference between an intentionally small class and an unintentionally small class. When maximum capacities (“max caps”) are set inconsistently or used as a control knob during registration, the analytic signals you rely on to determine course scheduling—like fill rate, seat utilization, and unmet demand—become less reliable. As a result, your course schedule fills up with unintentionally small classes.

At a time when budget pressures mean resource allocation is being carefully monitored, unintentionally small classes are a great place to find opportunities for greater efficiency. Here are three steps to right-size classes so that valuable instructional time is allocated where students need it most.

Step 1: Fix the foundation: course caps

A credible cap should not be a preference; it needs to be a policy. Departments that treat caps as a convenience variable—nudged up or down to secure a favorite room, equalize enrollments, or quietly close a section—end up distorting the very metrics they use to plan. Once caps stabilize, fill rate becomes a trustworthy indicator again and waitlists reflect real scarcity rather than cap manipulation.

Caps should reflect pedagogy and room constraints, not enrollment management. Enrollment should be managed through the registration system (permissions, prerequisites, waitlists), not by setting a cap to zero or editing it throughout the registration period. Here are two steps to achieve correct course caps:

  • Publish brief guidance that ties caps to room characteristics and course format, with a one-line rationale for exceptions.
  • Audit once per registration period for these three signals, and follow up with the chair to resolve or document any anomalies:
    1. A section cap set to zero
    2. Cap variance between like sections
    3. Unexplained cap drift across terms
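As a rough sketch, the per-term audit above can be run against a section export from your SIS. Everything here is illustrative: the field names, the sample records, and the spread/drift thresholds are assumptions, not a standard schema.

```python
from collections import defaultdict
from statistics import median

# Hypothetical section records; field names are illustrative stand-ins
# for whatever your SIS export actually provides.
sections = [
    {"course": "ENGL 101", "section": "01", "term": "2025FA", "cap": 24},
    {"course": "ENGL 101", "section": "02", "term": "2025FA", "cap": 24},
    {"course": "ENGL 101", "section": "03", "term": "2025FA", "cap": 0},
    {"course": "HIST 210", "section": "01", "term": "2025FA", "cap": 35},
    {"course": "HIST 210", "section": "02", "term": "2025FA", "cap": 20},
    {"course": "HIST 210", "section": "01", "term": "2024FA", "cap": 15},
]

def audit_caps(sections, spread_limit=5, drift_limit=5):
    """Return the three audit signals: zero caps, like-section cap spread
    within a term, and median-cap drift for the same course across terms."""
    # Signal 1: a section cap set to zero.
    zero_caps = [(s["course"], s["section"], s["term"])
                 for s in sections if s["cap"] == 0]

    # Group nonzero caps by (course, term); zero caps are flagged above.
    by_course_term = defaultdict(list)
    for s in sections:
        if s["cap"] > 0:
            by_course_term[(s["course"], s["term"])].append(s["cap"])

    # Signal 2: cap variance between like sections in the same term.
    spread_flags = {k: max(v) - min(v) for k, v in by_course_term.items()
                    if max(v) - min(v) > spread_limit}

    # Signal 3: drift in the median like-section cap across terms.
    medians = defaultdict(dict)
    for (course, term), caps in by_course_term.items():
        medians[course][term] = median(caps)
    drift_flags = {c: t for c, t in medians.items()
                   if len(t) > 1 and max(t.values()) - min(t.values()) > drift_limit}

    return zero_caps, spread_flags, drift_flags
```

The output is just three small lists/dicts, which is enough to drive the follow-up conversation with the chair; the thresholds are policy choices, not analytics.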

What to measure (and how to describe it)

We all know the pain of trying to operate without adequate data. But too much data can also cause confusion. To maintain accurate course caps, keep the list of metrics short and consistent. And to cut down on debates, attach a brief glossary when sharing this data so the definitions travel with the data. 

We recommend these metrics:

  • Fill rate / seat utilization: number of students enrolled ÷ max cap at census, shown by course level.
  • Percent of sections with fewer than 10 students enrolled: the share of sections enrolling fewer than 10 students, alongside their share of student credit hours (SCH).
  • Cap variance (like‑section spread): highest cap minus lowest cap among identical course numbers/modalities.
  • Repeat small flag: the section appears under the enrollment threshold in two consecutive like terms.
  • Unmet demand: waitlisted seats or similar turn‑away indicators during registration.
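To make the definitions concrete, here is a minimal sketch of the first four metrics computed from census snapshots. The records, field names, and 10-student threshold are hypothetical and mirror the definitions above, not any particular system.

```python
# Hypothetical census snapshots; "credits" is used to compute SCH
# (student credit hours = enrolled × credits).
census = [
    {"course": "BIO 110",  "term": "2025FA", "enrolled": 28, "cap": 30, "credits": 4},
    {"course": "BIO 450",  "term": "2025FA", "enrolled": 6,  "cap": 20, "credits": 3},
    {"course": "BIO 450",  "term": "2024FA", "enrolled": 7,  "cap": 20, "credits": 3},
    {"course": "CHEM 201", "term": "2025FA", "enrolled": 22, "cap": 24, "credits": 3},
]

def fill_rate(section):
    """Enrolled ÷ max cap at census."""
    return section["enrolled"] / section["cap"]

def small_section_share(sections, threshold=10):
    """Share of sections under the threshold, and their share of SCH."""
    small = [s for s in sections if s["enrolled"] < threshold]
    total_sch = sum(s["enrolled"] * s["credits"] for s in sections)
    small_sch = sum(s["enrolled"] * s["credits"] for s in small)
    return len(small) / len(sections), small_sch / total_sch

def repeat_small(sections, course, terms, threshold=10):
    """True if the course enrolls under the threshold in both like terms."""
    by_term = {s["term"]: s["enrolled"] for s in sections if s["course"] == course}
    return all(by_term.get(t, threshold) < threshold for t in terms)
```

Reporting the small-section share alongside its SCH share matters: a department where 30% of sections carry only 5% of SCH has a very different story than one where those shares are equal.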

Step 2: Find and evaluate unintentionally small classes yearly

With reliable caps, you can separate the purposefully small from the accidentally small. The former—seminars, capstones, studios, clinical courses—deserve protection. The latter need attention. Look first for patterns rather than one-off anomalies. These two patterns show up frequently among our partners:

  • Repeat sections with fewer than 10 students enrolled: If a course enrolls under 10 students in consecutive like terms, that’s a strong signal to consolidate or defer unless it sits on the exception list of intentionally small classes.
  • Utilization outliers: Courses with very low seat utilization relative to others at the same level and modality often point to mistimed offerings, too many parallel sections, or a modality or time‑block mismatch with student schedules.
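The utilization-outlier screen can be as simple as comparing each section to its peers at the same level and modality. The sketch below uses a z-score cutoff; the group structure, section names, and cutoff are illustrative assumptions, and any robust peer comparison would do.

```python
from statistics import mean, stdev

# Hypothetical utilization figures (enrolled ÷ cap) grouped by course
# level and modality; names and numbers are made up for illustration.
utilization = {
    ("200-level", "in-person"): {
        "HIST 210-01": 0.95, "HIST 210-02": 0.88, "ENGL 205-01": 0.91,
        "PHIL 220-01": 0.35,  # candidate outlier
    },
}

def utilization_outliers(groups, z_cutoff=-1.0):
    """Flag sections whose utilization sits well below peers in the
    same level/modality group (a simple z-score screen)."""
    flags = []
    for group, secs in groups.items():
        if len(secs) < 3:
            continue  # too few peers for a meaningful comparison
        mu, sd = mean(secs.values()), stdev(secs.values())
        if sd == 0:
            continue  # all peers identical; nothing to flag
        for name, u in secs.items():
            if (u - mu) / sd < z_cutoff:
                flags.append((group, name, round(u, 2)))
    return flags
```

A flag here is a prompt for a conversation about timing, modality, or parallel sections, not an automatic consolidation decision.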

Think of this as a regular operation rather than a once‑a‑decade reset. Here’s a schedule our Edify partners have followed successfully:

  • Quarter 1 – Set the rules. Publish your cap guidance and the short exception template. Name the process owner. Share the three audit checks (listed in the previous section) you’ll run each term.
  • Quarter 2 – Diagnose. Produce a one‑page “watchlist” of repeat small sections (fewer than 10 students in consecutive like terms), low‑utilization outliers, and any cap anomalies. Chairs can add 150–200 words of context for courses on this list.
  • Quarter 3 – Decide. In a meeting, agree on a manageable set of actions: which courses to consolidate, retain, or defer; where to adjust timing or modality; and where to move capacity to high‑demand courses.
  • Quarter 4 – Verify. After the census, compare your outcomes to your plan. Did consolidations create the seats you expected? Did waitlists shrink in targeted areas? Record any lessons learned and carry only the effective practices forward.

Step 3: Communicate the value of standardizing course caps

When you find courses or sections that should be eliminated, be sure to communicate the value created in the process. In eliminating these courses, you are freeing up instructional resources where they can make the most difference in student progress. Pair every consolidation with a visible reinvestment in a high‑demand area: bottleneck courses, waitlisted pathways, or sections that consistently run above target enrollment.

Conversations about assessing program health and eliminating courses are often sensitive and should be handled with care. Our partners report these common objections to changing max caps, and how they’ve responded to keep the conversation productive.

  • “Rooms are our constraint.” Agreed—so let’s encode room constraints into the cap policy. Most fixes start with like‑section consistency and avoiding cap manipulation aimed at controlling registration, not with new space.
  • “Small classes are part of our identity.” Keep the purposefully small courses on a published exception list and reconfirm them each term. This demonstrates your commitment to keeping intentionally small courses in place and reiterates that the point of reviewing max caps is to eliminate the accidental small courses that siphon capacity and contribute to bottlenecks.
  • “This is just about cutting.” It shouldn’t be. The aim is reallocation: to reduce low‑yield seats and invest in seats that unlock student progress and meet student demand.

Trusted data to understand program health

Consistent data, including max caps, is a critical component of successful program health checkups. EAB’s Edify team has built a set of templatized dashboards that provide Deans, Department Chairs, Provosts, and Chief Business Officers with integrated data from the SIS, HR, finance, and other data systems. Fill out the form below to learn more.

In conclusion

Getting max caps right isn’t about shrinking the schedule. It’s about directing precious instructional time to move the most students forward. When caps are governed, small-section outliers are intentional and protected, not accidental. When max caps are assigned appropriately, demand signals become clearer and reinvestment decisions become transparent and repeatable. By running a light audit every term, acting on repeat low-utilization sections, and pairing consolidations with visible seat additions in bottleneck courses, you’ll move from sensitive debates to measurable gains in course access and student progress.

Ready to learn more?

To ensure consistent metrics for productive discussions of instructional resource allocation, Edify—EAB’s higher education data management platform—can help. Fill out the form to the right to speak to an expert and see a customized demo of Edify’s program health analytics.


