
What we’re learning about AI governance in higher education

May 14, 2026, by Brett Reinert, Director, Research Advisory Services

By now, most universities have had a chance to reflect on what generative AI means for higher education. And how institutions have responded likely looks a little different in each case. While leaders continue to make decisions and investments, educate users across the institution on risks and safety, and build proof-of-concept tools, many are still grappling with a central question: how should they approach AI adoption at an enterprise level?

A few months ago, we had the pleasure of hosting senior operational and IT leaders at Trinity College for executive-level conversations on AI governance. Both groups identified a common tension between rapid experimentation and the slower development of coordinated approaches. Those discussions highlighted a set of shared challenges, as well as the early approaches universities are using to move towards more structured, institution-wide adoption.

Widespread adoption, but limited coordination

AI is already embedded across higher education: 86% of students and 69% of academic and professional staff are using GenAI tools in some capacity, while just over half of universities report having an AI taskforce or strategy in place. But most universities are still in the earliest phase of adoption, experimenting with AI rather than embedding it into core operations.

That gap matters because the pace of change is accelerating. Demand for GenAI skills is rising quickly, yet students remain unsure how to use these tools effectively and are looking to universities for guidance. In the absence of clear signals from employers or students, universities are being asked to define what responsible and effective AI use looks like for their institutions and their graduates. 

A layered approach to AI governance

So we know we need policy. But ‘policy’ can mean many things, from university statements and guiding principles to operational rules and individual practice.

At Trinity, we focused on three levels of governance that, together, create clarity without constraining innovation:

1. University level

At the highest level, a statement signals a commitment to AI literacy and integration. This is less about rules and more about setting expectations for how the institution intends to engage with AI.

2. Academic and operational units

At this level, university intent is translated into practice. Acceptable use policies define how AI can be used in ways that protect data privacy, security, and academic integrity. This is where governance becomes actionable.

3. Individual academics

At the course level, academics play a critical role in shaping how AI is used in teaching and assessment. Rather than prescribing a single approach, institutions are increasingly asking academics to define their own expectations while ensuring that those expectations are clearly articulated.

What we see most often, however, is a much narrower approach. Many universities begin and end with updates to academic integrity policies, typically adding a line that defines unauthorised AI use as misconduct. What this approach misses is helping academics understand how AI should be integrated into teaching, and what their role is in guiding students.

What effective AI governance looks like

Universities are beginning to move beyond high-level policies and towards more coordinated approaches to AI governance. Common approaches include cross-university workgroups that bring together expertise across compliance, teaching, and operations, as well as clearer separation between strategic oversight and daily implementation. Some are also aligning AI governance with existing IT and data structures, recognising that responsible AI use depends on strong data foundations.

What distinguishes the most effective approaches is not the structure itself, but the starting point. Rather than focusing first on tools or investment decisions, these institutions begin with their strategic priorities and then identify where AI can support them. This shift—from ‘AI strategy’ to ‘strategy supported by AI’—helps ensure that adoption remains coordinated, purposeful, and focused on outcomes rather than technology. 

What comes next

The conversations at Trinity reinforced a simple but important reality: AI adoption is already well underway. The challenge now is not whether institutions will engage with AI, but how intentionally they do so. Governance, in this context, is not about control, but clarity—creating the structures, expectations, and shared understanding needed to move from experimentation to meaningful, sustained impact.

Our work at EAB focuses on helping universities identify where AI can most effectively support their strategic priorities. For those continuing this work, the opportunity lies in taking a more coordinated, institution-wide approach to adoption.

