4 steps academic leaders must take to integrate AI tools into pedagogy
March 25, 2024 | By Abhilash Panthagani, Associate Director, Strategic Research, and Benjamin Zevallos, Research Analyst
Despite initial fears that ChatGPT would lead to a surge in cheating, higher education is now closing in on a full academic year with generative AI tools. Students on campus have broadly incorporated AI tools into how they approach even their most basic assignments. In fact, over 89% report using ChatGPT to help with a homework assignment, and 48% admit to using it for an at-home test or quiz.
Today, higher ed leaders want to move past AI academic integrity concerns and integrate AI tools into teaching and learning. Many are now revising or creating policies to address AI-related issues. While institutions might not need entirely new policies (e.g., existing academic integrity policies may already cover new AI threats), academic leaders must give faculty guidance on how to address AI academic integrity questions and a roadmap for integrating AI into pedagogy.
We reviewed AI academic guidelines from more than 30 higher education institutions, ranging from informal documents released by teaching and learning centers to formal directives published by provost offices, and spoke with leaders across the cabinet about how they are addressing AI in pedagogy. From that research, we identified four steps academic leaders should take to help faculty address AI questions and integrate the tools into their teaching.
1. Discourage the use of AI plagiarism detectors, since they rarely work as advertised
Already, misguided attempts to catch AI-enabled cheating have led to nightmare scenarios for students and faculty alike. AI detection software has proven unreliable, recording high rates of false positives and false negatives while exhibiting bias against non-native English speakers. OpenAI scrapped its own detection platform because of poor performance. Even if a tool with 99% accuracy existed, the remaining 1% error rate would still produce wrongful accusations at scale, a risk at odds with the spirit of education.
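To make that risk concrete, here is a minimal back-of-the-envelope sketch. The 1% false-positive rate and the submission volume below are illustrative assumptions, not figures from any vendor or institution:

```python
# Expected wrongful accusations from a hypothetical "99% accurate" AI detector.
# Both inputs are illustrative assumptions, not measured vendor figures.

false_positive_rate = 0.01   # assumed: detector flags 1% of honest work as AI-written
honest_submissions = 500     # assumed: honest submissions graded in one term

expected_wrongful_flags = false_positive_rate * honest_submissions
print(f"Expected wrongful accusations per term: {expected_wrongful_flags:.0f}")
# Output: Expected wrongful accusations per term: 5
```

Even one wrongful accusation can trigger the kind of nightmare scenario described above, and the expected count grows linearly with the volume of student work screened.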
Academic leaders must set the tone for the academy and clearly discourage reliance on AI detectors of questionable effectiveness. Duke University's Learning Innovation and Lifetime Education office published guidance that explicitly recommends against using AI detection software:
“We don’t recommend AI detection software as a part of your AI policy for three main reasons:
- The products are unreliable
- Detection software is biased against certain segments of learners
- As AI changes, detection software cannot keep up”
If faculty still opt for AI detection platforms, Duke suggests that they disclose this to students ahead of time and refrain from using detector results as the sole measure of whether a student has cheated.
2. Voice your support for incorporating AI tools in the classroom
While institutions increasingly encourage embracing AI in the classroom, faculty and students still need clear directives from academic leadership.
The University of Southern California’s academic senate published Instructor Guidelines for Student Use of Generative Artificial Intelligence for Academic Work that clearly embrace AI usage in the classroom:
- “Instructors should encourage USC students to explore generative artificial intelligence (AI), using these new tools to create, analyze, and evaluate new concepts and ideas that inspire them to generate their own academic work.”
The University of Illinois System’s Generative AI Guidance for Students page opens with an ‘Explore Generative AI’ section that encourages students to:
- “Experiment with various generative AI tools to understand affordances and limitations.”
- “Keep current on emerging technologies.”
- “Expect to encounter the unknown and to learn iteratively.”
3. Trust faculty to define the bounds of AI tool use within their courses, but push them to set clear course- and assignment-level guidelines
Faculty must clearly state what constitutes appropriate AI use for students, whether they're crafting AI-proof assignments or using AI tools as part of coursework. Students are less likely to follow rules on AI tool usage if faculty do not define them explicitly in course syllabi and assignment rubrics (e.g., some students might not consider submitting ChatGPT output plagiarism because the text does not come from a real person).
Ohio State University's (OSU) Teaching and Learning Resource Center urges faculty to set clear expectations for AI tool usage from the syllabus down to the individual assignment level. OSU provides faculty with a course template where they can explicitly define which AI actions are prohibited, permitted, encouraged, and required.
Access a collection of crowdsourced Syllabi Policies for AI Generative Tools, where instructors across institutions and disciplines share the AI policies from their own syllabi.
4. Facilitate ongoing faculty experimentation with AI tools’ role in pedagogy
There is no tried-and-true playbook for teaching with AI, which puts the onus on faculty to begin experimenting. We've identified three measures academic leaders can take to encourage and support faculty through this process.
Recommend that faculty review summative and formative data and student feedback to inform how they use AI tools in their courses
As faculty grapple with AI tools' role in their courses, they should review student engagement and performance as reflected in summative (e.g., essays, exams) and formative (e.g., quizzes, in-class activities) assessments. Faculty can also survey students directly for feedback on how to better accommodate and incorporate AI into coursework.
Ohio State University's AI: Considerations for Teaching and Learning guidelines proactively remind faculty that they can leverage focus-group evaluations conducted by the institution's instructional design consultant to gather student feedback.
Dedicate hands-on training time to build faculty comfort with teaching with AI
As with any new technology, faculty need training to master AI tools. We've surfaced three ways institutions are training faculty to use them.
- Online teaching with AI courses
Auburn University developed a Teaching with AI online Canvas course for faculty, more than 600 of whom have already completed it. The self-paced course lets faculty experiment with AI tools and redesign assessments for their own courses while receiving feedback.
- Discipline-specific AI workshops
The University of Mississippi launched a three-day AI Summer Institute for Teachers of Writing, with a $500 stipend for participants. In this workshop, 18 faculty attended sessions led by colleagues and produced deliverables for incorporating generative AI into their courses.
- AI tools coursework consultations
The IDEA Lab at Imperial College London's Business School works with faculty to stress-test how AI tools can be used to complete coursework, so faculty can redesign assessments to accommodate effective and appropriate use of AI tools. The IDEA Lab evaluates the use of AI tools in instruction and assessments on parameters such as accuracy, clarity, relevance, compliance, referencing, and ease of use.
Provide seed funding to enable faculty experimentation with AI tools in the classroom
As part of its Initiative on Pedagogical Uses of Artificial Intelligence (IPAI), Georgetown University funded about 40 faculty project proposals to develop and test innovative uses of AI across classroom settings. Funded projects ranged from engaging students in developing policy approaches to the harmful behaviors large language models (LLMs) can exhibit to exploring whether AI can serve as an in-class assistant for students or instructors.
Humanities faculty submitted the most proposals, with science and social sciences faculty tied for second. The provost's office appointed a task force of faculty and learning design specialists to design the fund and to evaluate and select proposals.