How to craft an AI acceptable use policy to protect your campus today
May 1, 2024 | By Abhilash Panthagani, Associate Director, Strategic Research, and Benjamin Zevallos, Research Analyst
Today’s relentless AI news cycle jumps from bullish accounts of widespread and inevitable AI use one day to cautionary tales of its perils the next. Yet few higher education institutions have developed policies or guidelines governing generative AI tool usage for faculty, staff, and students.
While higher education leaders should encourage campus members to explore AI tools, they must also help users understand the immediate risks and guide them on how to use AI responsibly. The AI horror stories and scandals filling our newsfeeds (from Samsung’s sensitive data leaks to CNET discreetly using ChatGPT to author articles) distract leaders from the simple steps they can take to mitigate many of these risks.
By creating acceptable use policies for generative AI, or supplementing existing policies with AI guidelines, IT leaders can educate their campus about AI tools and establish terms of usage to protect against serious risks.
What is an acceptable use policy?
A baseline set of rules applied by an institution to ensure community members use technology responsibly.
Our EAB research team examined generative AI acceptable use policies and guidelines within higher education and across industries to identify four main components that should shape your institution’s guidance. Read more about each component below.
1. Spell out exactly what you can and cannot use generative AI tools for
Policies and guidelines are not meant to stymie AI tool usage but to encourage the safe and productive use of these tools. Guidance should simply clarify for campus users what they can use AI tools for and why certain applications may be inappropriate. Every institution should define the appropriate and inappropriate applications that best fit its unique needs. Below are examples collected from across higher ed.
Examples of appropriate and inappropriate applications of generative AI tools
Sourced from higher education institutions and out-of-sector organizations
Appropriate applications
- Syllabus and lesson planning: Instructors can use generative AI to help outline course syllabi and lesson plans, getting suggestions for learning objectives, teaching strategies, and assessment methods. Instructors may also submit course materials they have authored themselves (such as course notes).
- Professional development and training presentations: Faculty and staff can use AI to draft materials for potential professional development opportunities, including workshops, conferences, and online courses related to their field.
- Personalized student support: tutoring, translating, academic/career advising, easing administrative processes, brainstorming, editing, accessibility tools and assistive technology.
- Research assistant: finding and summarizing literature, sorting and analyzing data, predictive modeling, creating data visualizations.
- Administrative assistant: automating tasks, drafting/revising communications (e.g., email), transcribing audio.
- Event planning: AI can assist in drafting event plans, including suggesting themes, activities, timelines, and checklists.
Inappropriate applications
The University of Wisconsin–Madison created a “prohibited use and relevant policies” table that outlines all prohibited uses of generative AI tools, tying each to relevant policies and explaining why it is prohibited.
Prohibited use | Relevant policy | Explanation |
---|---|---|
You may not use AI-generated code within institutional IT systems or services without having it reviewed by a human to verify it doesn't have malicious elements. | UW-503 Cybersecurity Risk Management, SYS 1039 Risk Management, UW-124 HIPAA Security Risk Management | Use of malicious code in IT systems or services may threaten or increase the vulnerability of systems and the university data such systems store or transmit. |
You may not use AI tools or services to generate content that enables harassment, threats, defamation, hostile environments, stalking, or illegal discrimination. | UW-146 Sexual Harassment and Sexual Violence, Regent Policy 25-3 Acceptable Use of Information Technology Resources | UW-146 prohibits stalking. Regent Policy 25-3 prohibits the use of System IT resources to perpetrate harassment or stalking or to violate laws or policies. |
You may not use AI tools or services to generate content that helps others break federal, state, or local laws; institutional policies, rules, or guidelines; or licensing agreements or contracts. | Regent Policy 25-3 Acceptable Use of Information Technology Resources | System IT resources may not be used to violate laws, policies, or contracts. |
You may not direct AI tools or services to generate content that facilitates sexual harassment, stalking, or sexual exploitation. | UW-146 Sexual Harassment and Sexual Violence, Regent Policy 25-3 Acceptable Use of Information Technology Resources | UW-146 prohibits sexual harassment, stalking, and sexual exploitation. Regent Policy 25-3 prohibits the use of System IT resources for harassment and stalking, as well as for storage, display, transmission, or intentional or solicited receipt of material that is or may reasonably be regarded as obscene, sexually explicit, or pornographic. |
You may not use AI tools or services to infringe copyright or other intellectual property rights. | Regent Policy 25-3 Acceptable Use of Information Technology Resources, SYS 346 Patents and Inventions, UW-4008 Intellectual Property Policy for Research | System IT resources may not be used to violate copyright or other intellectual property laws. Entering copyrighted material into a generative AI tool or service may effectively result in the creation of a digital copy, which is a copyright violation. Feeding copyrighted material into a generative AI tool or service could “train” the AI to output works that violate the intellectual property rights of the original creator. In addition, entering research results into a generative AI tool or service could constitute premature disclosure, compromising invention patentability. |
2. Specify the types of data users can share with AI tools
First and foremost, IT leaders must educate campus users that any data entered into free, public AI tools (e.g., ChatGPT) may be used as training data and may therefore end up outside the university’s control. They should also remind users when they can turn this data sharing off, as is possible with ChatGPT.
Then, IT leaders must clearly define when users are permitted to share different types of data with approved AI tools to ensure data privacy and security. UW–Madison emphasizes that users may not enter any sensitive, restricted, or protected data into AI tools, defined as:
- FERPA-protected information, such as:
- Wiscard ID photos
- University Directory Service data
- Work produced by students to satisfy course requirements
- Student names and grades
- Student disability-related information
- Health information protected by HIPAA
- Information related to employees and their performance
- Intellectual property not publicly available
- Material under confidential review, including research papers and funding proposals
- Information subject to export control
UW-Madison further specifies that users can only share institutional data if the data is public (low risk) or if the AI tool or service has passed appropriate internal reviews.
3. Advocate for licensed and institutionally developed AI tools, and delineate their distinct user guidelines
There are several benefits to using licensed AI tools (e.g., Microsoft 365 Copilot) and institutionally developed tools (e.g., University of California San Diego’s TritonGPT):
1. Safer to use with more types of data
Institutions can negotiate security agreements with vendors for licensed AI tools to ensure data is appropriately handled and disposed of. With institutionally developed tools, internal developers and data scientists have full insight into where data goes and how it is retained.
2. Better equipped for specific applications
Licensed and institutionally developed AI tools are trained with pertinent data sources to excel in specific applications. For example, UC San Diego’s TritonGPT is trained with a variety of data sources specific to UC San Diego, including its Business Analytics Hub, Course Catalogue, Policies, and Student Financial Solutions. As a result, TritonGPT is optimized for certain tasks, such as creating job descriptions for campus.
Indiana University’s “Acceptable uses of generative AI services at IU” communicates the university’s support for its licensed tool, Microsoft 365 Copilot, over public tools and outlines the expanded types of data it may be used with:
“At IU, Microsoft 365 Copilot is available for use by IU faculty and staff, and it is the recommended way to use generative AI within the IU environment. As part of the university’s enterprise agreement with Microsoft, Copilot is approved to interact with data classified up to and including University-internal data.”
Examples of University-internal data include:
- University identification (UID) numbers
- Student home addresses and phone numbers
- Employee offer letters, faculty tenure recommendations
- Basic floor plans showing egress routes and shelter areas
Michigan State University similarly endorsed Copilot usage but warned users not to share any data with Copilot that they would not share with public tools until the tool had been formally reviewed.
4. Educate campus on the limitations and weaknesses of AI
Beyond providing explicit restrictions, institutions must also educate campus users on the shortcomings of AI tools so they can use them safely.
Michigan State University’s Interim Guidance on Data Uses and Risks of Generative AI includes a section on “Additional risks and limitations of generative AI”:
- Misinformation and inaccuracies: “Generative AI may generate responses that are not always accurate or up to date. Users should independently verify the information provided by generative AI, especially when it comes to specific facts or rapidly evolving subjects.”
- Bias and unintentional harm: “Generative AI can inadvertently reflect biases present in the training data. It is crucial to critically evaluate and contextualize the responses generated by generative AI to ensure fair and unbiased information dissemination.”
- Inappropriate content: “Although generative AI providers may have made efforts to filter out inappropriate content, generative AI may produce or respond to content that is offensive, inappropriate, or violates ethical standards.”
- Algorithmic implications: “AI can deduce and infer algorithmic criteria other than original intent. This situation can lead to or exacerbate potential bias through the inclusion or reweighting of unintentional variables.”