How generative AI is expanding what is possible in research

January 17, 2024, By Molly Bell, Research Analyst

Generative AI (GAI [1]) is ushering in a new era in research, offering unprecedented possibilities and reshaping the landscape of scientific exploration. From detecting when produce is ready to harvest to uncovering text in damaged artifacts, and even talking to whales—these are no longer science fiction concepts, but a few of the early applications of GAI in research. GAI allows faculty to transcend historical bounds of research by expanding human capabilities.

This is no secret, as governments around the world are committing unprecedented amounts of funding to capitalize on the promise of AI research.

  • U.S.: Invests $140 million to establish seven new national AI research institutes after spending $3.3 billion on AI research in 2022
  • Canada: Awards nearly $1.4 billion for AI and robotics research in largest-ever federal university grant
  • U.K.: Announces £54 million investment to develop trustworthy AI research
  • Germany: Plans to double public funding for AI research to nearly €1 billion over the next few years

Our research team at EAB has selected some early examples of how GAI has already expanded what is possible in research across all disciplines. In addition, we explore how CIOs can support the use of GAI in research by positioning IT staff as faculty AI consultants.

[1] Generative Artificial Intelligence (GAI) is the latest development in AI that refers to deep-learning models that can generate high-quality text, images, and other content based on the data the model was trained on.

Generative AI case studies

Explore three examples of innovative GAI research and advancements.

Google DeepMind’s AlphaFold solves the protein folding problem

Historical limitation

Scientists have spent decades trying to map the structure of individual proteins based on the sequence of their amino acids. Previous approaches were trial-and-error and required multimillion-dollar specialized equipment (e.g., nuclear magnetic resonance and x-ray crystallography) to map just a handful of proteins.

AI innovation

AlphaFold, developed by Google DeepMind, is an AI system that can accurately predict protein structures almost instantly. It was trained on the sequences and structures of around 100,000 known proteins. AlphaFold has already been recognized as a solution to the grand challenge of protein folding by CASP (Critical Assessment of protein Structure Prediction), a community in which researchers benchmark their predictions against real experimental data. The AlphaFold database now contains more than 200 million protein structures and has been accessed by over a million users in 190 countries.
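AlphaFold's predictions are freely available through the AlphaFold Protein Structure Database, so research groups can pull structures programmatically rather than running experiments. Below is a minimal Python sketch of that kind of lookup; the endpoint path, response fields, and the example UniProt accession are assumptions based on the public database and should be checked against its current API documentation.

```python
# Minimal sketch: fetching a predicted structure from the AlphaFold Protein
# Structure Database. The endpoint path and response fields are assumptions
# drawn from the public AlphaFold DB; consult its current API documentation.
import requests

UNIPROT_ACCESSION = "P69905"  # human hemoglobin subunit alpha, used as an example

# Assumed prediction endpoint keyed by UniProt accession
url = f"https://alphafold.ebi.ac.uk/api/prediction/{UNIPROT_ACCESSION}"
response = requests.get(url, timeout=30)
response.raise_for_status()

entries = response.json()  # assumed to be a list of prediction records
for entry in entries:
    # 'pdbUrl' is an assumed field pointing at the predicted coordinates
    pdb_url = entry.get("pdbUrl")
    if pdb_url:
        structure = requests.get(pdb_url, timeout=30).text
        with open(f"{UNIPROT_ACCESSION}_alphafold.pdb", "w") as fh:
            fh.write(structure)
        print(f"Saved predicted structure for {UNIPROT_ACCESSION}")
```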

This system has potentially saved the research world up to one billion years of progress—and trillions of dollars.

AlphaFold enables research that helps scientists understand and tackle diseases that cause millions of deaths globally. A team at the University of Cambridge uses the system to search for a more effective malaria vaccine. Researchers at the University of Colorado, Boulder are studying antibiotic resistance, a problem that causes 2.8 million infections in the U.S. alone each year.

AI Screener enables universal early screening for children’s speech challenges

Historical limitation

More than 3.4 million children in America require speech and language services, but there is a stark shortage of speech-language pathologists to diagnose and provide those services. Even when a child is able to see a speech-language pathologist, there are limits to what humans can observe and capture.

AI innovation

The AI Institute for Exceptional Education, located at the University at Buffalo (UB), is working to develop artificial intelligence (AI) systems that aid young children with speech and language processing challenges. Researchers are developing two systems that use AI to capture and analyze children’s speech patterns in real time, with initial plans to deploy prototypes to roughly 80 classrooms, reaching 480 kindergartners.

The AI Screener system will analyze video and audio streams of children’s natural interactions, capturing speech, facial expressions, and other gestures. The system will use the data it collects to produce weekly summaries of these interactions highlighting vocabulary, pronunciation, video snippets, and more. Teachers can then use these summaries to monitor students’ speech and language processing abilities and, if required, suggest a more formal evaluation with a speech-language pathologist.

The AI Orchestrator system is an app that recommends personalized content tailored to students’ needs. It will continuously monitor students’ progress and adjust lesson plans based on collected data.
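The institute has not published implementation details, so the Python sketch below is purely illustrative of the data flow just described: per-interaction observations aggregated into a weekly, teacher-facing summary that can flag when a formal evaluation might be warranted. All class names, fields, and thresholds are hypothetical.

```python
# Illustrative sketch only: the AI Institute's implementation is not public.
# It mirrors the described workflow -- per-interaction observations rolled up
# into a weekly summary a teacher could review. All names are hypothetical.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Observation:
    student_id: str
    word: str
    pronounced_correctly: bool
    clip_path: str  # pointer to the video snippet for this utterance

def weekly_summary(observations: list[Observation]) -> dict:
    """Aggregate a week of observations into a teacher-facing summary."""
    vocab = Counter(o.word for o in observations)
    flagged = [o for o in observations if not o.pronounced_correctly]
    return {
        "unique_words_used": len(vocab),
        "most_frequent_words": vocab.most_common(5),
        "pronunciation_flags": [(o.word, o.clip_path) for o in flagged],
        # A real system would apply screening thresholds set by speech-language
        # pathologists before suggesting a formal evaluation.
        "suggest_formal_evaluation": len(flagged) > 20,
    }
```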

SynGatorTron’s synthetic data eliminates patient privacy concerns

Historical limitation

To protect patient privacy in medical and healthcare research, researchers are required to de-identify patient data in electronic health records, such as names, addresses, and birthdates. Because the process is time-consuming and labor-intensive, it slows important medical research. And even after all personal information has been removed from a record, there is still a chance the patient could be re-identified.
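As a toy illustration of why rule-based de-identification is both tedious and imperfect, the snippet below (not any production pipeline) redacts obvious identifiers yet leaves indirect clues that could still re-identify a patient; the note, patterns, and identifiers are all invented.

```python
# Toy illustration of why rule-based de-identification is brittle: simple
# patterns catch obvious identifiers but miss indirect ones (rare diagnoses,
# small-town locations) that can still re-identify a patient.
import re

note = "Jane Doe, DOB 04/12/1987, seen at Micanopy clinic for a rare metabolic disorder."

redacted = re.sub(r"\b\d{2}/\d{2}/\d{4}\b", "[DATE]", note)            # birthdates
redacted = re.sub(r"\b[A-Z][a-z]+ [A-Z][a-z]+\b", "[NAME]", redacted)  # naive name pattern

print(redacted)
# The clinic location and unusual diagnosis survive redaction -- exactly the
# residual re-identification risk that motivates synthetic data.
```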

AI innovation

University of Florida Health and Nvidia partnered to develop SynGatorTron, a 5-billion-parameter language model that generates synthetic patient profiles based on the health records it has learned from. Synthetic patient data mimics real populations and can be used for medical research without the risks and privacy concerns that come with real patient data.

The model was trained on a decade of data representing more than 2 million patients. That training allows it to create synthetic profiles that mimic real health records, which can be used to augment limited data sets, support collaboration across institutions, supply more data for underrepresented populations, and serve as control groups in clinical trials.
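SynGatorTron itself is not publicly packaged, but the general technique is sampling synthetic clinical text from a causal language model. The sketch below shows that pattern with a generic model as a stand-in and an invented prompt format; it is not the SynGatorTron pipeline.

```python
# Minimal sketch of generating synthetic clinical notes with a causal language
# model. "gpt2" is a generic stand-in, not SynGatorTron, and the prompt format
# is an assumption for illustration only.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Synthetic patient record (no real patient data):\n"
    "Age: 54\nSex: F\nChief complaint:"
)

samples = generator(
    prompt,
    max_new_tokens=80,
    num_return_sequences=3,
    do_sample=True,
    temperature=0.9,
)

for i, sample in enumerate(samples, 1):
    print(f"--- synthetic record {i} ---")
    print(sample["generated_text"])
```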

What is IT’s role in enabling faculty AI research?

While GAI is expanding the possibilities for research in every discipline, implementing and using this technology across universities depends on support from CIOs and IT departments. As more researchers look to use GAI in their projects, IT staff will need to take on the role of AI consultants to meet their needs.

Four ways IT staff support GAI in research

  • AI tool selection

    Identify which model is best for the use case (e.g., OpenAI’s API versus Meta’s open-source Llama 2) and provide knowledge on AI risks and data regulations (see the sketch after this list)

  • AI tool development and training

    Train and adapt (e.g., prompt engineering, fine-tuning) different AI models for specific research applications

  • Model monitoring and maintenance

    Provide ongoing monitoring for model degradation, adding or removing data as needed and testing outputs for accuracy; monitor system usage to ensure sufficient processing power and data storage

  • Infrastructure support

    Identify and manage compute power and storage necessary for AI projects (e.g., GPUs, on-prem hardware, cloud storage)
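
As a concrete illustration of the tool-selection tradeoff in the first item above, the sketch below sends the same research prompt to a hosted API and to a locally hosted open model. The model names are examples only: Llama 2 weights are gated and require access approval, and a hosted call sends data off campus, which matters for regulated research data.

```python
# Sketch of the "AI tool selection" tradeoff: the same prompt sent to a hosted
# API (OpenAI) versus a locally hosted open model (Llama 2 via Hugging Face).
# Model names are examples; availability and access terms may change.
from openai import OpenAI
from transformers import pipeline

prompt = "Summarize the main limitation of x-ray crystallography for protein structure work."

# Option 1: hosted API -- minimal setup, but data leaves the institution.
client = OpenAI()  # reads OPENAI_API_KEY from the environment
hosted = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": prompt}],
)
print(hosted.choices[0].message.content)

# Option 2: local open-source model -- data stays on institutional hardware,
# but IT must provision the GPUs and storage to serve the weights.
local_generator = pipeline("text-generation", model="meta-llama/Llama-2-7b-chat-hf")
print(local_generator(prompt, max_new_tokens=200)[0]["generated_text"])
```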

Spotlight: University at Buffalo’s IT department

The University at Buffalo (UB) repositioned three IT staff into newly created AI-focused “principal architect” roles that are primarily charged with supporting faculty AI tool use. Now the principal architects serve researchers across disciplines as a type of campus AI consultant. They can help researchers identify which AI model is best suited to their project, procure access to AI tools and hardware, fine-tune AI models, and coach research teams on using these systems.

The principal architects have already worked with the School of Management to train its own instance of Meta’s open-source Llama with data from public data sets for use in research. UB’s CIO plans to hire three more AI-focused staff to assist faculty with using GAI, bringing skills in programming, data visualization, and data engineering.
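UB has not published the details of that training pipeline, but adapting an open Llama model on public data commonly follows a parameter-efficient fine-tuning pattern like the sketch below. The dataset, model size, and hyperparameters here are illustrative stand-ins, not UB's actual configuration.

```python
# Hedged sketch of adapting an open Llama model on a public dataset with LoRA.
# Dataset, model size, and hyperparameters are illustrative stand-ins only.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

base_model = "meta-llama/Llama-2-7b-hf"  # gated weights; requires access approval
tokenizer = AutoTokenizer.from_pretrained(base_model)
tokenizer.pad_token = tokenizer.eos_token

model = AutoModelForCausalLM.from_pretrained(base_model)
# LoRA keeps the base weights frozen and trains small adapter matrices,
# which keeps GPU memory needs within reach of campus hardware.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Any public text dataset works as a stand-in for the public data UB describes.
dataset = load_dataset("ag_news", split="train[:1000]")
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=dataset.column_names,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="llama2-research-adapter",
                           per_device_train_batch_size=1,
                           num_train_epochs=1),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```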

Our research team at EAB is continuing to explore the possibilities of GAI for colleges and universities as the landscape develops. For more information on innovative ideas for using AI across campus, see our infographic on Unlocking AI’s Potential in Higher Education.

Molly Bell

Research Analyst
