Podcast

Will ChatGPT Ruin or Improve Higher Education?

Episode 138

February 14, 2023 31 minutes

Summary

EAB’s Michael Fischer and Ron Yanosky discuss whether ChatGPT represents an existential threat to higher education. The two review the current state of play and examine ways that institutions are adapting to ensure students complete assignments and demonstrate mastery of course material without the aid of artificial intelligence “shortcuts.”

They also explore ways that institutions might evolve to incorporate the use of AI to complement but not replace a student’s own creative thought.

Transcript

[music]

0:00:10.2 Speaker 1: Hello and welcome to Office Hours with EAB. Well, ChatGPT has arrived and educators everywhere are frankly terrified. Many aren’t certain whether they can ever truly know if any take-home assignment that gets turned in from this day forward reflects a student’s own insights and grasp of the material, or whether the work was actually generated in large part by a computer algorithm. Our guests examine ways that colleges are adapting to artificial intelligence and attempting to chart a path forward that embraces the potential of AI to accelerate learning while guarding against the obvious risks. Give these folks a listen and enjoy.

0:00:56.6 Michael Fischer: Welcome to the podcast. My name’s Michael Fischer, I’m a researcher here at EAB and a frequent contributor to Office Hours, and I’m joined today by my colleague Ron Yanosky. Ron, thanks for being here.

0:01:07.1 Ron Yanosky: Thanks for inviting me, Michael.

0:01:10.6 MF: Now, I cannot prove this, but I’m telling you, dear listener, that we are not AI bots that are being streamed to your ears right now. Ron and I are real people. But we are going to be talking about, today, some of the major advances in AI technology that many of you have probably been reading about in the news and that have some pretty significant implications on higher education. Ron, I know that you’ve been researching and thinking about this for decades as someone who’s worked very closely with Chief Information Officers and IT leaders across universities and across EDUCAUSE. How have, historically, we thought about AI in higher education?

0:01:54.4 RY: Well, Michael, you’re right. The topic has been around for a very long time. In fact, the first work in AI was done in the 1950s, demonstration projects at that time. And it had been conceptualized well before that, in the famous 1950 paper where Alan Turing proposed the Turing test: could you build an intelligent agent that would be indistinguishable from a human being? That became one of the classic tests in computer science. And there’s been a lot of progress along the way, often linear progress, but I think things have turned radically in the last year or so. And a lot of that progress has been, for many people, summed up in their experience with ChatGPT. But historically, in IT, we have seen a lot of these moments where it seemed like AI had turned a corner and was suddenly capable of doing work that would open up whole new areas of application of the technology to practical problems. And often, unfortunately, those have turned out to be disappointments, or at least the technology proved not as powerful as you might have thought. So there has been a certain tendency to be skeptical about AI.

0:03:22.3 RY: And there’s this notion that AI is the technology of the future, and it always will be. But I think that we have reached a point where it is gonna start to be able to take on cognitively complex tasks that previously we never really thought machine intelligence could do. And that’s the moment that we find ourselves in.

0:03:44.6 MF: Certainly there have been corporations and even some universities that have made advances in AI that are quite impressive up to this point. But what I really think has changed over the last six months is the ease of public access to these AI technologies. And so often that’s what drives innovation and adoption of a technology: making it affordable and accessible enough for anyone to pick it up and start using it in their day-to-day.

0:04:14.5 MF: It’s funny, I think that to some extent the footnotes of AI’s history will include a reference to Kermit the Frog, because that’s how I first heard about these technologies: the DALL-E AI art generation tool was being used by people across social media to render Kermit the Frog in different artistic styles and settings, from the style of the Norwegian painter Edvard Munch to Star Wars scenes rendered cinematographically. It’s really fascinating to see some of the early ways that people are trying to apply these tools, both creatively and efficiently, to their day-to-day jobs.

0:04:53.8 RY: Sure. And the truth is, a lot of it has been going on quietly in the background. Tools that we do use pretty frequently have just gotten better in a linear fashion. Think of Google Translate, if you used it in its early days, it was hard to use and it didn’t do that great of a job. More recently, you’ll find it embedded in all kinds of places and the translations are pretty decent. And tools like an Amazon Alexa or the Siri agent on your iPhone, they get better at a slow enough pace that you just get accustomed to them. I think with ChatGPT and you’ve mentioned DALL-E, and there are actually some other tools bouncing around out there, people are now getting their hands on things that have pulled together a lot of different strands of AI that were less visible before, but now they’re suddenly coming together. In some ways, we’ve seen that happen in the past, think of when the iPhone came out, it kind of was a new step in the evolution of smartphones that had actually been around, there were earlier models we’ve all forgotten about now. But it just brought a bunch of stuff together at one point that just felt compelling, and I think that’s where we’re at with some of the new tools now, in AI.

0:06:15.1 MF: And certainly ChatGPT may be the one that has truly mainstreamed the idea of AI, so maybe it’s worth us pausing. Ron, give us a sense of what exactly this AI is, what its existing capabilities are, and why it seems so novel from a capabilities perspective, alongside the relative ease of access it gives people to play around with these new types of technology.

0:06:41.0 RY: So ChatGPT is a product that was rolled out at the end of November last year by OpenAI, which is a company that Microsoft has made big investments in, as have other players, Elon Musk being an early backer. So there’s been years of development behind this and lots of venture capital money invested in the product. ChatGPT is built on a platform from OpenAI that is a distinct, separate product called GPT. And that’s basically a platform for developing AI applications using what are called Large Language Models, or LLMs. It takes a very large amount of content, mostly text-based content, and it draws inferences from the patterns in that content that help it to do things like interpret a prompt or a question, or formulate a response, using what are basically probabilistic methods.
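To ground the probabilistic idea Ron describes, here is a minimal, hypothetical sketch using a toy bigram model rather than an actual large language model; the corpus, function names, and sampling scheme are all invented for illustration:

```python
import random
from collections import Counter, defaultdict

# A toy corpus standing in for the vast text an LLM trains on.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows which: a bigram model, vastly simpler
# than a transformer, but the same probabilistic principle.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word_distribution(word):
    """Return P(next word | current word) from observed counts."""
    counts = follows[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

def generate(start, n, seed=0):
    """Sample a short continuation, one word at a time."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(n):
        dist = next_word_distribution(out[-1])
        if not dist:  # no observed successors, stop
            break
        words, probs = zip(*dist.items())
        out.append(rng.choices(words, weights=probs)[0])
    return " ".join(out)

print(next_word_distribution("the"))  # "cat" is the most likely successor
print(generate("the", 5))
```

A real LLM conditions on thousands of prior tokens with a neural network rather than one prior word with a lookup table, but the core move, turning observed text into a probability distribution over what comes next and sampling from it, is the same.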

0:07:53.3 RY: ChatGPT has been optimized for the chat format to power things like chatbots, intelligent agents and so on. So it has a particularly conversational quality to it. And that conversational quality, I think, is part of why people find it so compelling. It has this sort of authoritative voice, maybe a little too authoritative, because it doesn’t have an underlying intelligent understanding of the world, but it has been trained on very large quantities of data. And so when you give it a prompt, it seems like it can answer your questions across all different kinds of areas; in fact, it can write computer code and do lots of other practical things.

0:08:42.0 RY: So there’s the ability to interact directly with the agent without anything in the way: you don’t have to buy something, you just go online, create an account with OpenAI and start using the thing. And then the breadth of things that it seems to be able to do really does give you this feeling that there’s an intelligence behind it. It’s a bit of an illusion, and ChatGPT gets a lot of things wrong, in fact. But it does feel more like a conversational experience than the previous tools people are familiar with.

0:09:22.9 MF: And certainly, because it’s a self-learning mechanism, the more data it collects from people participating in these early experiments, the more accurate and responsive it should become over time.

0:09:34.3 RY: And that’s why OpenAI has made it available that way. Now ChatGPT, as it’s used now, is trained on content through 2021, so you’ll find that if you ask about current events, something that happened pretty recently, it’s less able to answer those sorts of things. It does have a tendency to try to formulate an answer sometimes about things that it really hasn’t got a lot of content on, but it will very explicitly say, well, I can’t tell you about things that happened this month. Just the same, yes, clearly, as you train these AIs on more content going forward, and as you get more feedback about how the responses have been received, and they are collecting that information, you should be getting a better, more accurate, more reliable AI.

0:10:28.1 MF: Let’s talk about the potential implications for higher education. First, perhaps application-wise, where could we see this start being used on campus? I’ll share two early experiments that I saw in the early days of the roll-out. One was from a higher education consultant who asked ChatGPT to craft a university strategic plan, and the document that it created was basically indistinguishable from ones that universities around the world had taken months and years to create with multiple inputs from stakeholders across their campus.

0:11:03.3 MF: And the second was from a business school professor at a university in the northeastern United States, who asked one AI bot to craft an essay prompt and a grading rubric for that prompt. He then took that information, gave it to another AI and asked it to write an essay in response to that prompt, and then took the essay back to the original AI and asked it to grade the essay based on the prompt and rubric it had originally created. Fascinating potential implications for pedagogy and for administrative activity on campus. Where do you think some of the early leads might be when it comes to application?

0:11:46.6 RY: Well, right now, of course, there’s a certain amount of shock and awe in higher education around the recognition that ChatGPT and other kinds of tools can generate things that look a lot like the assignments that we expect of students. And so right now we’re seeing a lot of our partner institutions developing some kind of response to the student use of ChatGPT: re-emphasizing academic honesty policies, reminding students that they should not represent work that was done by an intelligent agent as if it were their own, and that’s completely reasonable. I do think that we have most of the tools in place to address that particular concern. But there’s no doubt about it, students are going to be using these tools to complete assignments. So we need to be thinking about what kind of positive uses we can apply AI to and how we can change pedagogy and assignments in such a way as to incorporate it. Like you mentioned, Michael, we are seeing faculty members who are using ChatGPT or some other tool as a prompt to initiate a conversation that might be about critical reading or the assessment of an output.

0:13:10.6 RY: I don’t think that at the moment we are seeing tools that could just, say, grade all the midterms in a course on a reliable basis. Some of this stuff is really a simulacrum; when you really poke into it, you start to realize that it’s not the real thing. But I do think that we need to be considering how those tools can be used creatively to help our students master content, and to understand the differences between a tool that can help you and creative work that you can make contributions to, and so on. Of course, there are many other uses outside the classroom. I think that we could see, for example, AI-based intelligent agents doing things like answering the common questions that come into the IT help desk, such as how to fill out a particular form.

0:14:15.3 RY: In many cases we are already starting to see institutions experiment with that sort of thing; we were seeing that before the pandemic. And then, with all the concentration on the pandemic, we’ve kind of lost the thread a bit on that. But those are early uses that are starting to creep in around campus. I think we’ll see institutions picking up on that again and starting to think about how, for example, we might use AI to address some of the talent recruitment and retention problems, or the labor shortage issues, that many institutions face.

0:14:49.2 RY: I will say that’s not a trivial step. It’s not straightforward to take the facility you see in ChatGPT and have it reliably answer questions about, say, financial aid, in a way that you can really trust it to give your students good, solid information.

0:15:08.0 MF: Certainly some of the early corporate applications of this technology that might port over to university life have been things like immediate translation, or taking initial contextual language and making it sound more like a natural English speaker. Or drafting first versions of job descriptions, communications and press releases that a communications or HR specialist can then make slight modifications and tweaks to, which speeds up the process of getting these out. But I certainly think there are going to be some major implications across the university’s administration and functions that blend academic and administrative life together.

0:15:54.2 MF: We certainly are hearing from the people programming these AIs that they are trying to put in place safeguards that will create imprints or watermarks within outputs to prove that they were created by an artificial intelligence. But as this becomes more mainstream and more widespread, I suspect that the hackers and the innovators will always outpace the censors and the safeguarders. And so, if technologies like this are as commonplace a decade from now as a calculator is on an iPhone, we may have to have a conversation about how we do evaluative tests for writing. Do we have to put people in a room without technology, without access to the internet, and have them write things by hand or on a computer that isn’t connected to the network, to guarantee they were the ones who wrote it?
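For readers curious what such a watermark might look like mechanically, one published proposal (Kirchenbauer et al., 2023) has the generator bias its word choices toward a pseudo-random “green list” derived from each preceding word; a detector who knows the scheme then counts how often words land on their green list. The sketch below is a heavily simplified, hypothetical illustration of that idea, not any vendor’s actual implementation:

```python
import hashlib

def green_list(prev_word, vocab, fraction=0.5):
    """Deterministically select a 'green' subset of the vocabulary,
    seeded by a hash of the previous word. A watermarking generator
    would nudge its sampling toward these words."""
    greens = set()
    for w in vocab:
        h = hashlib.sha256((prev_word + ":" + w).encode()).digest()
        if h[0] < 256 * fraction:  # first hash byte decides membership
            greens.add(w)
    return greens

def green_fraction(text, vocab):
    """Detector side: fraction of words that fall in the green list
    for their context. Ordinary human text hovers near `fraction`;
    watermarked text scores noticeably higher."""
    words = text.split()
    hits = sum(
        1 for prev, w in zip(words, words[1:])
        if w in green_list(prev, vocab)
    )
    return hits / max(len(words) - 1, 1)
```

The statistic is detectable precisely because unwatermarked writing should land near the baseline fraction while a biased generator drifts above it; the arms-race concern Michael raises is that paraphrasing or rewording the output can wash the signal out.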

0:16:42.3 MF: Or perhaps the admissions essay will go away because there’s no guarantee that the student who’s applying was the actual one to write it, versus it was created very quickly by one of these artificial intelligences that are out there. And that will have major implications on how we admit, how we evaluate, how we support staff, and in the ways that we train people on the ethics and morals around using these technologies.

0:17:07.8 RY: Yeah, there are a lot of issues, really very fundamental issues about creativity, about reliability, and about just exactly when you can perceive that something is artificial versus real, especially since it has so much power to enhance what is, at its core, maybe some kind of human contribution. You mentioned tools; there are already tools for detecting generated content from things like ChatGPT. I’m sure that those tools will get better. Our understanding is that Turnitin, for example, a very familiar tool that a lot of instructors use to detect plagiarism, is working on detection tools.

0:17:50.9 RY: OpenAI has a toolkit for detecting it. So they have acknowledged all along that there is a need to do this. But it’s the same sort of dynamic that we’ve seen in cybersecurity, where there’s a constant arms race: the ability to defend against, protect from and identify cyber threats evolves alongside the malefactors who want to do something bad, whose own technology gets better over time. I will put in one point of caution to our partners here, which is that it is still a fairly labor-intensive and expensive business to train an AI to do a specific local function. It looks like ChatGPT can do all kinds of things, but if you wanted to do some particular thing reliably on your campus, we have found that partners that have done pilot projects in that area can succeed with it, but it often takes time and labor resources, perhaps larger than you might expect, given the hype that we’re seeing right now around the magic of AI.

0:19:03.4 MF: And certainly, amidst all the awe and wonder, I think there has been a lot of doom-and-gloom response in the press and among some early respondents to this technology, but there are some really powerful and remarkable applications that will benefit our university campuses. I think about sustainability and our ability to achieve carbon net-zero or other ambitious green goals: having AI support to make slight tweaks to utilities and building automation systems, rapidly making adjustments as people move through space, and accurately predicting where people might be, and so which spaces should be heated or lit at various times. That will be extremely powerful, and maybe the only way that we’re able to actually become carbon neutral or carbon positive in the long run, given the limited resources there.

0:19:57.5 RY: Exactly.

0:19:57.5 MF: There’s also potential for using the technology to jump-start educational efforts and eliminate some of the barriers to entry into things like cinematography or art or creative writing, by giving people prompts and tools that they can use to put pieces together or sequence their initial concepts and ideas, and to have almost an assistant, a thought partner, to bounce ideas off of and get the inspiration they need to craft something truly creative and interesting.

0:20:32.2 RY: Yeah, it opens up a lot of room to address some questions related to the vastness of the amount of data that we are surrounded by now and which it has proved very difficult to draw conclusions from or to make useful in advancing the institutional mission. You could imagine, for example, in a topic that you and I are both interested in, Michael, the so-called smart campus, one of the issues there has been that smart buildings with the sensors built into them that are constantly monitoring all kinds of building systems, they generate huge streams of data, but it’s difficult data to interpret and it has a tendency to create false positives, say that there’s something wrong when there really isn’t, et cetera. Some of the techniques for learning from large quantities of data that are central to the AI project could be brought to bear on that to help us realize some capabilities that have been elusive so far.
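As a concrete illustration of the false-positive problem Ron describes, here is a minimal, hypothetical sketch of the kind of fixed-threshold rule a naive building-monitoring system might apply to a sensor stream; the function name, window size, and threshold are all invented for illustration, and learning-based approaches aim to model normal behavior far more richly than this rolling z-score baseline:

```python
from statistics import mean, stdev

def anomalies(readings, window=10, threshold=3.0):
    """Flag readings more than `threshold` standard deviations away
    from the mean of the preceding `window` readings. On noisy,
    drifting sensor data, simple rules like this either miss real
    faults or fire constantly, which is the false-positive problem."""
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and abs(readings[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged

# A stable temperature stream with one genuine spike at the end.
temps = [20, 21, 20, 19, 20, 21, 20, 19, 20, 21, 20, 19, 20, 21, 50]
print(anomalies(temps))  # flags only the spike at index 14
```

The appeal of the large-data techniques Ron mentions is that they can learn what “normal” looks like per building, per season, and per time of day, instead of relying on one hand-tuned threshold.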

0:21:37.4 RY: Another thing I’ll mention is adaptive learning: the ability, when you have a student struggling with some kind of content and it’s turning out that the way you’re explaining it isn’t quite working for that student, and there might be parts of the content that they understand and other parts they don’t. AI could potentially be developed to explain content to a student patiently, trying different ways of looking at the issue until that light bulb goes on. So it could walk the student through the chain rule in calculus, or something like that, in a number of different ways, hopefully leading to their own enlightenment, a guide-on-the-side kind of thing, but with an AI twist.

0:22:24.9 MF: This is a rapidly evolving story. Just this morning, as I was preparing for our conversation today, Ron, I read that a couple of news agencies had announced that they were going to lay off certain members of their staff and replace them with artificial intelligence to generate the headlines and body text for various features. And there was a story about the video streaming service Twitch, where an AI was generating a month-long continuous episode of Seinfeld. It just never ended; it’s been going on for thousands of hours now, sequencing events again and again in a rudimentary but, I think, really powerful way. What should education leaders be doing at this point? It’s probably too early to go all in on AI on your campus; we’re not going to recommend that. But what are some of the early building blocks that you might suggest, whether you’re an IT leader or a president, a CBO, a facilities leader, or even just an interested stakeholder on campus, to start preparing higher education for what I think is probably an inevitable increase in AI involvement in our day-to-day lives?

0:23:31.7 RY: Yeah, and to extend that final thought there, Michael, the consumer world does tend to race ahead of what we do as an enterprise, as an institution. So I would dispense with the illusion that we can create an environment around AI that we can control. To some extent, we have to be reactive. I do think that on the most urgent question, academic honesty, which is dominating the conversation on campuses right now, we may in fact be better prepared than we think. So I would certainly suggest reviewing your existing academic honesty policies for any ambiguities or gaps around auto-generated assignment responses. But from what we’ve seen so far, most policies already robustly declare that students should not represent work from external sources as their own. Still, a joint statement from the provost, maybe with Academic Senate participation, would be a good, salient thing to do if your campus hasn’t done that.

0:24:33.2 RY: Beyond that, I would encourage partners to ensure that AI has a place on your strategic agenda. If you are responding to transformative forces, as many of our partners are, around enrollment growth, curricular changes, student success issues, DEIJ, etc., you should make sure that AI is a part of that conversation and that you’re investigating how it might assist with realizing goals in those areas. And I think you should be looking at inviting knowledgeable people onto campus, maybe some of your key technology partners, to see if AI is going to be embedded in systems that you rely on, and maybe bringing in some of the vendors that are entering the AI markets so that you can learn more about them. And of course, technology advisory partners like EAB, just to help you understand how AI might fit into your institutional strategy. We also may be redesigning some jobs. You mentioned that in journalism, some of what we might think of as entry-level sorts of jobs, generating routine content are…

0:25:43.3 RY: Now there are organizations contemplating generating that through AI. We may have similar kinds of job redesign opportunities on campus. And we have to think about how that’s going to change the way we prepare students for the job market themselves. So there are a bunch of different areas there. I can sum up by saying that AI is a special case of digital transformation, which is a topic that has been very much on the minds of higher ed leaders in recent years. And EAB has a framework for developing the capabilities essential to successful digital strategies and digital transformation. So that’s a line of work that I think is very relevant to the AI conversation.

0:26:38.3 MF: I would only add to those excellent immediate next steps that campus leaders should take this opportunity to realize that we are on the cusp of something very transformative. Imagine if you had known, back when the Internet was first being developed, how much of an impact it was going to have across your campus. If you had known in 2007 what the iPhone was going to mean 15 years later, or if you had known at the start of Facebook and Twitter and the social media movement how disruptive it was going to be to the lives of your students and your faculty and staff, what would you have done in the early days to prepare yourself for that now-inevitable future?

0:27:21.1 MF: We are potentially getting a taste of what might be a year, five years, 10 years from now. And I think the universities and colleges that start future visioning what they would want to do now and start making those investments will be best positioned to be flexible, dynamic, agile in their response to the new innovations and new stakeholder expectations of using this technology and interacting with it as it becomes more mature and more mainstream over the years to come.

0:27:53.2 RY: Yeah, great point, Michael. And I’ll just remind our listeners that EAB does facilitate these future-visioning kinds of conversations. There’s one point that I do want to make sure we cover during our conversation, Michael, and that is the significant controversy in the AI world about potential bias built into these systems. AI systems are very sensitive to the content that they are trained on. And that doesn’t necessarily require that the content itself be incomplete in some way, or that it explicitly contains some kind of biased content. AI systems draw inferences from large bodies of data that can be difficult to predict, but that will reflect the proportion of text used to describe certain issues, or even the values or behaviors that you find in society at large. So we have seen, for example, AI image-recognition tools that are very good at recognizing White men, but aren’t as good at recognizing women of color.

0:29:10.3 RY: Well, it turned out that that’s because the samples those systems were trained on were heavily weighted toward certain groups, reflecting the data that was collected at large, globally, to train those systems. So this is a very deep area. It’s one that is going to require a great deal of investigation and conversation on campuses, and it is not absolutely straightforward to understand, when you implement AI, to what degree it might be affected by these kinds of problems.

0:29:48.7 MF: And this may very well be one of the key areas where higher education needs to be the leader and expert, working with the innovators and entrepreneurs in this space. Ron, there’s so much more that we could chat about and not enough time today, but we’re keeping an eye on this story as it develops. I’m sure we’ll be back on one of these conversations with all of you in the future to discuss the latest and greatest in the evolution of AI and higher education. Ron, thanks for joining me today.

0:30:17.4 RY: Been a pleasure, Michael. Let’s do it again.

[music]

0:30:26.7 S1: Thank you for listening. Please join us next week when our experts offer tips for building stronger relationships with community-based organizations to boost your recruiting efforts. Until then, thank you for your time.