Podcast

Can Embracing AI Help Colleges Survive?

Episode 202

July 9, 2024 37 minutes

Summary

EAB’s Sally Amoruso sits down with Paul LeBlanc and George Siemens from Southern New Hampshire University to discuss how higher education institutions can reap the benefits of AI. Paul and George argue that university leaders have been slow to embrace AI despite the extraordinary opportunity the technology gives us to transform higher education for the better. This episode was excerpted from a recent Presidential Experience Lab event hosted by EAB and NVIDIA.

Transcript

[music]

0:00:11.6 Intro: Hello and welcome to Office Hours with EAB. Today’s episode was excerpted from EAB’s most recent Presidential Experience Lab. The labs are events that bring together dozens of university presidents and chancellors, as well as technologists from outside the world of higher education. This particular discussion was led by Sally Amoruso, EAB’s Chief Partner Officer. It featured Paul LeBlanc, the president of Southern New Hampshire University, one of the giants of online higher education, and George Siemens, the chief scientist and architect of SNHU’s Human Systems. The three discuss how AI will rapidly transform higher education and why that can be a very good thing. It’s a fascinating discussion, so give these folks a listen and enjoy.

0:00:56.5 Sally Amoruso: Let me introduce our two guests. First, Paul LeBlanc, who is, for three more weeks, president of Southern New Hampshire University?

0:01:09.6 Paul LeBlanc: Yeah. Three more weeks.

0:01:11.4 SA: Three more weeks. Since 2003 under Paul’s leadership, SNHU has grown from 2,800 students to over, you always correct me on this number, I have 160,000.

0:01:21.5 PL: 250,000.

0:01:22.8 SA: 250,000 learners. It is the largest nonprofit provider of online higher education in the country. The university was number 12 on Fast Company magazine’s World’s 50 Most Innovative Companies list and was the only university included. Forbes magazine has listed him as one of its 15 classroom revolutionaries and one of the most influential people in higher education. Washington Monthly named him one of America’s 10 most innovative university presidents. In 2018, he won the prestigious TIAA Institute Hesburgh Award for Leadership Excellence in Higher Education. He served as senior policy advisor to then Under Secretary Ted Mitchell at the US Department of Ed, working on competency-based education, a topic that came up earlier, new accreditation pathways, and innovation. He serves on the National Advisory Committee on Institutional Quality and Integrity and on the National Academies of Sciences, Engineering, and Medicine’s Board on Higher Education and Workforce, and served on its Committee on Quality and Undergraduate Education. Also serves…

0:02:29.0 PL: Sally. Sally, I’m not dying. I don’t need [laughter] I’m just stepping down from my role.

0:02:32.6 SA: I’m almost done. [laughter]

0:02:34.3 PL: I think we’re good [laughter]

0:02:35.0 SA: I’m almost done. He serves on the ACE board. He chairs the AGB Council of Presidents. Personal story: he immigrated to the US as a child from Canada and was the first person in his extended family to attend college, as a grad of Framingham State, Boston College, and the University of Massachusetts. He also had a previous career at a technology startup for a publishing company. And he was president of Marlboro College before he became president of SNHU. In three weeks, he is expecting his first grandchild.

0:03:10.7 PL: Finally, we get to the important part [laughter]

0:03:14.2 SA: Yes. [laughter] Yes. All right. So welcome, Paul.

0:03:17.0 PL: Thank you, Sally.

0:03:18.5 SA: George Siemens researches how human and artificial cognition intersect in knowledge processes. He is the co-founder, chief scientist, and architect of SNHU’s Human Systems, where he’s going to be working with Paul, an organization building resources to respond to the systemic impact of AI on learning and wellness. He is the founding director and professor of the Centre for Change and Complexity in Learning at the University of South Australia and developed the Master of Science in Learning Analytics at UT Arlington. He has delivered keynote addresses in more than 40 countries on the influence of technology and media on education, organizations, and society, and has been published everywhere.

0:04:03.6 SA: He’s received numerous awards, including honorary doctorates from the Universidad de San Martín de Porres and the University of the Fraser Valley for his pioneering work in learning technology and networks. He’s a founding president of the Society for Learning Analytics Research and has advised governments and agencies, as well as numerous international universities, on digital learning, utilizing learning analytics for assessing and evaluating productivity gains in the education sector and improving learner results. In 2008, he pioneered MOOCs. And he is the founding president of the Global Research Alliance for AI in Learning and Education. Phew! Welcome, George.

0:04:44.8 George Siemens: Are we done? Do we have time for questions?

0:04:48.3 SA: Welcome, welcome. All right. So, Paul, a few months ago, you announced your decision to step down from the presidency of SNHU after 21, what I will say, remarkable years of leadership and transformation. Tell us why you decided to do that and how you’re going to spend your time and energies.

0:05:08.5 PL: Sure. I mean, after 21 years, not much comes around the track that you haven’t seen before, right? And I have a good friend from Arkansas who has wonderful expressions. He said, you know, Paul, if you’ve got a pile of dirt in your living room and you walk around it every day, at some point you just forget you have a pile of dirt in your living room until company comes over. And I think, you know, the place needs some fresh eyes. It needs some fresh perspective. And I feel like I’ve done the work that I had hoped to do when I arrived. And it’s an…

0:05:32.5 PL: I think, you know, three years ago I said to the board, my primary responsibility now is to give you good successors. And they found a great one, a longtime friend of 27 years. But the second thing that happened is that George and I were at ASU GSV in April of ’22, ’23, excuse me. And we were having coffee. We’ve known each other for a long time, and we were bemoaning our sense of where this is all headed. And unlike Matt, as Matt left, we’re sort of in the more radical camp. We think the whole world is utterly changed. I love Matt, so I disagree with him guardedly here, but I would go more radical than he would on his analysis of what happens in the workforce.

0:06:13.7 PL: And among our observations was that we felt like a lot of colleagues and boards of trustees were not fully grasping the tidal wave that is about to wash over higher education and society more generally. I knew that I would be stepping down soon. That hadn’t been announced at that time, but I shared it with George and I said, why don’t you leave UT and Adelaide and come join me, and let’s try to reinvent learning? So, just small, you know, not very much hubris in that notion [laughter] But George reached out his hand, we shook on it and agreed to do it. So Human Systems is our company. If you go to humansystems.com, you will see the least informative website in the history of the Internet. And what I was trying to do is take a clean-sheet-of-paper approach to, how would we redesign learning if we were unconstrained?

0:07:00.3 PL: So, no assumptions about accreditors, Title IV, the roles of anyone. What would it look like, and what would it look like if you could make it intensely human-centered? So really focusing on human flourishing, ’cause we actually don’t think humans are the most powerful entities on the planet anymore, in terms of huge bodies of knowledge or kinds of knowledge. And if that’s not the case, what are we about? And I’ll end on this note, which is from George, I’ll steal it before he says it. We really fundamentally think that our work will shift from questions of epistemology, what people need to know. If you think about the major, doesn’t it really just answer the question, what do you need to know to be, fill in the blank, a nurse, a doctor, a lawyer, et cetera? That will shift from questions of epistemology to questions of ontology: how will you be in the world? So that’s something we can unpack for a while.

0:07:50.6 SA: Yes, we will. George, your background is so fascinating to me. You grew up a Mennonite, without a lot of exposure to technology, and yet you became one of the forerunners in the advent of MOOCs and an expert in understanding learning in the digital age. Can you make sense of that journey for us a little bit?

0:08:12.9 PL: Not in therapy.

[laughter]

0:08:12.9 GS: It’s come a long way. You should have seen me in my teens. No, so I did. I mean, I remember, I grew up in a Mennonite colony in Mexico. We had no technology. And I literally mean no technology, though under the rules of the church, we could have a tractor as long as you took the rubber tires off the tractor. But then the problem is the tractors would get stuck when it rained, and you had to get the horses to pull the tractor out, because you didn’t have tires on it. So it’s a very complex world. But I came to Canada when I was about six, seven years old and entered the school system. And I was always drawn to technology, like computers, how that worked. One of the first things I had, I think, was a good old Commodore PET, where poor Ozzy Osbourne’s Bark at the Moon had to be sacrificed for a tank game.

0:08:57.9 GS: You had to take the little cassette and put tape over the tab so you could get the recording done. But I think that was really my first exposure, and it was this sense of, there’s something here that’s not us. It’s not human intelligence, but the speed at which this thing works, even though it’s all rules and computation, is something different in terms of capability for intelligence. And we’ve just been running toward that something-different kind of intelligence for the last 40 years. And it’s starting to become an interesting, perhaps threatening, perhaps opportunity-filled type of intelligence for us to engage with as it gets better and faster and faster. And so it’s still that same trajectory.

0:09:38.0 SA: Love it. Paul, in our EAB State of the Sector, Reckoning with Relevance, we discuss the challenge right now with managing across different time horizons. So we have immediate challenges. We profile six priorities. There are immediate challenges and opportunities, and then we try to make space for longer-term planning and visioning. You are very conscious of this in the context of AI, that there are some operational opportunities and challenges, but that we need to make space for the larger questions. Can you speak to that?

0:10:10.3 PL: Sure. I’m going to fire off various book references, which I’ll share with Sally afterwards so she can write them down. But a book that was very influential when we began this discussion is a book called Power and Prediction. It’s written by three economists at the University of Toronto’s Rotman School. They talk about the power of AI in terms of productivity, efficiency, all kinds of capacities that we’ve been talking a little bit about today, when you deploy them across an enterprise in various point solutions. So a good example of this: because we’re pretty large, we have a lot of transfer credit staff, 280 of them. I think in another 12 months, we probably won’t need more than 20, right? That’s a point solution.

0:10:50.2 PL: And what I’ve said to HR is, what do we do? Like, let’s get ahead of this, right? Let’s train these folks for other jobs, ’cause I’m not going to be that guy who, well, I won’t be that guy. I’m not going to be there. But my successor won’t be that woman who lays off 270 people because of AI. But those point solutions are being deployed across, like all of yours, I’m sure, right? Marketing, HR, curriculum development, instructional design, on and on and on. But they also argue that because point solutions are part of a system, they’re always constrained by the other parts of the system. And in our world, it’s the regulatory environment as well. And I think you can predict a serious backlash from this Department of Ed, which thinks two cups and a string is innovation. So, you will see a backlash for sure.

[overlapping conversation]

0:11:34.5 PL: Oh, that’s all right. I told them that. I’m on my way out. What are they going to do to me? [laughter] So then the second thing they argue is that the real power of AI gets unleashed when you think of a full-system redesign. Like, what would you do if you had a clean sheet of paper? Which is what led to the creation of Human Systems. SNHU is doing AI all over the place, but we are wholly detached from that work. This is, what do you build if you’re unconstrained? So I think for university presidents who are thinking about this, if you have the capacity, and you could do this collectively as well: one is do what you’re doing, which is encourage your people to play with it, think about all the ways it can be used, get smart about it, get a lot of exposure like we’re doing here. But separately, can you carve out a little bit of space, and it would be a small team, and invite them to design a program unconstrained by the way you do business today?

0:12:34.9 PL: But to be really smart about how they’re rethinking all of their assumptions about how learning can take place. So, all of this gets informed by Clay Christensen, a friend of 40 years, a member of my board. And what he would say is that the way we’re using AI is to play the game as it is presently played, but more effectively, more efficiently, and more cost-effectively. But this other effort is actually to reinvent the game. And can you get some folks to help you reinvent the game? Or can you get a collective or a coalition, if you have more limited resources, to think about what it looks like to reinvent the game?

0:13:07.7 SA: Which is exactly what you did in 2010, when you separated your online operations from your small residential campus.

0:13:15.8 PL: Yeah, much earlier, actually, 2004.

0:13:17.8 SA: 2004.

0:13:20.1 PL: But we literally moved them two miles away, got them out of sight, out of mind, negotiated some breathing room with our faculty governance model, and gave them permission to play differently. And then my job as a leader was to get the resources and hold off the mothership. Because, as Clay taught us, it’s not ill intent. That was a key point that’s often missed in his research. Incumbent organizations see disruptive innovation the way the body sees foreign tissue. It either wants to spit it out or it wants to incorporate it, take hold of it. So when the faculty say, “Sure, we’ll allow you to do this, but we will do it within our governance model,” that’s the kiss of death, right? So, yeah, exactly what you have to do is what we did for online, and what we did when we launched College for America, which was our direct assessment program, the first of its kind.

0:14:08.2 SA: Yeah. George, we believe that inherent in these shifts is an opportunity for higher ed to reassert its relevance, particularly vis-a-vis the world of work. And I think a lot of the conversation this afternoon goes to that. But you have been very open about saying, you’re less than blown away by higher ed’s response. Can you explain your…

0:14:30.8 GS: Betrayed, saddened. So this is the grumpy part of the segment. So I’ll just be quite direct and say, I think, for people who first played with generative AI, and even before generative AI, Osborne and Frey did a report years ago that said, hey, enormous swaths of the economy are going to be re-architected through the development of AI. And so AI wasn’t a surprise. If you go back to 2004, 2005, we had that magical confluence of GPUs, namely NVIDIA, with Fei-Fei Li’s ImageNet, with the work that Ilya was doing with Geoffrey Hinton at the University of Toronto on new deep learning models, or neural networks. And we had the basis of that. We started to see, you know, the EFF had an AI progress index that indicated every domain of human cognitive activity where AI was outperforming and overtaking humans, at things like image detection, game-based strategies, chess, and so on.

0:15:33.1 GS: So we’ve been losing the AI game to AI, from a cognitive lens, for probably about 15, 20 years, in a way that you could anticipate and see coming. I think what happened is, when you sat down the first time with ChatGPT, this was AI out of the labs and into a consumer space. Which is my frustration: when Paul and I met, we were aware that universities weren’t seeing that. And it just seemed amazing that this one thing, which is likely the single greatest competitor, or the first injection of novel intelligence onto Earth since our neocortex came online…

0:16:10.9 GS: And this was something that we as universities said, I don’t know, I guess we’ll figure out what we do with our semester next year. [laughter] And nobody paid attention to this. You have to do one of two things. I mean, this is my view on it. You either have to say, this is the biggest thing that’s confronted us as a species, roughly ever, or you have to look at it in a diligent, thoughtful way and say, this is BS. Like, I don’t think there’s a middle-of-the-road thing. And I think I would respect the university leader that intentionally looked at AI and said, “This is absolute nonsense. I don’t want any of it. We are going to teach the way we teach, and we’re all going to be happy.” I would respect that, ’cause it shows intentionality.

0:16:46.4 GS: What I see now is an abdication of responsibility, because it’s not entirely novel. I remember when online learning and digital learning started their trajectory. There were some universities, some of the early ones, Penn State, Michigan, that got into it early, obviously SNHU early. But many universities didn’t, and they paid an enormous cost, and that cost came in the form of OPM taxes, if you will, because they failed to develop institutional capabilities to be active participants in their own future.

0:17:17.1 GS: Now, if it’s something like, hey, we want to build a new building, and a core competency is not building our own buildings, I’d say, great, that makes sense. But if your core competency is building the knowledge capabilities of society, and now there is a thing that can out-knowledge-capability you, I’d sure as hell want to figure out what that is and what that means. And I’d sit down and I’d spend every waking cycle focused on the single biggest threat to humanity, or opportunity for humanity, that I could. So that’s my lens. And so then I look at it and I would say, well, from a university lens, there are enormous implications, obviously: how you operate, what you teach, what you plan to teach. But in no conceivable world is business as usual a strategy that seems to make sense to me.

0:18:00.2 SA: Thank you.

0:18:00.7 GS: I’m better now.

0:18:00.8 PL: [laughter] Got that out of your system? You’ll be buying the first round for all of us at the bar tonight.

0:18:11.8 SA: I did invite them to be provocative, because if not here, then where, right? And I thank you for that and for your candor. You both have talked about this shift from epistemological, knowledge-based systems to ontological, ways-of-being systems. What does that really mean? Make that real for us.

0:18:30.0 PL: Sure. PwC just invested $3.5 billion in AI. It’s not so they can hire more accountants. And I think, you know, we see this in law. We see it in lots of fields. If you want a more optimistic view of this, read David Autor. The MIT workforce economist has a great piece on the reinvigoration of middle-class jobs. It’s in a funny-named magazine called NOEMA, N-O-E-M-A. Search David Autor, A-U-T-O-R, NOEMA, middle-class jobs. But he argues that AI is really going to threaten high levels of expensive, rare expertise. The example he uses, and I’m giving this talk at Harvard Medical School next week, this will go over really big, is doctors. His argument is that we don’t have enough doctors.

0:19:18.5 PL: It’s super expensive to train them. It takes more than 10 years. It’s over a million and a half dollars, et cetera, et cetera. And if you think about what doctors are trained to do, at some level the core is prediction. You present with symptoms, they’re making some prediction about what you have, then they order all the tests to narrow that down and confirm, and then they predict prognosis, and then they predict response to treatment. So much of what they do is about prediction, aided by all kinds of tools. And that’s what AI is much better at than we are.

0:19:50.4 PL: Like, you say, I don’t want AI, you know, giving me my medical advice. Really? Because your best-trained physician, who’s been in the field for 40 years, who’s a superstar, has seen 50,000 patients. That AI has been trained on millions of patients and hundreds of thousands of doctors. And if you say, wait a minute, it’s still hallucinating, we’re in the camp of, get over that, because that’s for now. It’s already getting a lot better a lot faster, and we heard some of the ways that happens today. So what could doctors, what could we do then? We could flood American society with nurses, who have foundational knowledge, who know how to put a PICC line in, how to take it out, all these other things. They have human knowledge. We often like our nurses better than our doctors. They seem like nicer human beings, no offense to doctors in the room. And now they will have exactly the same expertise that that physician was trained on. And those jobs are solidly middle class.

0:20:48.4 PL: They start at $130,000 on average. And huge swaths of our society don’t have access to good medical care, to doctors, and even those with privilege today wait longer and longer to see a physician. We could rethink this. So he argues that the threat will be to rare, expensive areas of expertise, like lawyers and doctors, and we could reinvigorate the middle class. So now, for legal advice, you could go to a paralegal who has foundational skills, human skills, and all the expertise of someone who went to the finest law school in the country. It’s an interesting question. So when I talk about this, and I talk about it in my 2022 book, Broken, which was written before ChatGPT but with the knowledge that knowledge jobs will be displaced, the World Economic Forum says 85 million jobs displaced next year alone, with knowledge jobs being displaced in high, high numbers, what about human jobs?

0:21:43.8 PL: Now, you’re going to say, wait a minute, we don’t like to pay for human jobs. We don’t give them status. But we could flood our K-12 system, we need to flood our K-12 system, with amazing teachers, social workers, counselors, coaches, and staff. We could rebuild a mental health care system that is now essentially just our prison system; it’s completely decimated. We could create, in an aging American society, a compassionate, affordable system of geriatric care. We don’t have that. We could rebuild aging infrastructure across the country. None of those are AI jobs. They may be aided by AI, every job will be, but they’re not AI jobs. We just won’t pay for them. We don’t give them much status.

0:22:21.8 PL: Because they’re human, they’re messy, so we can’t measure them as easily. My optimistic hope comes from Carlota Perez, who I love, who you may know, the Venezuelan-British economist. And she argues that when we see a paradigm shift of this order of magnitude, historically, everything is up for grabs. So the jobs that used to have status and wealth go away or get displaced, and jobs that didn’t, emerge. If you take steam, electricity, and industrialization collectively, what you saw was the absolute rise of the middle class. You saw consumer society, ’cause now you have more people who could afford things. You saw the displacement of the agrarian royalty and aristocracy. You saw huge urban centers. You saw the creation of capitalism, because you need a lot of capital to build a factory. You saw global trade, because you need to think about supply. Like, the whole world changes. She would also say, so, sorry, buckle up.

0:23:17.3 PL: It’s also when you see revolutions, social unrest; the world gets changed utterly. And I think we’re in for decades of massive, uncomfortable change right now. But at the end of this, we rethink the nature of human work to say, let’s focus on that which is human, which makes people better, which lifts people up, which actually creates community. I don’t know, that feels like a hell of a lot better than what I’m looking around and seeing today. Like, how well is this all working, right? So that’s the optimistic view.

0:23:54.4 SA: Thank you. George, do you want to add anything to that?

0:23:57.2 GS: No, I think that nails that critical shift. I think one of the questions worth focusing on is, we’ve heard for decades that, you know, it’s not just the technical skills. The World Economic Forum routinely releases a report that lists critical skills, much in line with what you shared earlier. And it isn’t, you know, can you program, or are you necessarily a brilliant engineer? It’s not that those don’t matter. It’s that a lot of the skills that are coveted and anticipated as future skills are human-based skills. Now, we have a hard time finding the right word. Is it non-cognitive skills, is it social skills, is it future literacies, is it social-emotional literacies, and so on? But some flavor of existing in the world as a human being seems to be among the critical skills that we need to develop in our learners.

0:24:45.9 GS: And I think for me, it’s a pretty simple kind of model, which is: the things we’re teaching right now are forecast to increasingly be capable of being done by AI, and they are, as report after report will attest. LLMs are becoming integrated, and Karpathy has this idea of an LLM operating system, and one of the sessions I was at this afternoon was talking about multi-agent or multi-model setups, small-model LLMs that do some core policing over other functions of the system. We’re starting to see, much like the first time you got a routine laptop, which in my case had a 40-megabyte hard drive. And it changed my world, because I could do things I couldn’t do before.

0:25:28.5 GS: And so we’re going to have a lot of data science work and computation activity and knowledge exploration work that we’re learning in our classrooms right now suddenly being done by some LLM/AI-integrated system. What are we going to be teaching then? And what are our students going to be doing in a society where that functionality is possible? On the one hand, we’re going to have, I think, an enormous leap forward in productivity. And Sequoia did a workshop on this a few months ago, where they were addressing just that: it is a period of massive human expansion and human capability. We are going to see, in our lifetime, diseases solved that we just couldn’t imagine. Massive advancements that were Star Trek-ish years ago are going to be landing on our doorstep almost weekly.

0:26:15.7 GS: I mean, I look at what’s going on with AI advancement and I try to track what’s happening, you know, really globally. And you literally cannot. Like, the pace of change, from new models, to new capabilities being advanced, to what used to be novel, like RAG transitioning into GraphRAG, transitioning into the set of tooling that’s coming up, from Llama to DSPy to related tools that are changing how we’re doing this. And then just yesterday, or two days ago, I saw the NVIDIA NIMs announcement, and suddenly, next thing you know, there’s a demo site available. This isn’t how the world used to work. Do you know what I mean? We’re in it, but it’s all new and it’s accelerating.

0:26:54.0 GS: So, I think that’s why skills of resilience matter: can you cope well? Can you work well with others? Can you find community and spaces of community? So, I think because we’ve lost the intelligence race to AI, we can no longer stake our primary human capability on intelligence. We need to put our primary human capability on emotions and our humanity. And so it’s almost as if we’ve spent our entire evolutionary history over the last 4,000, 5,000 years competing on the thing that AI does better than we do. And now we’re like, ah, shit, I guess I’ll be a nicer human being now. [laughter] So, I think the university system somehow has to absorb that core transition and make it an educational feature in meeting that need.

0:27:39.0 PL: Chris Dede at Harvard makes this distinction between prediction and wisdom, or judgment. And I think it’s a very useful one. So go back to my example of doctors: so much of their training and so much of their time spent with you is really about this work of prediction. Now, AI does that, does it really well, does it really fast, does it very accurately. So now the conversation can be the one that we so often wish we had with our doctor. And it’s the one that says, God forbid, hey, Sally, this is a pretty terrible diagnosis. Tell me about the conversation you’re going to have tonight at the dinner table with your family. Tell me about how you think about quality of life. Tell me about your community, your faith system. How will you make decisions? Do you want to do this with me? Do you want to do it with your family?

0:28:19.6 PL: How are we going to navigate these waters? Our AI assistant will tell us the best choices we have, but how are we going to make those choices? When you ask patients, and you look at hospital systems, what they wish they had, they wish they had a doctor who knew them as a human being. They want them to be really good at their work, and AI will do that, but they want somebody who knows them as a human being. And if you think about it, I often do this little parlor trick when I give these talks. Mark Schlapman, Elizabeth Collins, Helen Heineman: the three teachers that changed my life. Sixth grade, high school, college. They were transformative. Were they good teachers? I think so. I don’t remember.

0:28:56.7 PL: I assume they were. But what they did is they made me feel like I mattered. They took time to know me. They understood my context. They gave me their time. If I walked in to talk about a paper, they could look at me and go, hey, you look off your game. What’s going on? And they actually meant it. They wanted to know that. They lifted my sights. These were incredibly human, empathetic relationships, right? These were not knowledge-driven interactions, and yet they were transformative. And then I had scores of other teachers. So I have to ask the question: raise your hand, how many of those teachers did you have? I’ll tell you what the average is. You can use both hands if you like. You had five, you had two. Yeah, two, three; the average is three. I’ve done this everywhere. The average is three. So imagine if we could offload the knowledge transfer, the stuff that we ask teachers to do, and a lot of the administrative trivia, and they could actually know kids. They could actually engage and know their kids.

0:29:49.4 PL: That would be pretty powerful. And I think AI is going to give us this invitation and opportunity to think about human-based jobs. And I’ll finish by quoting Stuart Russell, the Berkeley computer scientist; he has a set of great lectures on the BBC, the Reith Lectures, if you haven’t heard them. He said, talking about the knowledge economy we all live in, imagine if we had said to our ancestors: every day you’re going to travel to a big box. It’s called an office building, but it’s a box. And you’ll go inside and you’ll sit in a little box called a cubicle. And for eight hours, you’ll stare at a little lit-up box called a monitor. They’d say, oh my God, that sounds like hell. [laughter] And we train enormous numbers of graduates to do just that. Think about, if we paid teachers well, how they’d talk about their work. Healthcare workers, when we support and pay them well, what do they say? It’s a calling, it’s meaningful. Has anyone ever felt called to a spreadsheet? I don’t think so. So that’s the hope. Yeah.

0:30:46.1 SA: Thank you. Tell us about the Global Data Consortium and your involvement in it and why you think it’s important for presidents to understand what it is.

0:30:57.7 PL: George can take this one.

0:30:58.7 SA: Okay.

0:31:00.2 GS: Sure. So, this started from a conversation with Paul as well, where we were looking at how big jumps in AI are accompanied by, fundamentally, three things: advances in algorithmic capacity, such as neural networks and deep learning; advances in computation; and advances in data and data quality. And universities actually have some of the best structured data that exists, from our syllabi to our lesson activities to student interaction data to student help-seeking data and so on. So the conversation turned to, well, as we’re starting to get deeper into this AI-centric university model, what do we need to be able to test, train, and verify the work that we’re doing?

0:31:43.6 GS: So we initiated this concept of a data consortium and reached out to a number of groups or universities to see if they had interest in getting involved, and ended up, I think through Paul, raising a discussion with Ted Mitchell at ACE to see if they’d like to host this as a nonprofit. We’re not approaching it from a profit-seeking end. So the document’s available; happy to share that with anyone to look at. We’re currently in a discussion phase, where we’re seeking input from researchers, from data scientists, technical people, and from university leadership. So the goal really is to say, we’re not going to make our way through the AI landscape alone. It’s very much going to have to be a network of engagement with peers and peer institutions globally.

0:32:25.7 GS: So, the interest is, how do we create and share data that allows us to work together to solve complex educational problems, while simultaneously building our institutional AI capabilities? So, simple example: let’s say you’ve got 50 institutions represented here, and you all get a lot of student input, students seeking help, and you record that in some way. It could be an audio recording; it could just sit in a student help file somewhere, or a database. What if you were to get together, briefly setting aside the security and ethics dynamics of it, even though I think we’re getting a long way with synthetic data and a data mesh environment that has more localized data controls at play. What if you could share the data around student help-seeking behavior, and build a model of what happens when a student asks for help, how you responded, and the 6-, 12-, 24-month impacts of that help behavior? We should be able to be much more intelligent and personalized in addressing those needs for the student. In the process, those universities work together.

0:33:27.6 GS: The data consortium provides centralized technical support in the data environment. And universities, in the meantime, learn how to build models. They learn how to work in a data-centric environment. They get involved in MLOps and AIOps practices. They do deeper dives into a range of different technical components than they might be able to do on their own. And they basically lift their game to meet the opportunity of AI. So that’s the intention behind the data consortium.
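[Editor's illustration: to make the synthetic-layer idea George and Paul describe concrete, here is a minimal sketch in Python. The column names and the simple per-column resampling scheme are hypothetical assumptions for illustration; the consortium's actual tooling is not specified in this conversation, and a production pipeline would model the joint distribution and add formal privacy guarantees.]

    # Hypothetical sketch: share a synthetic stand-in for a student
    # help-seeking table instead of the raw records. Column names and the
    # resampling scheme are illustrative assumptions, not the consortium's design.
    import pandas as pd

    raw = pd.DataFrame({
        "help_channel":   ["chat", "email", "chat", "phone", "chat"],
        "response_hours": [1.5, 24.0, 2.0, 0.5, 3.0],
        "retained_6mo":   [True, False, True, True, False],
    })

    def make_synthetic(df: pd.DataFrame, n: int, seed: int = 0) -> pd.DataFrame:
        # Resample each column independently, breaking row-level linkage so no
        # synthetic row corresponds to a real student. (Independent marginals
        # lose cross-column correlations; real tools model the joint distribution.)
        return pd.DataFrame({
            col: df[col].sample(n=n, replace=True, random_state=seed + i).to_numpy()
            for i, col in enumerate(df.columns)
        })

    synthetic = make_synthetic(raw, n=1000)
    # The synthetic table, not `raw`, is what would leave the institution.
    print(synthetic["retained_6mo"].mean())  # aggregate patterns are preserved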

0:33:54.1 PL: And Sally, you put up a slide that showed kind of a web of data across universities. So George is right. We have a lot of data. It’s usually a mess.

0:34:00.8 SA: Yes.

0:34:01.5 PL: It sits in silos, it’s fragmented, et cetera. So the idea is to build tools that allow institutions who are members of the data consortium to both understand and structure their data better, and share it, but through a synthetic layer, so that you can protect privacy and security. And then collectively have data, and then build tools that can be shared among many and lift the knowledge and capacity of all, especially smaller institutions, which might have one institutional researcher. It can’t live with us, so it will live at ACE. It has some initial funding from a foundation that we can’t name, but it rhymes with mates. [laughter] And we hope to… The technical white paper that George alluded to is really so your technical people, your CSOs and your IT directors, can look at it and say, yeah, this reassures me, our data would be safe.

0:34:52.4 PL: You’re not taking a snapshot of your data and sending it anywhere. It’s not going to a data warehouse; that’s kind of yesterday’s architecture. And ACE hopes to launch this in 2025 and have it staffed. So we’ve spoken with universities globally, and other organizations, and if everyone who says they’re interested stays in, we’d have 40 million students represented out of the gate. So FutureLearn, 6 million students; ETS, 8 million students; ACT; they’ve all said, we want in on this, because they see the potential. There’s no loss of competitive advantage, but you get all of that insight and all of that data that you can build on. And ultimately, if we don’t do this as a sector, we’re going to be working with commercial tools from people who don’t build for us. They’re not training for us. ChatGPT is not trained on higher ed.

0:35:43.3 PL: It’s not trained on university students. And we want to be really specific about the kinds of tools we build and make sure they’re really well designed for who we serve. My big bet, and I’m not sure the whole team is convinced, I’m trying to get George there, is that glasses will in fact be the way we interface between the physical world, our own sense of self, and LLMs. So I think we’ll use them for assessment; there are just lots of ways we’ll engage using them. So, if you know the Institute for the Future here in Palo Alto, it used to be Xerox PARC. They have this notion of artifacts from the future.

0:36:16.5 PL: So, things in our current world that are failures, but that, when we look back, we’ll say, oh, that foretold what was coming. So I was one of the nerds who bought Google Glass when it first came out. It became like the universal symbol of nerd. But I think Google Glass actually predicted the future. And now George and I both have our Ray-Ban Metas. I don’t know if you’ve seen them yet, but they look like Wayfarers, except for the camera lens and the touchpad and the buttons. And you can look at things and say, what is that building? And it will tell you.

0:36:45.3 PL: And if you saw the OpenAI announcement where they were showing how you could do math while holding up a phone, like, why would you want to do that? I want both my hands. I could just turn my glasses on, right? So I think we’ll do lots of real-world assessment using glasses as an interface, one of the ways that we’ll do assessment and interface with the world. So I think that’s it.

0:37:05.2 SA: Thank you so much.

0:37:05.3 GS: Thank you, Sally.

0:37:07.6 SA: That’s great.

[music]
