The AI University: Disruptive by Design
Hello, everyone, and welcome to Infinite Stream. This podcast explores the current state of artificial intelligence, where it's headed, and the radical future that powerful AI could create. I'm your host, Kyle Boleyn. Today, I'm joined by Scott Latham, a professor of strategy at the University of Massachusetts Lowell and a leading voice on the future of higher education. With a background in the high-tech industry, he brings a pragmatic lens to organizational transformation and innovation.
Speaker 1:His recent essay, "Are You Ready for the AI University?," was published in The Chronicle of Higher Education earlier this month, and it challenges academia to confront artificial intelligence's disruptive potential. Latham's work overall urges institutions to rethink their models in the face of accelerating technological change. In today's show, Scott and I are gonna talk about the current state of AI in education, where it's going, and the different ways that it could challenge the status quo and potentially even create a new one. Scott, thank you so much for coming on the show today.
Speaker 2:Kyle, you're welcome, and thank you for the opportunity to continue this discussion.
Speaker 1:So, Scott, to kick things off, could you give our listeners a quick overview of your essay for those who may not have read it? I'd love to hear what ultimately inspired you to write it and what key ideas you're hoping to get across.
Speaker 2:Yeah. So let me give you just a very quick background that I think is important. Most of my research looks at organizational decline. And from a professional standpoint, I personally experienced two waves of disruption. The first, in geographic information systems, was with a startup that did naval mapping and GPS integration and really made paper charts antiquated.
Speaker 2:And then from that, I went over into the ecommerce side of things and saw the disruption that the Internet presented. Went off, got a PhD, and my PhD looked at the .com boom and bust and the creative destruction associated with that. I have a morbid fascination with decline. I just do. And how that process occurs, who makes it, who doesn't.
Speaker 2:You know, we're in the midst, or on the front end, of that happening in higher education with AI. Having been really shaped in the .com era, I am not someone who bows at the altar of technology. I'm not. I saw enough snake oil salesmen and snake oil saleswomen in the early aughts to recognize that often technology does not meet the promises that are put out there, whether it be by the venture capitalists, the startups themselves, the IT folks, the systems integrators. You know, you have to kick the tires.
Speaker 2:And so a year ago with AI, I was really again at that point of skepticism. And I wrote an essay in Inside Higher Ed that basically said: faculty, AI is not your friend. And at that point in time, I started to see the threat, or the opportunity, that AI represented for higher education. That essay, you know, was met with a fair amount of fanfare. The Chronicle reached out and said, you know, we really want you to go deeper on this essay.
Speaker 2:And so that essay, over the last six months and with the editorial help of the folks at the Chronicle, was published in April. Really, as you said, it was called "Are You Ready for the AI University?" And it was an evolution of my thinking. And my general thinking is that higher ed, not to get too theoretical, is in a process of creative destruction driven by artificial intelligence. The current business model in higher ed is broken in two important regards. First, the underlying economic basis cannot be repaired.
Speaker 2:The economic basis in higher ed is very simple. It's based on economies of scale. You build enough labs, you build enough dorms, you make a dining hall, and you basically amortize the costs over an increasing number of students. It was broken before the demographic cliff. It's even worse.
Speaker 2:Now international enrollments have exacerbated the situation. The Trump issues with grants have made it even worse. So the economic basis for the current higher ed business model of the last hundred years is done. And then the second one has to do with value. And the value that students perceive they get from higher education is in tremendous discord with their day-to-day experiences.
Speaker 2:You know, the current 20-year-old, if they wanna watch a movie, they go on Netflix. If they wanna get something to eat, they go to Grubhub. They go to Amazon. They live in an incredibly transactional environment, and higher education is, again, at odds with that. You know, in higher ed, I teach a class Tuesdays and Thursdays at 9:30, and twenty-five, thirty kids show up.
Speaker 2:If they don't show up, then there's issues around learning. To think that's gonna hold into the future, given AI, given online platforms that have been around for the past twenty years, it's broke. And that process is called creative destruction. And so what does it mean? Because at this point, there's no going back.
Speaker 2:And I think for students, AI is going to be an incredible vehicle for realizing a deep, deep experiential learning experience through agents. I think it's gonna free up innovative faculty to get more engaged with students. I think it's gonna allow them to better balance their hectic lives. I think it's gonna reduce stress and anxiety. I do.
Speaker 2:I think it's gonna be a panacea for a lot of the problems that the average college student experiences. Faculty, they're going to bear the brunt. It's not going to be a war of disruption where, you know, AI comes into the classroom and kicks their ass out the door. It's going to be a war of attrition. You know, I'm 55; that's a relevant data point.
Speaker 2:I'll probably work till 65. As I say in the essay, a senior, full faculty member at this point in the academy can pretty much patch together an existence over the next decade that will look like what they saw when they first came into the academy. But if you're a junior faculty member, not tenured, assistant or associate, you're gonna have to learn how to work with AI. You just are. There's no two ways about it.
Speaker 2:If you're adjunct, visiting, or non-tenure track, you're done. I'm sorry. This is gonna offend a lot of people, but this is my perspective. It's one perspective on this. AI will displace faculty, and over the next few years, it's gonna be graduate assistants, research assistants, adjuncts, visiting faculty, and non-tenure track.
Speaker 2:And then it's going to just spread throughout the institution at every level: enrollment, the registrar, provosts will use it to make better decisions. And a lot of what I'm saying here, and it's out of scope here, but read the piece, is based on research. It just is. It's based on my experience in the industry, my own research. I've had research funded by the Department of Labor.
Speaker 2:I recently had a Department of Defense grant on AI and decision making and the broader realm of AI research that's out there. I'm not alone in this thinking. And so, yeah. So that's the general sense of the essay. And then the last thing I would say, Kyle, that is really, really important.
Speaker 2:Sadly, I think over the next half dozen or so years, you're gonna see an emergence of AI haves and have-nots, and that gets to the decline. The rich get richer. It's very simple, whether it be on an economic basis or a knowledge basis, and in AI, that's gonna play out quickly. We're gonna have institutions that make big investments, big bets in AI, and it's gonna affect their student experience, it's going to affect their research, they're going to be more efficient organizations.
Speaker 2:The smaller and regional colleges that don't have the resources to make those investments are going to be left behind. And the sad thing is, the students that are there need it the most.
Speaker 1:I think that was an excellent recap. While you were talking about your previous experiences and reflecting on the .com boom and everything that came after, I think in the past couple of months, what I've been reminded of often is that all of a sudden in the late nineties, college students started downloading songs from the Internet. And then for many years, they tried to put that genie back in the bottle, and they tried to ensure that those students would pay for those songs a la carte. Eventually, after lots of different licensing deals and negotiations and eventually the iPhone, we got to a system where you could essentially pay a subscription fee and get unlimited music. But I think what drives the point home a little further is that now we have tools that can make songs within a few minutes that are increasingly becoming studio grade.
Speaker 1:I'd be curious, based on your observations of some of the other disruptive periods we've seen, such as in the music industry: what are the things that ring true to you in terms of what could happen in the next five years in higher education?
Speaker 2:One thing that's occurring that's a little different than twenty years ago, and I'm seeing this from the people I've talked to, is a significant shift from buy to build, and you would probably know that as a technologist. Most institutions are going to build their own applications. You see it happening at Michigan, and they're going to go after, in the next five years, what I like to think of as the low-hanging fruit. We all know the AI model: there's data, there's an algorithm and a heuristic, and there's a product or an outcome of that.
Speaker 2:And so I think in the next few years, the low-hanging fruit you're going to see is the registrar's office and scheduling. That's a pretty standardized process, and it has a lot of inefficiencies. The vast majority of institutions are still using Excel spreadsheets. You know, they have Monday, Tuesday, Wednesday, Thursday, Friday. These are the time slots.
Speaker 2:And then they haggle with faculty and department chairs and deans on how to best use this and do this capacity utilization. It's absurd. Even at my own institution, I just saw an Excel spreadsheet where they're trying to back into a schedule. So I think in the next five years, you're gonna see efforts largely on what I would call the efficiency side of things, where AI can entirely replace a process.
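[Editor's note: a minimal sketch of the kind of slot-assignment heuristic being described here, replacing the Excel haggling with a greedy capacity match. The course, room, and slot data are hypothetical, not any institution's real schedule.]

```python
# Hypothetical greedy course-to-slot assignment: place the largest courses
# first into the smallest room that fits, one course per (room, slot) pair.

COURSES = [("STRAT 101", 120), ("MKTG 210", 60), ("FIN 305", 45), ("MGMT 400", 30)]
ROOMS = [("Hall A", 150), ("Room 12", 70), ("Room 7", 40)]   # (name, capacity)
SLOTS = ["MWF 9:30", "MWF 11:00", "TTh 9:30", "TTh 1:00"]

def schedule(courses, rooms, slots):
    used = set()          # (room, slot) pairs already taken
    assignments = {}
    for course, size in sorted(courses, key=lambda c: -c[1]):
        # trying the smallest adequate room first keeps big rooms free
        for room, cap in sorted(rooms, key=lambda r: r[1]):
            if cap < size:
                continue
            slot = next((s for s in slots if (room, s) not in used), None)
            if slot:
                used.add((room, slot))
                assignments[course] = (room, slot)
                break
    return assignments

for course, (room, slot) in schedule(COURSES, ROOMS, SLOTS).items():
    print(f"{course:10s} -> {room}, {slot}")
```

Real scheduling adds faculty preferences and cross-listing constraints on top of this, which is exactly the haggling an AI system would absorb.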
Speaker 2:And then over the next few years, Kyle, after that, as I think they get more comfortable, you're going to see them start to fully integrate it into the pedagogy. I think the classroom, that's the big nut that they wanna crack. As I say in my piece, and this data is entirely supported, between a quarter and a third of an institution's cost structure is faculty driven, and they need to get at that with AI. And so I think that motivating factor is going to bring AI into the classroom, largely from a cost perspective. And out of the gate, if you think about an average department, my department, for example, we have a chair and we have about 10 instructors.
Speaker 2:The chair manages those 10 instructors. Over the next year or two, and I know this to be true, you're going to see that chair pull one of those human instructors and put an AI-led class into the schedule. And that is going to be an incredible earthquake on most campuses. And they're going to pair that class with a lead, a human who can monitor it and work with the AI to make sure that there are no hiccups. And that's gonna start to happen, I think, over the next year or two as well.
Speaker 2:But I think in the short term, it's gonna be largely efficiency driven, looking at administrative functions. And then soon after that, you're gonna see the pedagogical effects of AI in the classroom.
Speaker 1:Well, and to your point there, I feel like even if you look at the frontier voice models, for example, Perplexity can already be spoken to in real time, and it can cite up to 10 sources while you're speaking to it. And then you have things like Gamma AI that, with a basic prompt, can drill out 15 slides, 40 slides, in a matter of seconds. Once you combine those two together and reduce the latency, it really does become a version of personalized education really quickly, and it doesn't take much imagination beyond that to see, well, based on what the student says and how they respond, we can assess their learning and progress them down different learning outcomes.
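[Editor's note: the branching Kyle describes, assess a response and then route the student, is straightforward to sketch. A minimal, hypothetical example; the thresholds and activities are illustrative, not any product's actual logic.]

```python
# Hypothetical adaptive-tutoring branch: score the student's answer
# (however the system grades it), then pick the next learning activity.

def next_activity(score: float) -> str:
    """Route a student based on an assessment score in [0, 1]."""
    if score >= 0.8:
        return "advance: apply the concept to a new case study"
    if score >= 0.5:
        return "reinforce: worked example plus a short quiz"
    return "remediate: re-teach with a simpler explanation"

for score in (0.9, 0.6, 0.3):
    print(f"score {score:.1f} -> {next_activity(score)}")
```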
Speaker 2:Yeah. I would say two things on that, Kyle, that you'd probably appreciate. The first thing: over the past year, I've heard this. Well, how does AI scale? How does AI scale?
Speaker 2:How does AI scale? And that isn't the relevant issue. I'm more concerned with how the professors scale, because that's where the salvation for most professors is. Because the AI is, in effect, going to be the holder of the knowledge creation. I was talking to a colleague of mine yesterday, and I was trying to explain to them, you know, it takes you a month to do a literature review on any topic.
Speaker 2:Say it's prospect theory, for example. It takes you a month to go out and find, you know, what was written from Kahneman and Tversky all the way to the present. AI can do that literally in minutes. Minutes. Okay?
Speaker 2:And if a student has a question, it can certainly answer that question. So the exposure to the topic, the exposure to the discipline knowledge, will be done by the AI. You're gonna be the person, and this is what I mean when I talk about scale: how do you help them expand it? Expand it. How do you help them take that theory and go out and work with a nonprofit?
Speaker 2:How do you take that thinking on art history and help take a group of 10 students to the museum? There's gonna be a shift, I think, that's gonna offer some opportunities for some professors: okay, I'm not the one standing in the classroom clicking PowerPoint slides. There's no more value in that. Not only that, but clicking PowerPoint slides that are maybe a year or two, five years old. That model is dead.
Speaker 2:It is dead. You have to figure out: what is the faculty's job in this new age? That is one of the questions I pose in the piece, and that's what I'm working on now. What does that new faculty job look like? I was talking to some folks at Carnegie last week about this.
Speaker 2:So that's the first thing: you know, the issue isn't how AI scales, the issue is how do humans scale. And then the second thing I would say is, how do we incent faculty to be part of this creation? You know, there's been a dramatic shift in the social media narrative, both on X and LinkedIn, where I think faculty are finally beginning to understand the implications for their jobs. Over the past year, up until this past couple of weeks, Kyle, you've seen faculty like, how do I design assessment to control, combat, and defeat AI? Forget learning.
Speaker 2:I just want to defeat AI. That's the goal of my assessment. And if you step back, it's incredibly perverse. So you're not really caring about learning. You just wanna defeat AI.
Speaker 2:You wanna catch AI, in effect. And I've seen this shift now where we're like, alright, you know what? We can't beat AI. So let's get back to assessment and how the individual learns and how we make sure that they've learned, i.e., assessment. We have to do a better job of getting faculty engaged with that. I think that's one of their new jobs. When you asked about how I see this wave of innovation occurring given the waves that we saw with the Internet, I do think this one is really unique because of the human component. Let me draw a parallel.
Speaker 2:A few years back, I was on a grant with the Department of Labor here in Boston. There was a nuclear plant shutting down, and the grant was about how to help these nuclear engineers pivot. I mean, nuclear engineers, Kyle. They're brilliant people. And one of the sentiments that came across talking to these folks on the Department of Labor grant was: I can't believe I'm 45, with a doctorate in nuclear energy, and I need a new job. And that is happening with faculty. These are people that got into this because they love their discipline. They love the curiosity aspect of it, the stability of the job. They loved being faculty, and now it's incredibly disruptive, just like it was for those nuclear engineers.
Speaker 2:And so for me, that's what I'm looking at now. What does the next-generation faculty member look like? You know, I have some thoughts.
Speaker 1:Something that I've spent some time thinking about is, in a world where people can already generate a 60 page report with a hundred citations using deep research, what does it ultimately mean to cull that down to the 12 slides that matter, to communicate to some type of decision maker the point that you're trying to get across? Because I think if you look at the workforce and a lot of different knowledge work jobs, you're often dropped into a situation. There's some type of information mass or complex problem, and you have about two to three weeks to figure it out, make sense of it, and present some type of action plan for what you should do next. And in many ways, AI allows you to accelerate from insight to action more quickly, but no one wants to read your 60 page deep research report.
Speaker 1:What they want are the 12 slides that matter.
Speaker 2:Sure.
Speaker 1:And I'd be curious to hear your thoughts on that.
Speaker 2:That is actually one issue I have, and my issue isn't the creation of the 60 page report. It's what happens between the 60 page report and the distillation into the 12 slides. The old way of doing it, there was a level of, you know, call it distillation, but there was also a level of fermentation of thought. You know, subconscious parallel thinking that happens when you're out walking the dog, when you're raking leaves. And I wanna make sure that's not being missed.
Speaker 2:We have spent so much time on this from a research perspective. My degree is in strategy; it's a subfield of economics, and probably the biggest shift in my field over the past decade or two has been from this notion of rational man, rational woman to the notion that we're not rational. And if that's the case, yes, AI might provide 60 pages, 12 slides, but the outcome of those 12 slides is that you're dealing with people, and AI is more often than not going to provide a highly probabilistic analysis of a certain dynamic. That's all well and good, but I think when there was some slower thinking involved, there might have been a little more, I don't know, whether it be collaboration, steps along the way that made you consider things differently. One example: since 2009, I have been attached to our medical device incubator at UMass Lowell and UMass Worcester, the medical school.
Speaker 2:And I've dealt with a lot of startups. And given I'm on the business side of things, I've helped them vet business plans. And I will ultimately look at them and say, what's the use of a business plan? And they're like, oh, raise money. And I'm like, no, it's to test your own thinking.
Speaker 2:It's really to cause you to pause and better understand, you know, what's your value proposition? Who are the key people? What's the compliance process? What's the regulatory path? These are things you need to reflect on, because typically the people you're asking for money know the space.
Speaker 2:They're just as smart as you. And I think if we go from 60 pages to 12 slides, what are you losing by not having that middle ground? That's what I'm concerned about there.
Speaker 1:Yeah. Yeah. Let alone, like, how would I phrase this? I think whenever you think about more powerful AI and what it could create, there's this aspect that it still needs to understand the dynamics of the different relationships, let alone the larger technological, societal, and business trends that are happening. It can only offer a certain dimension of reality at this point in time, and it can't always incorporate those things. Certainly, as you've seen, there are smart glasses coming out.
Speaker 1:There's AI pins and other stuff that can just record all of your conversations and perhaps integrate those insights into whatever findings it's trying to present and research. But like you're saying, there's that moment of truly stewing on something and thinking about how it reflects your worldview, your experience, what you're observing in the industry. And if you go from, like you said, 60 pages to 12 slides without that stewing in the middle, yes, let alone your own human touch, you lose a lot along the way.
Speaker 2:I think you may. I really, really do. There's no pause in the work. You know, there's no consideration of process. And I think the knowledge creation with AI just happens too quick.
Speaker 2:As I say in my piece, the biggest issue I have with AI as it's spread throughout higher ed this whole last year, Kyle, and you must have seen it, you pulled these podcasts together, is that all anyone wants to do is talk about cheating. That battle's been lost, and it's been lost for years.
Speaker 2:Students have cheated for decades. You know, they've written, you know, notes on their hands. They tape notes to their ball caps. They'll put things on notebooks and slide them under the chair in front of them. They'll look to the left.
Speaker 2:They'll look to the right. You know, and if you think about it from a cheating perspective, the reason cheating is an issue is because there's just no learning involved. So they're gonna use AI. It's gonna accelerate the product, but what are the implications for the learning? But the AI is not going away.
Speaker 2:So that gets back to some of my comments around assessment earlier. So
Speaker 1:Mhmm. Mhmm.
Speaker 2:I mean, I'm sure you've seen The New Yorker this past week, "everyone cheats," you know, and the quotes and the anecdotes from students. Yeah. It's there. Okay, folks.
Speaker 2:Now what do we do?
Speaker 1:I think that's the side of it that we're still waking up to, which is that AI has improved a lot in the past two and a half years, and it's going to improve over this summer with the release of ChatGPT 5 and the introduction of the o3 reasoning model to a lot of people. And I think we haven't really even truly adapted to the last wave of AI, and now the next one is coming, and we don't fully understand the implications of that yet.
Speaker 2:We don't. And the other thing too, I think part of the reason is, as you said, we're kind of one step behind. You know, I will talk to faculty and administrators, and they'll say, oh, what about AI hallucinations? And, you know, what about AI misinformation, and this and that? So they're judging the future of AI's potential on its current capabilities.
Speaker 2:And I say to them, I'm like, do you remember, in, I don't know, 1998, logging on to AOL, and your modem would be in the corner and it sounded like it was popping popcorn? I go: and now it's the basis for the global economy. Again, if you had judged the Internet's future by that clicking, clacking modem in the corner of your living room, and then think about Amazon and Google, you'd have really missed the boat. Same thing with AI, folks. I'll share this with colleagues.
Speaker 2:I have an email that I'll send around internally. They're like, well, AI could never teach my class, and I'll go, oh? And I'll list 10 things that AI is doing now, from drug discovery all the way to launching rockets. But no, AI couldn't stand in front of a live class and click PowerPoints. Hey.
Speaker 2:Come on, folks.
Speaker 1:I think the challenge is that we all invest so much time and energy into becoming whoever we wanna be, and then we settle into that. And especially if we've been in a particular role for a long time and we've seemingly seen other technological changes come and go, we can kinda get convinced like, well, we'll be fine through this one. You know, there's a lot of hype behind this one; that's what I hear a lot, that it's kinda hype, they're just juicing their stock price or whatever. But if you're actually in the trenches, you see how quickly it's coming.
Speaker 2:You do. And again, that's why I made some of those comments. You know, I'm very cautious about the words I write. Being here with you, I was flattered when you reached out, but I am not bowing at the temple of Elon Musk. I'm just not.
Speaker 2:Or Sam Altman, take your pick. But it's coming, and it's coming quicker than we would probably like. I mean, any system only has so much capacity for change. And the problem with higher ed, Kyle, is that we'd already been under siege: the online learning platforms, COVID, the demographic cliff, the international discord, and now AI. Well, I'm sorry.
Speaker 2:That's just how it's occurring. There's no way around it. So let's deal with it. You know, let's have a discussion, because it's not going away, and it's only going to get more pronounced in its impact. But, yeah, I agree.
Speaker 2:I really, really do, and I do hear some of that as well on campus, which is frustrating. In the New York Times a few weeks ago, there was a columnist who called AI a middling technology. I don't know if you saw it; she's a professor down in North Carolina. And she made a very compelling case for why we should be concerned about AI, and then she pivoted dramatically and politicized it: that basically AI is this grand scheme of the technocrats.
Speaker 2:I mean, come on people. That's just not the case.
Speaker 1:I think what's hard for people to fully understand in this period of creative destruction is that the changes are already happening right beneath our feet, and a lot of people are starting to ring the bell really loudly. Like, for example, Kevin Roose in the New York Times, Ezra Klein. Even Thomas Friedman, I know, had a whole column about it. Yeah. And you have to kind of wonder: at what point, with people trying to get the message out and trying to explain that this is important, that it's disruptive, will people actually take the message seriously and start to figure out if these things are true, if these people are right, what it means for the way that we currently do business, and how we get in front of it rather than trail behind it?
Speaker 2:No, I think that's well said. You know, all the studies that look at communication, and when you have to communicate bad news or dire news, I think they've even done some studies around climate change, show it really is curvilinear: when you start to tell people, hey, there's something happening, there's some change here, we need to have a response, at some point it tips over, and then they stop listening.
Speaker 2:And then they get to the point of, you know what? Okay, I'll just deal with the consequences. And I think that's going to contribute to the AI haves and have-nots, because, and I talked about this earlier, there is a level of AI paralysis at a lot of institutions, where people have said, alright, we'll just see what happens.
Speaker 2:Here's the issue, though: you can't blame them. If you look back, Kyle, and again, I'm a little older than you, but if you talk to some folks, the last two major technology-driven disruptions in higher ed, going back twenty, twenty-five years, were online learning management systems, or learning management platforms, largely driven by Blackboard, Canvas, take your pick. You could go out and basically pull one out of a box, and next thing you know, you're online. And around that same period was PeopleSoft on the ERP side of things, which similarly allowed you to automate internal processes.
Speaker 2:They're waiting for some AI vendor to show up, you know, be it Salesforce or OpenAI, and go, hey, here's your AI strategy, here's your AI story. And that's just not gonna be the case this time. And so I think that AI paralysis is caused by that: alright, we're just waiting. We're waiting.
Speaker 2:And, you know, listen. PwC, Accenture, they're all chomping at the bit to get on campus. I mean, the billable hours that are gonna be associated with this. Because I think it largely will be build your own. It has to be build your own if you wanna have any kind of competitive advantage. There was an article in MIT Sloan Management Review about two weeks ago.
Speaker 2:It was written by, and I apologize to this individual, an incredibly prolific AI researcher, and Jay Barney from my field, strategy. And their point was: listen, if you're gonna roll AI out of a box, it's not gonna give you any competitive advantage as an institution. It's basically just going to be plumbing. And the same way plumbing is the same in every building, you don't want your AI to be the same as in every other building.
Speaker 1:I think, like, what people haven't fully wrapped their heads around is that the frontier AI labs are certainly in a race to basically be the winner, both of the market and of the most powerful model. But so too is every university in the country racing to establish their AI strategy, and every possible technology startup trying to figure out how to take their slice of this bigger pie. That's inevitably going to create a situation of winners and losers. But as this happens, search itself and how people find information is changing, and simply buying the top blue link in Google is not how people are gonna decide about colleges anymore. And I think that is also coming for us at the same time, in terms of the personalization of AI, all of these stored memories.
Speaker 1:And pretty soon an eleventh grader is going to ask, what's the best college for me? And I think they might get a very different answer.
Speaker 2:Yeah, Kyle, you're so dead on. I mean, I have seen that. I've been in academia since 2005, when I left industry. You know, for the past twenty years, you've gone from let's get everyone to campus, and now it's let's target a group of folks, or they'll even target a town. But the degree of personalization you're describing is something most institutions don't do, and that's very soon going to be, you know, part of what I mean when I say the haves and have-nots. It will be that eleventh grader working with an AI agent that will basically go out, scan the environment, interact with those institutions' agents, and you're gonna get a really diverse spectrum of institutions delivered to that individual, ones they probably wouldn't have even considered.
Speaker 2:And then here's where the other thing comes in. And, you know, you saw the debacle with FAFSA two years ago. The level of tailoring on the financial package that is gonna go on with that eleventh grader is gonna be something we can't even comprehend right now.
Speaker 1:I also worry a lot about this kind of dynamic, and I think it's one you've been hinting at: that the digital divide is going to turn into the intelligence gap, and there's going to be an entire cohort of people who still think, well, this is kind of just a better Google, or maybe it's like a Word doc that writes itself. Whereas other people at other universities are saying, wow, I can vibe code the prototype for my startup and raise venture capital for my ideas. And then maybe other people even a little further along are saying, well, what does it mean now that agents can automate things and automate work? And I just feel like, while there are AI literacy efforts on a lot of campuses and many efforts to figure out what it means to put this education in the hands of people, I think we're gonna start seeing a real chasm open up across the board.
Speaker 2:I agree. And I have said, and I think this is a prime example, and I speak to this in the essay: this has to come from the humanities. This has to come from the social sciences, how we train individuals to understand the issues around information literacy, the implications of that information, how to identify misinformation. That is not going to come out of engineering, business, or the sciences.
Speaker 2:It's just not. And so, yeah, I think that might close that chasm, but it has to be happening now. It really does. Otherwise, you're just going to get a cohort of individuals, and people have talked about this, that don't know how to handle information, and they certainly won't given the confluence of information from all the different areas, whether it be higher ed or the media or entertainment. There has to be some sort of grounding.
Speaker 2:We have to better equip the college-age student. I agree.
Speaker 1:Well, and I think there's also a piece of this that is maybe not natively in most people's wheelhouse, which is just the idea of strategic foresight: trying to think about, if this is where things are now, where might they be two to three years from now based on current trends, and what are the implications that could have? And I'm sure you've probably seen the METR studies, where the agentic task length is now doubling every seven to eight months. Some people suspect, based on the measurements of o3, that it's actually closer to every three to four. And what that essentially means is that we're moving from a world where an agent can work reliably, say, for an hour to a world where, in a few years, maybe dozens, if not hundreds of them can work for four hours or eight hours. That is going to have lots of implications for so many things.
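[Editor's note: the extrapolation in Kyle's point is simple compounding: reliable task length doubles every fixed interval. A sketch using the doubling times mentioned; the one-hour starting point is an assumed placeholder, not a METR figure.]

```python
# Task-length horizon under a fixed doubling time:
# length(t) = start * 2 ** (months_elapsed / doubling_months)

def horizon(start_hours: float, doubling_months: float, months: float) -> float:
    return start_hours * 2 ** (months / doubling_months)

START = 1.0  # assume agents handle ~1-hour tasks reliably today (placeholder)

for label, d in [("7-month doubling", 7.0), ("3.5-month doubling", 3.5)]:
    projections = ", ".join(
        f"{m} mo: ~{horizon(START, d, m):.0f} h" for m in (12, 24, 36)
    )
    print(f"{label}: {projections}")
```

Under the faster doubling time, the same three years take a one-hour agent past the multi-day mark, which is the gap between the two camps Kyle describes.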
Speaker 1:And I feel like most people are still thinking it's about prompts when the next wave is coming.
Speaker 2:Yeah. I think the agent wave, I said it in the piece: in 2030, five years out, the average college student is going to go to school and, instead of leaving with a University of Maine t-shirt, they're going to leave with an agent. They really will. And that agent will be the first block in their lifelong learning experience. And so that's what it looks like.
Speaker 2:I guess we have a pretty strong idea of what the education looks like: an agent-driven education will be incredibly personalized and accessible. The next question is, what do the jobs look like? Do the jobs look similar to that? And I don't know the answer to that.
Speaker 2:I saw an inquiry on LinkedIn a couple days ago that spurred a lot of discussion. It was a parent who basically said, you know, I have a six and an eight year old. What do their jobs look like in sixteen years? And that's a reasonable question. If you look at when I was eight, or maybe even you, Kyle, I mean, our existence still looks the same. It's not going to for these folks.
Speaker 2:So this gets to your previous point around how do we better prepare people for this era of information, and that is not happening right now. Right now, as you said, everyone's concerned about prompt engineering: how do I work with prompts? How do I write an effective prompt? It's gonna be a valuable skill, but that skill is going to become baseline, foundational, very quickly.
Speaker 1:One thought experiment I did recently that I would be curious to hear your thoughts on: I thought about how today's eleventh or twelfth grader is starting to make choices about what they wanna study, who they wanna be, and perhaps they're starting to research colleges. You add two years to that, and that puts us at 2027-ish, 2028. That's the midterm for when a lot of people are saying even more powerful AI is coming. You add four years on top of that, they're now working on their degree, their diploma, trying to get into their field, and the other side of that effectively puts us into early 2030. And the whole idea is that I feel like there's kind of this eight-year reckoning happening, where we don't really have the proper AI literacy in place in K through 12 yet.
Speaker 1:Most universities are kind of all over the place in terms of how AI is integrated into their curriculum. And meanwhile, the world is gonna change a lot while the students are there, and we don't even know what jobs are necessarily gonna be on the other side. And it's almost like you need to help people think, like I said earlier, two to three years ahead, but also to be regularly informing themselves about AI and where it's going. Otherwise, we're potentially building a bridge that may not lead to where people wanted to go by the time they get to the other side.
Speaker 2:Yeah. You know, I think most of my experience has been in IT and the life sciences and the incubator over the past fifteen years. The very reflexive response is, oh, we want to push young folks into STEM. Sure. I guess.
Speaker 2:You know, I did some research back in 2018, 2019 with a colleague of mine, Professor Brett Humbert here, where we looked at automation in jobs and what types of jobs would be durable, disrupted, or displaced. And I want to go back to that because, as I said, a year ago I started thinking differently about AI and the pace of change. In the time frame that you just outlined, Kyle, you talked about AGI, and, I know I'm bouncing all over this, but this is a big question: I think the next generation will be okay. I think the way they work will be foreign to us, and they will work in a world where AI is a true partner.
Speaker 2:If I go back, and I don't wanna make this personal, but I'm hoping it makes the point: I live down the street from my father. My father was a carpenter growing up. He worked in the trades, and he had a work-life existence that was pretty well defined. You know, got up at seven, on the site at 7:30, lunch at 11:30, home for supper at five, and that's how he worked.
Speaker 2:And some days he worked Saturdays, very rarely Sundays. And he had a crew, and that was his job existence. Like, if my dad came down to my house right now, Kyle, and I told him I was working, he'd be like, what do you mean you're working? You're talking to a guy on the computer. And it's not a blue collar, white collar thing.
Speaker 2:It's not a services-industry-versus-construction-industry thing. It's just that there is, again, I think, an evolution of work that occurs that the current generation or the previous generation has a hard time comprehending. For better or worse, Kyle, you and I are the current generation. From our perspective, we might guess, we might surmise what my child's work looks like, but I don't know. But I do think it's safe to say AI is going to be central to that work.
Speaker 2:Yes, no kidding. But I mean central in a way that they're literally partnering with AI in every aspect. You know, very quickly, as the agentic stuff evolves, you're gonna have kids in their twenties and early thirties go in, log on at 8:30, and say, alright, HAL.
Speaker 2:Let's just take 2001: A Space Odyssey. What's on the docket today? And that HAL is gonna be like, well, Mary, we were working on the project budget for the new bridge, and unfortunately, the price of steel has gone up. I've done the models while you were sleeping. What do you think?
Speaker 2:And they're going to look. And even though HAL will have a degree of intelligence and information search that Mary couldn't even comprehend, Mary might have a consideration and say, well, have you looked at x or y, or could we look at another thing? And then she'll send HAL off to do the work. It'll come back.
Speaker 2:That's, I think, more and more what we're going to see. And so, to your point, I think AI literacy: understanding how AI models work, how the large language models work, how the probabilistic models work, and then helping folks, that 14-year-old now, so that when they're 24, they can go into it. There's going to be some pain, I think, in higher ed. And, you know, I'm sure you saw it: PwC just laid off 1,200 people last week, and in that statement they cited AI. And Chegg just announced that they were laying off 20% of their workforce. Reason: AI.
Speaker 2:Now there is part of me that is like, okay, are they just using AI as, you know, cover? I don't know. I read both PRs and different news accounts, and I don't think they are. I truly think that AI is going to make some jobs just obsolete.
Speaker 2:You don't need them. And I deal with this every day. I teach two types of populations here, Kyle. I teach strategy at the undergraduate level; they're 22-year-olds.
Speaker 2:They're about to graduate, I deal a lot with them, and I teach MBA students. The MBA students are typically looking for a career pivot, and the 22-year-olds are looking for their first job. In Boston, they'll go work for Putnam, or they'll go work for State Street or Fidelity in the financial industry. And I can't help but wonder, given what we saw last week in The Atlantic about the job shortage for graduating seniors, and hiring managers basically saying, well, we're using AI.
Speaker 2:That's on me. That's on me. That's on the professor. That's on the institution. And that's one of the biggest problems I have, and it supports your contention, Kyle.
Speaker 2:There isn't enough questioning going around. It's: hey, we're going to teach an accountant. Four years, out, see you. Accountant, four years, see you. It's like some Industrial Revolution process where we're just churning out students, and the expectation is that on the back end there'll be the CPA firms that are like, hey, great, we'll hire them.
Speaker 2:Well, they're not hiring anymore, folks, so you're still churning and no one's hiring. So what are you doing that needs to change? And again, higher ed's not doing that, because we're not well suited to do that. So I think AI literacy, and I do think some of that thinking has to come out of the humanities, should be central to any institution. And I hear people degrade it again. They're like, we're not a trade school.
Speaker 2:We don't teach kids how to program. That's bullshit. Excuse my language. Okay. Back in the day, I learned Java, COBOL, BASIC.
Speaker 2:I could say hello world in about six different languages. We do train students on technical proficiency, but at least let's have the humility to know that this isn't Java, this isn't Python; this is something different, and AI is a different world. So let's give them the fluency to go into that world and be able to function. And that's the problem. Sorry, I went on there a little too long again.
Speaker 1:No, that was an excellent answer, and I had so many thoughts while you were giving it. I think the best follow-up might be, building off of some of the points we made earlier: Sam Altman recently, in front of a crowd at a venture capital firm, I believe, talked about how the younger people are, the more integrated they are in terms of how they use AI. And what he meant by that is that some of them are quite literally using it as this career coach, this therapist, this whole all-in-one kind of advisor. And he said people even younger are basically using it almost like an operating system.
Speaker 1:And all you have to do is extrapolate that a little bit to say, well, if they're already that comfortable with AI now and they ask the question, what should I do? What's the career? What's gonna pay this amount or allow this lifestyle? And they instantly trigger an agentic search that looks through 200 websites. Yeah.
Speaker 1:They're gonna have so much access to information and opportunity in ways that we could have only dreamed of even a few years ago.
Speaker 2:Right. And, you know, that gets, again, for me, Kyle, to this notion from Herbert Simon. He was the economist of bounded rationality, and his theories were so elegant in their simplicity. What he was saying is that we can't have full information at any given time, never mind incorrect information. And if we recognize that AI is helping us, not overcoming bounded rationality but providing a crutch in dealing with it, we can make better decisions.
Speaker 2:And, you know, we can deal with the irrationality that we see in human-based decisions. We can get, you know, objective. I just saw something yesterday where AI was being used to provide objective answers in higher ed. I see it. You and I see all this stuff all the time, and I forget where it came from. But that's, I think, to your point: how do we use AI to help transcend or deal with the whole notion of bounded rationality to make better decisions?
Speaker 2:You know, AI is gonna help a lot. Like, if you and I wanted to start a bike company tomorrow, AI could help us do that. It could help us source all the parts. It could find manufacturing capacity in the Far East, and you and I could have Kyle and Scott's bike company probably by May, and that's a pretty powerful thing. So look at AI as better enabling how we allocate resources in a society, across an economy, and it's able to do that.
Speaker 2:That's not a bad thing. The one thing I always talk to people about is around this bounded rationality. The typical person thinks in X and Y, you know, a two-by-two, and then you fall into one of four buckets. AI thinks like this: it's like an explosion in the way it goes out and gets information.
Speaker 2:Now, it doesn't yet know the implications of that information, and that's where you come in. And, yeah, I think that's what we have to hope for: what they call the centaur model, human in the loop, AI in the loop. There is going to be a level of AI partnering that is just gonna be central to the existence of anyone under the age of 55, I think, moving forward. You know, I use 55 because of me. I'll be able to get through my life and not have AI be central.
Speaker 2:I will. I have no doubt. If I want to, I can, but I won't need it for my work existence. I don't need it for my life existence, but other folks are going to do that.
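[Editor's note: a minimal sketch of the "centaur" loop described above, where the agent drafts and the human reviews and redirects. The ask_agent function is a hypothetical stand-in for whatever model API an institution actually wires in.]

```python
# Hypothetical human-in-the-loop ("centaur") work loop: the agent proposes,
# the human signs off or steers, and the agent reworks the draft.

def ask_agent(task: str) -> str:
    """Stand-in for a call to an AI agent; returns a draft result."""
    return f"[draft analysis for: {task}]"

def centaur_loop(task: str, max_rounds: int = 3) -> str:
    draft = ask_agent(task)
    for round_no in range(1, max_rounds + 1):
        print(f"Round {round_no}: agent proposes -> {draft}")
        verdict = input("Accept (a), or describe what to consider instead: ")
        if verdict.strip().lower() == "a":
            return draft                       # human signs off
        # human steers ("have you looked at x or y?"); agent reworks the draft
        draft = ask_agent(f"{task}; also consider: {verdict}")
    return draft                               # fall back to the latest draft

if __name__ == "__main__":
    centaur_loop("project budget for the new bridge with updated steel prices")
```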
Speaker 1:I think that's a very powerful point. I'd be curious, when you think about where the conversation is in higher ed right now, and, to me, the absolute overwhelm of information if you're trying to follow AI even on a daily or weekly basis: I think the challenge is that people either have their own intuitions that lead them to certain conclusions, or maybe they're just not seeing the stuff that says this is going to be more disruptive and you need to take it more seriously. What does it mean to go out onto a college campus and try to spread the word that it's not just about doing a little bit with AI and pretending everything is going to be fine? It means considering that this could be very disruptive and that we really might need to change things.
Speaker 1:How do you think someone goes about trying to start those conversations and help people kind of open their eyes to what might be possible and could be coming?
Speaker 2:I was incredibly conflicted when I wrote this piece. Incredibly. And largely because I love higher education. I do. I love being a professor, and I am concerned for how it's going to change, but it is going to change.
Speaker 2:And I need people to hear that. The second thing, around conversations on campus: I wanna give you, again, a personal anecdote that illustrates the problem with having these conversations. Higher education is an incredibly risk-averse, herd-like arena. It just is. Most institutional leaders don't want to break from the herd.
Speaker 2:They want to stick together, maybe do something innovative, but for the most part, again, risk averse. I remember being dean of the business school in 2014 and sitting in a room with the other deans and the provost. And at the time, the chancellor and other folks at UMass Lowell in this consultant came in over a decade and said, Well, folks, we have a demographic cliff coming a decade ago and they're like, Oh, demographic cliff. And they then explained it that, you know, in the early aughts and then in 02/2008, the birth rate really dropped. And it's not a tough model.
Speaker 2:Someone having a baby eighteen years ago, that's your college student, and they're not having babies, so you're gonna have a problem in a few years. Flash forward to 2024, 2025, and, oh my God, Kyle, there's a demographic cliff. I read so much about these demographic cliffs now, you'd have thought they just announced, like in Armageddon, that there's an asteroid about to hit the planet. We've known for fifteen years that a demographic cliff was coming. So, to your question, the only way these conversations happen in a risk-averse, herd-like environment is when the shit hits the fan, and the shit's hitting the fan right now for a lot of institutions.
Speaker 2:At institutions like UMass Lowell, other regional institutions, regional universities, community colleges, enrollments are dropping, costs keep going up, and now people are ready to have those conversations. So the current moment really is the right time to have the AI conversation. When I have conversations about AI on campus, and since the piece ran two months ago now, I've talked to everyone from system presidents down to faculty, administrators, registrars, and anyone who reaches out, like you, I talk to them because I learn just as much. And the number one question I ask is: who owns AI on your campus?
Speaker 2:And you hear crickets. Cricket, cricket, cricket. No one has an answer for it. Very few do. Who owns AI on your campus? Well, our CIO.
Speaker 2:Well, the deans. Well, the provost is doing some stuff. AI is not that. Again, go back to my previous comments. Who owns your learning management platform?
Speaker 2:Oh, the provost. Who owns your PeopleSoft implementation? Oh, the CIO. It ain't that. This is literally central to your existence.
Speaker 2:AI is existential. If you told me your chancellor owned it, I'd be like, damn, good decision. Your chancellor should own AI for the next five years. If you told me, you know, you hired a CAIO, you know, a chief AI officer, good move. Good move.
Speaker 2:Pay them the $400,000. Let them run your AI. Let them work with the provost. Let them work with the CIO. Let them report right to the chancellor.
Speaker 2:That's the first question. If you think about it, any good conversation starts with a question. What do you want for dinner? Where do you want to go on vacation? Who owns AI on your campus?
Speaker 2:And it's amazing to me. In the past two months, Kyle: silence, absolute silence. And then you'll hear, well, we have some pilot projects going around, and the deans are doing something, and that's a recipe for disaster. Again, I go back to a lot of the literature and the current thinking, most of it coming out of MIT, not surprisingly, and Stanford. And one of the biggest questions, not just in higher ed, is: who owns your AI strategy?
Speaker 2:Do you need a chief AI officer? And yes, you do. You do. This is not a CIO level, CTO level job. It's not.
Speaker 2:AI is not something to be lumped in with data, or with cybersecurity and load management. AI is central to your existence as an institution, and it should be owned by someone solely. So, yeah, I hope that answers your question. But that's the first step in any conversation on campus.
Speaker 1:I think that is a very profound question to raise. You know, people listen to a podcast for an hour, and they get in their driveway, they come up to their walkway, and you often wanna leave them with something that is gonna echo through their brain for the rest of the evening, rest of the night. And honestly, I think the question you just raised right there, who owns AI on your campus, is such an important parting shot. Is there anything else you wanna say that we didn't get a chance to cover on the show today? Any other final advice or thoughts?
Speaker 2:The other thing I do when I talk to people is I tell them to do something very simple: do an AI inventory. And they're like, well, what do you mean, do an AI inventory? I'm like, like anything else, tell me what's happening on your campus right now. And I don't mean students logging on to ChatGPT.
Speaker 2:AI is one of those things, given the openness of the tools. You look at the things out there, like, OpenAI just put out, hey, here's how to build an agentic AI instance. This is really a grassroots technology. It's not like a top-down ERP platform or a data platform. I would imagine if you go out into any institution, there will be different instances of AI happening. Oh, I'm using AI to write a grant.
Speaker 2:Oh, I'm using AI to develop slides. Oh, I'm using AI to do capacity planning in classrooms. Do an inventory. Find out what's going on first. Those things need to happen in parallel.
Speaker 2:While you're hiring a chief AI officer, find out what you're doing, so that when that person arrives, you can tell him or her: this is where we're at. Then it gives you a sense of what things need nourishment and investment, the things that are working. What are the pilots you wanna continue with? And then the second thing is, what are the things you wanna kill? The ones that don't contribute.
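[Editor's note: one concrete way to run the inventory being described, a flat catalog of known AI uses with an accountable owner and a status, so an incoming chief AI officer can sort what to nourish from what to kill. The fields and entries are hypothetical.]

```python
# Hypothetical campus AI inventory: catalog every known use of AI,
# then triage into invest-versus-review piles. Entries are illustrative.

from dataclasses import dataclass

@dataclass
class AIUse:
    unit: str      # who runs it
    purpose: str   # what it does
    owner: str     # named person accountable, or "unknown"
    status: str    # "pilot", "production", or "orphaned"

INVENTORY = [
    AIUse("Research office", "grant drafting", "A. Dean", "pilot"),
    AIUse("Registrar", "classroom capacity planning", "B. Chair", "pilot"),
    AIUse("Faculty (ad hoc)", "slide generation", "unknown", "orphaned"),
]

def triage(inventory):
    keep = [u for u in inventory if u.owner != "unknown"]
    review = [u for u in inventory if u.owner == "unknown"]
    return keep, review

keep, review = triage(INVENTORY)
print("Nourish/invest:", [u.purpose for u in keep])
print("Review or kill:", [u.purpose for u in review])
```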
Speaker 2:To my last point, every institution has to build their own AI narrative, their own story. Michigan's AI story is different than UMass Lowell's. UMass Lowell's story is different from Babson's. Babson's is different from MIT's. Every institution has to have a unique AI story.
Speaker 2:How is AI making for a fuller campus experience for the student? How is it making our faculty more engaging and productive? How is it helping our administrators make better decisions? So it really is those three things. Who owns AI on your campus?
Speaker 2:What's your AI inventory? And what's your AI story? Those three things need to be fleshed out. They really, really do. And it will happen over the next year.
Speaker 2:You're gonna see that separation next year, Kyle. That's the haves and have-nots. That's all. The good news is we live in exciting times, as I say.
Speaker 1:We certainly do. Well, Scott, thank you so much for doing this and coming on the show today. For listeners, if you wanna check out the show, we're available wherever podcasts can be downloaded and streamed. That's all. Thank you so much.
