J.D. Mosley-Matchett (01:08) It's time for another episode of AI Update, brought to you by InforMaven. I'm J.D. Mosley-Matchett, the founder and CEO of InforMaven. And our guest today is Brian Moynihan, who as of October 27th is the director of AI and automation projects at Duke University and the author of the bestselling book, AI Culture Shift. He leads initiatives that bring artificial intelligence and automation to university operations, helping administrators align technology with mission, culture, and people-first design. Welcome to the podcast, Brian. Brian Moynihan (01:56) Thanks, it's nice to be here. J.D. Mosley-Matchett (01:59) Your book, AI Culture Shift, is so insightful. My favorite is part two, "Frameworks to Elevate People," because that's exactly what we need right now. But considering that you wrote it a year ago, which is an eternity in AI evolution time, is there anything you'd add to or change in the book if you were writing it today?
Brian Moynihan (02:10) Yeah, that's a good point. When I was writing a book about AI, and I wrote it with my co-author Adnan Iftekhar, who's also amazing, people would ask, how can you write about AI when it's changing every week and some new thing is always coming out? That's true. But the book is really not about the technology per se. It's about how we build an AI mindset and how we apply it to people and organizations. And as most people would understand, people and organizations move a little slower. That concept of how we take new technology and integrate it with people, with processes, with strategy, with organization, with training, that takes a while, and there's a special element to it. So I think the core elements of the book are still key. Among the themes in there, one is what we call IQ, EQ, and AQ: IQ being intelligence, EQ being the emotional side of things, which is key, and AQ being action and adaptability. That one has definitely resonated with people, including Mustafa Suleyman, one of the co-founders of DeepMind and now the CEO of Microsoft AI. It was an honor of ours to be recognized by him. The other theory I brought, and one I'm excited to bring to my future work as well, is this concept of innovation networks. Brian Moynihan (03:46) This was born out of my time at the University of North Carolina at Chapel Hill, UNC, where we were thinking about how a new technology, and I've gone through waves of emerging tech, best integrates into the organization. Not top down, not bottom up, but through a network of people. So those are the two core ideas in the book, I would say, along with a couple of others we added around the chief AI officer and around ethics and governance.
So I think those are the key elements. Since then I've come up with a couple of additional models, and I'll talk about them today. I would probably bring in some of that, foregrounding the mission a bit more and thinking about how strategy, operations, and tactics blend with the human side of things. But for the most part, the book holds up, and who knows, maybe Adnan and I will do a follow-up. It was a lot of fun collaborating with him, and it's been nice, all the conversations it's opened up. J.D. Mosley-Matchett (04:36) That's great, and those are such good points. Now, let's shift to some higher education questions. Many administrators worry about compliance, accreditation, and data privacy. How can AI actually reduce risk in these areas rather than add to it? Brian Moynihan (04:51) Certainly it does both, right? We wouldn't want to say that AI is all good or all bad; too often we fall into those kinds of discussions. For me, the words balance and trade-offs come up again and again. That said, I think with responsible use, AI can reduce risk in a number of ways. You can improve consistency, traceability, visibility, and decision-making. And a lot of that actually comes down to data. Eighty percent of what AI is, especially if you're going to build it in your own organization, is data. When universities start asking what their secret sauce is, why should we, for instance, build something with our own development team rather than buy it off the shelf from somebody else, the answer is generally that you've got the data. So connecting the data across those elements is really important. And of course, there are problems with connecting the data.
Nobody wants to give up their data source, even within a university setting, because they're afraid they'll leak information or create some sort of safety risk. But think about standardized processes. Usually an AI question starts with, we've got this process that's inefficient, let's add AI to it. And the first question actually has to be, what is the process? Then you generally realize that nobody has thought through very well what the process is. Just asking that question, even with no technology at all, would usually improve the process, because people start paying attention to it and adding some logic to it. J.D. Mosley-Matchett (06:19) Yes. Brian Moynihan (06:32) AI is going to supercharge whatever process you have, so you want that process to be good. Then you get to data governance. With AI, because you can do so much with it and because it's built on these data foundations, you often come to understand that you don't have access to the data you thought you had, and you don't have the process you need. These things, if you're deploying them, should run within controlled environments. Universities can have their own Microsoft 365 Copilot license, for instance, or whatever tool they're using; there are ways of keeping these things bounded or on-prem. And AI can actually help a lot with human error and bias, especially when you have a process where the AI and the human work together. I don't know how many processes would very quickly move to full automation; again, I think the idea of 0% or 100% automation is the wrong way of thinking about it. Brian Moynihan (07:27) Instead, think about how we get through the basic stuff. Say there's an HR process where you send a letter to somebody. Before it's sent, you might get a first draft and pull in some variables from what you know about the person and the position you're talking to them about. You want a human to review it and make sure it makes sense. But even just having that draft, and the trail behind it, can reduce risk in some of those areas. So transparency and that human in the loop are definitely key. When designing these things, people talk about privacy by design, ethics by design, safety by design. The point is not to add those things at the end, once we have a product to ship or a finished process. You put them at the beginning and ask, how do we start with that? And generally when you do, there's a through line that helps. J.D. Mosley-Matchett (08:16) Yeah. Brian Moynihan (08:24) AI by itself, left on its own, is not going to solve your problems. It's going to supercharge whatever cracks are in the system. But thoughtful use of it, really thinking it through, is a new opportunity to think from the beginning again about these processes. And when I say thinking from the beginning, I don't mean you throw out all your legacy stuff and completely start over. A completely new organization could maybe do that and build from the ground up with an AI outlook. But if you have a vision of where you're trying to get, you can start to work backwards: this is the legacy system we've got.
How do we begin with the change management processes we're already going through, the strategic things we've already decided to do, the places where we see the biggest pain points, and then start building toward that vision, which might be a few years out? J.D. Mosley-Matchett (09:11) I love it. Absolutely. And the mindfulness aspect you point out is so important. Okay, institutions often talk about AI readiness. What first steps should a university take if leadership feels like they're behind on AI adoption? Brian Moynihan (09:17) First, start with mission. One of the models I mentioned is what I call the MAP model: Mission, AI, and then everything else, the People, Processes, and Plans. That's MAP. To begin with the end in mind, you begin with your mission. Why do you exist as a company, as an organization? Why do you exist in higher education? I think that comes down to an idea Clayton Christensen uses, the job to be done. Christensen is famous for disruptive innovation, a term a lot of people know, and the question is, what is the job to be done? Famously, people aren't buying drills from Black and Decker; they're buying a solution to a problem they have. They need a hole in their wall, so they buy a drill. But if there were another way of getting that same hole, or another way of hanging a painting, or whatever their problem is, they wouldn't necessarily buy a drill to do it. And you have to think about that from the point of view of a university. J.D. Mosley-Matchett (10:21) Yeah. Brian Moynihan (10:27) What are we trying to achieve? We're trying to educate people. And what other ways might we do it?
So if we're imagining a radically new environment where everything shifts because of AI, market changes, demographics, whatever is changing in the world, plus new competitors like online programs, or people just teaching themselves on YouTube, whatever the alternative might be, you have to think from the beginning: why are we doing what we're doing, and what makes us distinctive? And again, maybe you have a mission statement you can read off the website and it's a sentence. But if people don't feel it, if they don't live and breathe it, it won't be there. So you begin with that mission. Then I borrowed a concept from finance called zero-based costing. If you have a spreadsheet of everything in last year's budget and you have to update it for next year, the coward's way forward is to add 3% to everything and ship it. That's the wrong way; it's not strategic. A better way is to rethink what you need and don't need this year compared to last year, and zero-based costing is one way of doing that: everything begins at zero, and then you add things back one at a time, most important things first. To me, the concept is that you begin with your mission, then you bring in AI, J.D. Mosley-Matchett (11:26) Yeah. Brian Moynihan (11:50) and then you add things back one after another, right? And that P is the People, Processes, and Plan. So you might find that the people you have need to be retrained. Again, I don't think an all-or-nothing vision of AI and jobs, the idea that AI is simply going to steal people's jobs, usually holds up, because people's jobs are pretty complex.
For instance, Ethan Mollick, whom many listeners may follow, he's amazing, I was just reading an article featuring him today, and it talked about the many predictions that radiologists would go away because AI is getting really good at reading x-rays. They haven't; in fact, there may be even more radiologists than before. Even if that one aspect of what a radiologist does, reading the x-ray, were gone, there would still be so many other things they do to integrate into the healthcare system, into the processes and everything else. There's still a role there. And I think people out there who might be afraid for their own roles... J.D. Mosley-Matchett (12:21) Yeah. Brian Moynihan (12:44) maybe will take some solace in that. Basically, any job description is made up of a series of tasks, some of which are automatable, some of which are not, and some of which are partially automatable. And we're definitely going to see a shift in where we're going, so there has to be a new mindset. But to come back to your original question, if you're just getting started with AI, where do you begin? It begins with a cultural transition more than a technical one, in the sense that we start with mission. Run a listening tour: go and find out who at your organization is already using AI, because many people surely are in their daily lives, and maybe in their jobs. They may not want to tell you, but they're doing it informally, and pretending it's not happening isn't great, because some of what's happening may not be so great. Find some way of surfacing those informal uses, hopefully not in a punitive way, and bringing them forward. That's a big part of it. You want to build a mindset.
Connor Grennan, another person I really like and a good one to follow in this space, talks about how getting a treadmill doesn't make you healthy. The treadmill sitting in your room doesn't help; it helps when you get on it and use it. And that's part of AI, right? So you need to start building that habit. You need to build in some literacy around it. You start with some small wins and build up from there. But I've heard a lot of people say you're not really behind, in the sense that the technology keeps growing by leaps and bounds. Brian Moynihan (14:08) As we talk, OpenAI in October of 2025 has released Agent Builder and similar tools that make it easier than ever to build agentic workflows. Somebody without a long background in this could do it with a little drag and drop: embed it, make it look good with a widget, optimize their prompt, plug in new data sources, RAG and vector storage. I don't know if everybody knows all these terms, but these things are getting easier and easier to do. J.D. Mosley-Matchett (14:24) Mm-hmm. Brian Moynihan (14:36) So you're not too far behind, but I do think a cultural transition needs to happen, and a mindset transition. When you start bringing in AI, it's going to surface the problems in your operations and the missed aspects of your mission. And as part of this transformation, which is sociocultural within every organization, every time you bring in a new technology it shifts the way people work, so change management is going to be a big part of it as well. J.D. Mosley-Matchett (14:51) I love it. Okay, you've developed frameworks and innovation networks that are designed to guide AI adoption.
But how do these frameworks help institutions move from theory to sustainable, mission-aligned practice? Brian Moynihan (15:23) SHOT is one of the frameworks I use: Strategy, Humans, Operations, and Tactics. You can think of Strategy, Operations, and Tactics as three levels of the same thing, with Humans all around them. Strategy is the big picture: why do we do what we do, the mission, and what is the landscape? In higher ed, we might think of the competitors, the demographics, the shifts in the marketplace, as you might say. Humans is the human element. It's really important that we think of the humans who are part of this process. Who's affected? How do they feel about it? How do we align them? Where are they seeing strengths, weaknesses, opportunities, threats, the classic model. Operations is thinking across teams: how do we use it across teams? And Tactics is more about how an individual uses it to get done what they need to get done. That's closer to using the tool, time on task, maybe some prompt engineering tricks that can help a little. What I like about the model is that when people talk about AI, they're often only talking about operations, or only about tactics. With this model you're thinking a little about strategy, a little about humans, a little about operations and tactics, and each insight you get from one of them bumps you into another. You start to align on those ideas, and it helps you build a plan. Then there are other aspects I would layer in, like the innovation networks I was talking about. Consider something that comes top down.
The people at the top say, "Okay, everybody go do some AI now," which you sometimes see CEOs doing. That doesn't work that well. And then there's bottom up, where people across the organization find each other and get something going, but without real top-down support. That's tough too. AI is actually a classic case here: I was reading that the number of companies really using AI in their operations is something like one in ten, but the number of people using it as consumers is something like four in ten. So the bottom-up, "bring your own AI" phenomenon is happening; it's a real thing. But both approaches have real problems. Top down, you have the mission, you have the money, you can tell people what to do, but there's also a sense of hubris. J.D. Mosley-Matchett (17:33) Mm-hmm. Brian Moynihan (17:47) You don't understand everything that's happening, all the tensions among your people. And you don't necessarily understand what's happening with the people you serve, your customers, clients, students. The people at the bottom do understand that, but they aren't necessarily aligned with the mission, they don't have the money, and they can't tell people what to do. So you want to find a middle ground: sponsored by the top, aligned with the mission, with funding and support to get things done, J.D. Mosley-Matchett (18:05) Good. Brian Moynihan (18:14) and with the ideas and energy coming up from the bottom, from people who see the need within their jobs and in how they serve others. You combine those two with a bridge, which could be, for instance, programs that pull it together, or people submitting their ideas for what they could do,
and finding ways of training people that come out of that center, then reporting back up. So when I say innovation networks: J.D. Mosley-Matchett (18:37) Yeah. Brian Moynihan (18:43) the classic org chart tends to be a pyramid, with a top and a bottom. The way I like to think about it is, if you consider all those different people in those places, plus the other resources you have, maybe money from different places, et cetera, you can treat them as nodes in a system. If you have one set of nodes that are sparsely connected, you get one set of outcomes. If you have another set of nodes that are densely connected, you get another set of outcomes. And generally, the densely connected ones work a lot better. J.D. Mosley-Matchett (18:47) Mm-hmm. Yeah. Brian Moynihan (19:10) And if you're thinking about where to invest, you don't just randomly drop something somewhere. You think, okay, this is our network; how do we best connect it, and how do we add to it? So the roles I have played, and will play in the future, have to do with that connector role, which at a university can often be seen as bloat, right? This isn't a faculty member teaching a student, so why is this person being paid to do this? People in this position, such as myself, J.D. Mosley-Matchett (19:32) Yes. Oh boy. Brian Moynihan (19:40) really need to show that the connector makes the best use of resources. If these things aren't connected and working in alignment, there's huge waste. So we need to show there's actually a phenomenal return on investment that comes from connecting them wisely and thinking about where to invest on top of that. J.D. Mosley-Matchett (19:45) Mm. Okay, that makes a lot of sense.
Higher education leaders are tasked with balancing efficiency against the human side of education. So how can AI be used to augment staff capacity without losing the trust, empathy, and credibility that students expect? Brian Moynihan (20:17) Definitely, we want AI to free people to be more human. Just about everybody's job has elements that are pure drudgery, right? Copying and pasting from here to there, sending out six emails that say the same thing, or even just producing the first draft of something. Often, if it's a standard thing, the AI can lessen some of that cognitive load. So you want to automate the routine things and then amplify the relationship. Ideally, the time and energy you free up through automation lets you focus more on the human side of the relationship. And I think you should be transparent about it. If you want people to trust you, be clear with them about where and when you're using AI. For instance, when Adnan and I wrote AI Culture Shift, we included a chapter on how we wrote the book. We say explicitly: we had a series of conversations, we used AI for the transcripts, and we fed them back in an iterative conversation with ChatGPT, Claude, and other tools, asking what a chapter outline might look like given what we'd discussed. We came up with beats, we iterated over and over, and we interviewed 13 different people. But in each part of the process, J.D. Mosley-Matchett (21:35) Mm-hmm. Brian Moynihan (21:40) AI played a role, and I wanted to be clear about that. Especially in that book, I saw it as a mosaic, and I was trying to be clear about the way we used it. So transparency is key.
The other piece, as I mentioned before, is training. Training is going to be really key. You can't just hand somebody an AI tool, or any technology tool, and expect it to work out. So the training is about thinking, here is what we automate, and here is the human component we need to add, J.D. Mosley-Matchett (21:48) Yeah. Brian Moynihan (22:09) to make sure the output is good and high quality, but also the heart, the emotional part, the connection with the people we serve. So yes, we definitely want to make our systems faster and our people more present. Those would be the goals. J.D. Mosley-Matchett (22:24) I love that, especially the bit about AI freeing people to be more human. Really good. Okay, looking five to ten years ahead, how might AI reshape the core administrative functions of universities, and what new skills will higher education leaders need in order to thrive in that kind of environment? Brian Moynihan (22:30) I think AI in general, including good old-fashioned AI, is largely about prediction, so we can become more anticipatory about needs. For administrative functions, think about forecasting enrollment trends, budget needs, HR, and the sorts of issues that are going to come up. If we can get ahead of things in that sense and start to predict where they might be going, that would be really good. We'll definitely see more interconnection of data, which we talked about before. If you've got the directory, the administrative systems, the grades, and all the different tech systems found across a university, being able to combine them becomes really helpful.
So if you have a chatbot and a student asks a question, it needs to know who this student is, what classes they're in, and where in the databases the relevant answer lives, and combine all of that into a response. The same goes for staff, faculty, anyone who's interacting with it. So the new skills are more about polyintelligence: thinking about strategy, about data, about empathy. People have been bullish on the humanities; I came from a humanities background originally, but I also have a background in business and tech, and where those three come together has been really critical for me. I think we'll see that a lot. The technology becomes part of leadership, but you don't need to learn how to code and do all these things yourself. You need to be able to see where the technology fits into your mission and your organization, and how it works with people. And it will inevitably be a time, as we've already seen, of more or less constant change. So those human elements of change management, AI literacy, data-informed decision-making, ethical governance, J.D. Mosley-Matchett (24:37) Yes. Brian Moynihan (24:44) those things are all going to be important. And some of it comes down to plain old-fashioned storytelling. I spend a lot of time on metaphor: what's our metaphor here? Our frameworks, like SHOT, are shorthand for saying, okay, we're going to lay the groundwork here and then start building it out. One of the metaphors I actually like is a coloring book. A framework is like a coloring book: J.D. Mosley-Matchett (24:57) Ooh, okay. Brian Moynihan (25:10) you're not starting with a blank page; you can color whatever you want.
But there are some lines there that give you guidelines: okay, this is where we're going. It gives you a little structure to start working within. So you can use whatever framework you're thinking with that way, and you can use metaphors for understanding AI itself. Our metaphor for understanding AI is shifting, right? Is AI more like a junior person, a teaching assistant, somebody who comes to work for you? They're smart, but they don't know your work. Brian Moynihan (25:38) Or is AI more like a concierge? You come to it and it brings you what you need. Each of these metaphors brings out different aspects of what we're doing. So for your particular organization, for the people you serve and your context, which metaphors resonate with people and keep them focused on the bottom line? If they only remember one thing about what they're trying to do, sometimes that metaphor can get it across really well. J.D. Mosley-Matchett (25:38) Hmm. That's really smart. Yes. Okay, well, thank you, Brian, for this excellent overview of the ways AI is changing higher education functions. And I do want to remind our audience that your book, AI Culture Shift, is a great resource for higher education administrators who may face challenges while integrating this new technology. Brian Moynihan (26:08) Yeah, thank you. J.D. Mosley-Matchett (26:31) For more information about AI news and trends that directly impact administrators in higher education, please follow InforMaven on LinkedIn and visit our website at InforMaven.AI. Okay, are there any parting words, anything you might want to add, or anything you felt you didn't get to say along the way? Brian Moynihan (26:57) No, I was loving it. The only thing is, it was all about me; I didn't learn much about you, and I'm curious.
I'm curious about your background, what you bring to this, and your thoughts on it. J.D. Mosley-Matchett (27:01) Oh, it's all about you. This is your interview. But all right. Well, I have more than 30 years of experience in higher education. My last position was director of university accreditation and assessment systems management at UNC Charlotte, and before that I was the VP of academic affairs at the University College of the Cayman Islands. Brian Moynihan (27:30) Oh wow! Cayman Islands! Nice! J.D. Mosley-Matchett (27:30) Anyway, I've been in education a long time, and yes, I'm trying desperately to get back. The reason I got involved in AI in the first place, besides the fact that it was exciting and new, is that I got a bright idea in my dotage. Oh wait, let me stop the recording.