J.D. Mosley-Matchett (00:00) It's time for another episode of AI Update brought to you by InforMaven. I'm J.D. Mosley-Matchett, the founder and CEO of InforMaven. And our guest today is Dr. Claire Brady, a nationally recognized consultant, speaker, and coach, who helps higher education leaders navigate AI with intention. Drawing on 25 years of campus and executive experience, she guides institutions in using technology to advance student success, strengthen organizational culture, and deepen human connection. Welcome to the podcast, Claire. Dr. Claire Brady (00:34) I'm so thrilled to be here. Thank you so much, JD. J.D. Mosley-Matchett (00:37) I love your company's tagline: "At the intersection of human expertise and AI innovation." Now, how did you come up with that? And is there any special meaning underlying that phrase? Dr. Claire Brady (00:49) It wasn't actually originally our tagline. My company's name is Glass Half Full Consulting, which, if you spend more than 10 minutes with me, makes perfect sense. I tend to see the positive opportunities that are available. I'm not a deficit-model thinker. But where my work really naturally started to coalesce in the area of AI was this idea that humans are in the driver's seat, right? We're not being left behind. Robots are not coming for us in the way that maybe science fiction novels have told us in the past. And so what I kept hearing from my clients was, "I feel so much more confident. I feel so much more competent after working with you that I feel like I'm in the driver's seat." And so what I really figured out was that the work was really at that intersection of embracing the technology, understanding the technology, J.D. Mosley-Matchett (01:31) Yes. Dr. Claire Brady (01:40) fearing the technology a little bit, but that we're in the driver's seat and that humans need to be at the center of this work or it's not worth it.
And that's really where that tagline came from. J.D. Mosley-Matchett (01:51) You're absolutely right, Claire. That's great. You've written about being a "now-ist" rather than a futurist when it comes to AI. So what does that mean for higher education leaders who are trying to move beyond pilots into large-scale transformation? Dr. Claire Brady (02:07) It's funny, JD. I call myself a "now-ist" all the time because I see this constant gap between what's already happening on our campuses and what leaders think might happen someday. Just recently I was with a campus where leaders were still debating whether to allow students to use ChatGPT. Yet our students have been using it for months, and in some cases years, to draft emails, brainstorm projects, and even plan their course schedules. J.D. Mosley-Matchett (02:20) Mm-hmm. Dr. Claire Brady (02:36) The future isn't coming. It isn't some future point. It's here and it's sitting in our students' pockets, right? So when I meet with presidents and provosts and vice presidents, I often hear, "Should we wait until the technology matures to really dive in?" My response is, our students can't wait. Our employers can't wait. The world isn't waiting for higher ed to feel ready or to feel competent. J.D. Mosley-Matchett (02:42) Okay. Dr. Claire Brady (03:02) The risk isn't moving too quickly, it's waiting too long and leaving the majority of our students to figure it out without us. Too many campuses fall into what I call the pilot trap, running the same experiment with 20 or 100 students while 30,000 are out there figuring it out for themselves. That's not innovation, it's performance dressed up as strategy, right? So a "now-ist" approach is about pairing urgency J.D. Mosley-Matchett (03:14) Mm-hmm. Dr. Claire Brady (03:30) with intentionality.
It means asking, if AI can reduce the administrative tasks burning out our staff, why not scale it now? If AI can help personalize learning in ways that keep students engaged, why limit it to just a pilot? Higher ed leaders don't need to be futurists. They need to act with courage in the present moment. The institutions that will thrive aren't the ones waiting for perfect clarity, but the ones willing to learn in real time and lead with purpose right now. J.D. Mosley-Matchett (04:02) I love that. That is so accurate. In higher education, the work is deeply relational, so maintaining the human touch is really important. Can you describe the most promising AI use cases you've seen that still keep the human connection at the center? Dr. Claire Brady (04:21) Sure. That's what I love about our field, right? Higher education is about human transformation, and no algorithm can ever replace that. One powerful possibility that I see is AI as an early signal, not an answer machine. Imagine a system that reviews early coursework and quietly alerts an advisor that a student might be struggling. The student doesn't get a cold auto-email; they get a phone call or a check-in from someone who knows them and can offer them real support. The technology makes the space far more caring and much more open and available to human conversation. Another example: an AI-powered chatbot that handles the late-night "What time does the library close?" questions, but then instantly routes anything sensitive, like mental health concerns, to a human professional. The bot isn't the advisor or the counselor. It's the triage support, making sure the right person shows up at the right time for the student. And of course, I get really excited about the ways that AI can help faculty and staff spend more time on the high-impact parts of their work.
If a professor can have AI generate practice problems tailored to each student's needs, that frees them up to use class time for the kind of relationship-driven learning that we know works and draws many of us to this work in the first place. Dr. Claire Brady (05:45) At its best, AI doesn't replace connection. It protects it and amplifies it. So when we design with intention, AI becomes less about replacing people and more about restoring their capacity to do the human work that matters most to us. J.D. Mosley-Matchett (05:45) Thank you. Well said. Okay, you've worked with a lot of leadership teams. Are there any cultural or organizational barriers that you've noticed most often get in the way of meaningful AI integration? Dr. Claire Brady (06:06) Mm-hmm. It's such a good question. So many. The biggest barriers I see in my consulting work, though, aren't technical. They're all cultural. Higher ed is steeped in traditions of deliberation, consensus, and shared governance. J.D. Mosley-Matchett (06:19) Okay. Dr. Claire Brady (06:30) And these are all absolutely strengths of our field. But they can also sometimes create what I call innovation gridlock, right? The risk of making the wrong decision often feels scarier than the risk of making no decision. I've seen leadership teams spend an entire meeting debating whether a chatbot should greet students with "hello" or with "hi," while their students wait days for answers about financial aid or advising. That's not because those leaders don't care. It's because the culture makes it hard to move forward without perfect clarity. We also all struggle with silos. J.D. Mosley-Matchett (06:52) Yeah. Dr. Claire Brady (07:06) IT might be building out infrastructure. Faculty might be experimenting with AI in the classroom. And student affairs might be piloting new engagement tools. Yet they're rarely sitting in the same room to design solutions together.
The opportunity is to align those efforts around shared student outcomes. And then there's the idea of expertise hoarding. We sometimes assume only computer scientists or external vendors know how to "do AI." Dr. Claire Brady (07:34) But I've sat with registrars who know exactly where automation could free up capacity. And I've sat with student affairs leaders who can spot the human implications of AI better than any technologist can. The campuses that thrive are the ones that invite everyone's expertise into the conversation, including students. Meaningful integration comes when we stop treating AI as just another software rollout and start treating it as a leadership and cultural shift. J.D. Mosley-Matchett (07:41) Oh yeah. Yes. Dr. Claire Brady (08:02) It also happens when we align around shared outcomes and let AI serve our people, not the other way around. J.D. Mosley-Matchett (08:10) You often emphasize responsible and human-centered AI. So what does responsible AI look like in a student-centered higher education context? Dr. Claire Brady (08:21) For me, this is really personal. As a first-generation college student, I know what it feels like when the system wasn't necessarily designed with my experience in mind. That's why I believe responsible AI has to be about learning and empowerment. It starts with a simple question I always ask leaders: "Would I want this for my own kid? Would I want this for a kid I care about?" If a predictive model tells us someone might be struggling, the goal shouldn't be to limit their options. It should be to open the doors wider, connect them with a mentor, or proactively provide resources before they fall behind. Responsible AI is also about transparency and trust.
Students deserve to know when they're interacting with an AI, and they should understand how it works, where the data comes from, and what safeguards are in place. If we can't explain it in plain language to a first-year student, we probably shouldn't be using it with them. And then there's the all-important digital literacy, AI literacy, whatever you want to call it. It isn't just a tech skill, it's a matter of ethics and values. Not every student arrives on campus with the same comfort level, confidence, or access. Responsible AI means ensuring all students, not just those with the resources, are equipped to use, critique, and create with these tools. So when we design responsible AI systems that are transparent, ethical, and student-centered, we're not just avoiding harm. We're making higher education more human, more caring, and more aligned with our mission and with our values. J.D. Mosley-Matchett (09:57) Let's do some crystal ball gazing. What excites you the most and what concerns you the most about the next phase of AI in higher education? Dr. Claire Brady (09:59) Ooh, my company's name is Glass Half Full, so you know that I'm gonna start with my concerns and end with my hopes. What worries me is what I call the automation assumption: this belief that if AI can do something, then it automatically should. Just because AI can grade essays or manage advising conversations doesn't mean that's the best use of the technology or the best way for us to serve students. And yes, the vendor pressure right now is real. Every EdTech company has slapped "AI powered" on their marketing materials. And I see institutions making million-dollar, multi-million-dollar decisions based on very short demos instead of aligning with their mission. At the same time, though, I don't want us to miss out on the many opportunities that are in front of us. So, some of the things that excite me: we're finally asking the right questions.
Five years ago, we were still debating whether online education was real. A year ago, we were stuck in conversations about AI tools, academic integrity, and cheating. And today we're asking, "How can AI make higher ed more personalized, more accessible, and more human-centered?" That shift in mindset is so powerful. I think about what's possible on our campuses. Imagine AI helping design flexible course pathways for working parents and adult learners, or a mentorship-matching tool that connects first-year students with alumni who share their career goals, setting the stage for meaningful relationships that boost retention and belonging. Or a multilingual AI tutor that helps students learn in a language that feels most natural to them. These scenarios are more than just tech upgrades. They're about creating environments where students feel seen, supported, and set up to thrive. And that's why, despite the very real challenges, I'm more hopeful about higher education's future today than I ever have been. Far from diminishing our role as educators, AI allows us to focus more deeply on the human connections that matter most in education, the connections that we know help students learn and grow. J.D. Mosley-Matchett (12:20) That's absolutely a glass-half-full concept. I love it. I love it. Thank you so much for sharing your thoughts about AI and your hopes for its future in higher education. This new technology is truly exciting, but I greatly appreciate your focus on keeping humans central to the use of AI within our colleges and universities. Dr. Claire Brady (12:46) Thank you, JD. What a pleasure it is to spend this time with you, and thank you for the work that you're doing. J.D. Mosley-Matchett (12:51) Aww. For more information about AI news and trends that are directly impacting administrators in higher education, please follow InforMaven on LinkedIn and visit our website at informaven.ai.