Crafting AI Policies for Higher Education Right Now
In this episode of FYI, we discuss the imperative for AI governance in higher education.
Subscribe for Updates
Don’t miss a single episode—subscribe today for the latest content!
Who is David Hatami?
David is an educational consultant who leverages 25+ years of experience in higher education administration, artificial intelligence, teaching, and curriculum development, providing effective and innovative solutions for online learning and education management.
In this Episode
David Hatami, founder and Managing Director of EduPolicy.ai, joins FYI host Gil Rogers to talk about AI policy in higher education. David has over 25 years in the field and specializes in AI ethics. He stresses the importance of establishing comprehensive AI policies and ethics programs within educational institutions now, rather than leaving the issue for later, and he addresses the complexities and challenges involved and where gray areas might lie.
David highlights the necessity for dynamic, adaptable policies that address usage by both students and staff. He also discusses his consultancy work helping colleges navigate these issues and the critical need for clear, cohesive guidelines to ensure responsible AI usage. Tune in for insights on how educational institutions can prepare for the rapid integration of AI technology today.
Listen to FYI on your favorite podcast platform!
Episode Transcript
Crafting AI Policies for Higher Education Right Now with David Hatami
Publishing Date: October 8, 2024
[00:00:00] Gil: Welcome back to FYI, the For Your Institution Podcast, presented by Mongoose. I’m your host, Gil Rogers. And today, I speak with David Hatami of EduPolicy.ai. We talk about the ethics and policies surrounding AI practices at higher ed institutions. This is important for everyone, from administrators, all the way down to field staff. Let’s listen in!
Hi, David. How are you?
[00:00:30] David: Hey, Gil, good morning. It’s good to be here.
[00:00:33] Gil: Yeah, how is life today?
[00:00:35] David: We have a tropical storm here today, so it’s been very wet for the last two days, but that’s okay. In Florida, we figure these things out. I’ve been through worse storms, so it’s not too terrible.
[00:00:45] Gil: Yeah, I was going to say tropical storm. You better not live in Kansas because that would be even worse.
[00:00:49] David: Exactly, exactly. I’d be up in the sky somewhere, swirling around.
[00:00:54] Gil: Yeah. I live in Maine, and the past couple of weeks, it’s been, like, brutal, like, low 90s and high humidity. So, you get the double whammy. It’s like living in Florida, right? So, it’s a good year to have built a pool in the yard, I guess, for me. So…
[00:01:09] David: As you know, I lived in Maine. About five years ago, we actually moved back. So, I actually remember there was one summer, it was, like, 4th of July and it hit, like, 100 degrees and all the Mainers had no idea what to do with themselves.
[00:01:20] Gil: No idea.
[00:01:20] David: They were, like, totally lost. There’s no AC. Nobody has AC. It’s all, like, window units.
[00:01:26] Gil: Well, I mean, it’s like when it gets cold in Florida, right? When there’s a certain… I remember I was in the Boy Scouts way back in the day, and they have a camp that’s in the Florida Keys, the Florida Sea Base, like, a national training thing down there. And I lived in Connecticut at the time. And we go down for this thing and, like, it was early March, I think. And it was, like, 60 degrees. And I was walking around in flip-flops and shorts and everything. It’s March in Connecticut. That’s, like, a nice day. That’s, like, spring has finally sprung. But you have all the people who are from the Florida Keys, like, “Oh, it is cold.” 60, 65 degrees and overcast and brrr, brrr, so…
[00:02:05] David: I deal with that in my house. I’m the only one who’s still in flip-flops, shorts, and a t-shirt. Everyone else is bundled up in my house. So, I completely understand.
[00:02:13] Gil: You’re like my uncle. You’re the guy who, no matter what the weather, is wearing shorts. Doesn’t matter. It could be sunny and hot. It could be cold. Always wearing shorts. So, shout-out to Uncle Steve down in Connecticut.
[00:02:22] David: There you go. I was, like, I think it’s because I’m from cold country. I think it just never left.
[00:02:26] Gil: That’s it. Oh, so we are here to talk about AI policy. And I know it’s a topic that you are pretty passionate about and knowledgeable about. I’ve been following you on LinkedIn now for a while. There are a lot of ramifications when it comes to the ethics of AI and the use of AI in academia. And you seem to be something of an expert on this topic. And so, I appreciate you spending some time.
Before we hop in and have a conversation, for our folks who might not know you or may not have heard from you in the past, we’ll put a link to your LinkedIn in the episode notes for our listeners. So, that’ll be there as a resource. So, click that link, get the follow going. But would love for you to just, you know, introduce yourself. Where are you from? Obviously, where you live. How you got to where you are, how you got interested and focused on this topic, and then we can just go from there.
[00:03:18] David: Thank you so much, Gil. I really appreciate that, you know. I also want to thank you for the opportunity to be on the podcast. Always appreciate that.
My name is David Hatami. I’m a dean by profession. I’ve been in higher education now for probably about 25 years. I’m finishing up my doctorate in higher ed leadership and administration. I should be done this fall, done with chapter four, working on chapter five. So, I’ll be a doctor soon enough.
I’ve been following AI for a long time now, for years and years, and it, sort of, wasn’t until maybe about last November that I really, kind of, had an idea of some directions of where to take this. My 14-year-old son came home one day from school – this was last November. He looked me in the eye. He had a huge smile on his face. I mean, it was, like, beaming from ear to ear, his eyes were lit up. And I instantly knew that something was wrong. Instantly. He was way too happy.
And I asked him what was going on, and he told me, he goes, “Dad, guess what?” What? Don’t keep me in suspense. “Because I never have to do homework again.” And I was like, “Oh, I know. I know what you learned about today.” And I said, “Did your friends teach you about ChatGPT?” And he just smiled, and I knew it.
So, we had this conversation. Obviously, it’s a tool, it’s not a cheat. But what it did is it really got me thinking about this process. You know, it’s a true story. And it made me, sort of, think about who’s talking to students. Who’s talking to faculty? Who’s talking to administrators about this stuff? And this is right about the point where ChatGPT hit that one-year anniversary and everybody, sort of, recognized, like, “Oh, this isn’t going anywhere. This isn’t, like, a fad. This isn’t, like, some, like, passing tech that’s going to go away and fizzle out. Like, it’s here to stay.”
And I realized, when I was following my LinkedIn feeds, you know, everybody’s talking about the same thing — students using GPT in the classroom and, you know, using it as a cheat, not a tool. But I instantly realized that they were missing something, that we were putting, kind of, the cart before the horse, if you will, that nobody was talking about policy. Nobody was talking about AI policy. Nobody was talking about AI ethics. And that conversation has shifted a little bit more into the narrative now. And it’s a little bit more relevant now, but, you know, what, six, eight months later.
So, what I started doing is I started recognizing that this was a predicate. It had to happen first. So, I started thinking about this problem and how we could put it in context in higher education and K-12, which is why I ultimately decided to open a consultancy, EduPolicy.ai, to address this situation in higher ed.
And it, kind of, started, I was speaking with some of my colleagues. I just started showing them some of my ideas. And everybody’s like, “No, this is good. You need to follow up on this.” And I was being referred from one person to the next, to the next, to the next. And everybody had the same concern. So, that’s, kind of, how it all started. And then the rest, as I say, is history.
[00:06:13] Gil: Yeah, and I think one of the things that’s of note and I’ve heard it said multiple times is, as a society with social media, we failed our children in many respects, right? We, kind of, let all of this get out of control, the algorithms, the advertising platforms, the mental health.
[00:06:30] David: Cyberbullying, yeah.
[00:06:31] Gil: And now, we have a chance… I don’t want to call it a mulligan, but I’ll call it a mulligan. We have a chance with AI to do it right this time. And I think lessons learned over the past 20-plus years of social media, some of them can be applied here when it comes to proper use, proper ethics, because we read the articles all the time. We have concerns about students using AI for cheating, for plagiarism, accuracy, and just learning, right? Like, “Oh, I can just type in a question, copy and paste, don’t learn anything, right?” There’s that side of it.
And then there’s the administrative side of it, too. I don’t hear as much about guidance counselors writing letters of recommendation for students, or guidance counselors reading, or admissions officers reading and summarizing essays. These are all efficiency boosters, right? And so, I think there’s a certain, well, if it’s good enough for the goose, it’s good enough for the gander type of conversation, where institutions have a responsibility with how we use AI in our everyday work and the expectations we place on students.
And so, it’s an interesting, kind of, conversation to have. I’d love, and I know you host workshops and you have content out there, just, some of your thoughts on, kind of, for institutions, where should they get started in having these conversations? Who should be involved? And what’s, kind of, a high-level process when it comes to homing in on the ethics and policies that they should be thinking about with respect to AI?
[00:07:50] David: Yeah, everything you said is absolutely right. This is the chance to do it right. And I think the societal implications are actually a whole lot bigger this time. I think there’s actually more at stake, because, in essence, we’re training a generation on how to use this technology correctly and appropriately.
I do believe that the responsibility falls squarely on administrators and institutions. And I believe that, while, typically, higher ed likes to move slower than molasses in a Maine winter, as I like to say, I think, in this particular instance, they really understand. I think there is a widespread understanding that this isn’t something that can wait. It can’t be put off to next year or, you know, let’s table it and do a committee next semester. I think there is a genuine recognition of the value and the timing of managing this problem.
I mean, let me give you just a crazy hypothetical here, right? You have a student that writes a paper with ChatGPT, then you have a faculty member who grades that paper with ChatGPT. If that’s the case, then why are we here? What are we doing, right? It doesn’t make any sense.
And so, I know that there has been this, sort of, shift to administrators saying, “Okay, I don’t know what to do. So, I’m going to let the faculty decide. It’s up to the faculty.” And, you know, I published a Substack a couple of months ago, and I basically said, now we have two camps. We have Camp GPT All Day and we have Camp GPT No Way. And what happens, and this pertains directly to admissions, and I think you’ll appreciate this, is the students get stuck in the middle. When students get stuck in the middle, they’re confused about the institutional policies that govern the classes that they’re taking. Now, the faculty do have the right to academic freedom, and I think anybody in higher ed holds that really sacred. But what’s happening here is, because we don’t have a cohesive understanding or adoption of AI usage in higher ed, it’s creating this rift and dichotomy. And my hypothesis is that a student who is confused and isn’t sure is going to vote with their feet. They’re going to withdraw. They’re going to find another institution. They’re going to cut back on their classes.
And as an industry, we work too hard to get students in those seats right now. And I don’t think an institution can afford to confuse its student stakeholders over the lack of a policy. And so, this is why I think the responsibility falls squarely on the administration and the institution, but it’s also the responsibility of the faculty. And here’s why. A student needs a cohesive understanding of what’s okay and what’s not okay. Faculty need to understand what’s okay and what’s not okay. And staff, including admissions, need to know what’s okay and what’s not okay.
And anytime they’re putting… remember, we still have to worry about FERPA. We still have to worry about Title IX. We have ADA. I mean, there’s all kinds of regulatory issues that higher ed needs to be concerned about. And if we have an employee, whether at home or at a work computer, plugging proprietary information into an open source LLM, that’s a problem. That is a security breach. I mean, the case studies go on and on. We’re risking personal student data being leaked. So, I mean, there’s a lot of problems with it.
But the irony is, I think, we need to embrace this technology, but we need to do it responsibly. We need to make sure that we have a set of rules that everybody understands, that everybody can point to and say, “This is okay, and this isn’t okay.” Everybody has to be on the same playing field. We have it in society. We’re driving down the interstate. We know what the rules are. We know what the speed limit is. We know where we can turn left. We know where we can turn right. We know we’re supposed to use our signals when we change lanes. Everybody has a consistent set of rules to make it much easier for everybody, and I don’t think our AI usage should be any different.
[00:11:58] Cadence Ad: Grow your student community, help them stay, and encourage giving with Cadence, higher ed’s premier engagement platform from Mongoose. Designed exclusively for higher ed by higher ed professionals, Cadence helps you engage your audiences with the perfect balance of AI and personal connection. Talk to students, parents, and alumni on their time and how they want. Empower your staff with integrated text and chat inboxes that gather all conversations in one place.
Reach out to learn more about how our best-in-class service, support, and integrations have helped colleges and universities like yours have smarter conversations. From text to chat, make every message count.
[00:12:42] Gil: Right, backing up to what you mentioned before about the rapid rate of adoption and understanding of the urgency, I’ve been working in enrollment marketing for a very long time, and I’m still frustrated at how there are still college websites that are not mobile-responsive, let alone easy to navigate, right? And there are some institutions that really never figured out Facebook, right? And so, you go through these phases of all of these different technological changes, CRM implementations, right? There are still institutions using Access databases for these things, right? And so, that might be a little hyperbolic but not…
[00:13:17] David: Not really.
[00:13:18] Gil: Right. And so, but with AI, the original big concern was, “Oh, AI is going to come and take everybody’s jobs,” right? “And it’s going to automate everything. And we’re all going to lose our jobs.” And I think we moved past that really quickly. At least in the enrollment world, and I’m sure it’s been similar in other areas within the institution, we moved past that into, “Okay, well, I need to understand how to do this to do my job better and do it faster.” We don’t need to worry about AI replacing people’s jobs when we have a shortage of staff, right? And so, if nothing else, we need to use AI to make the staff that we do have more efficient and more effective.
And so, I feel like there’s a more rapid mindset shift when it comes to AI than, as you mentioned, anything else before, where it used to be, “Ah, we can figure out Facebook next year. Ah, our CRM, we’ll live with it, right? Like, oh, we’ll fix that next time, right?” We’ve got to get this right out of the gate because there’s a lot more to lose when it comes to the impact that this can have across all different channels, right?
And so, I guess, one, I’d love your thoughts on that, but two, I’d love for you to talk a little bit more tactically for institutions. What are some of the things that they should do, like, just low-hanging fruit, quick wins, when it comes to this sort of work?
[00:14:36] David: Everybody needs to have a voice. The students need to have a voice in this process. Faculty need to have a voice in this process. Administrators need to have a voice in this process. Because if you don’t have institution-wide buy-in, you’re not going to have any buy-in. I think that universities need to have two sets of policies — an overarching policy, like a general university policy, but then each department needs its own sub-policy. And that’s for two reasons. One, you want to automate what you can automate and streamline what you can streamline. Use the technology to your advantage to increase the efficacy of each department individually. And then, also, the information that each department has access to is different. Admissions has access to a different set of information than, say, academics or even IT.
And so, what that means is there have to be different guardrails in place for the usage of that information in admissions, of that information in academics and IT. So, it’s being able to have the guardrails for each of those independent policies, but also making sure each department has its own. And I know it seems crazy, right? You’re like, “Well, this just affects academics,” right? No, that’s literally just the tip of the iceberg, way on top. This affects every single department within an institution. This affects every vertical in our market. I mean, we’re just talking about the higher education vertical right here, but this is a horizontal. This goes in every direction, in every industry, on every level. And that’s why this has so much more impact than social media is ever going to have. We don’t necessarily see it yet, but I’m telling you, it’s there.
[00:16:24] Gil: Yeah, I think, there’s a lot to tackle and a lot to unpack. And this is a cheap plug for Mongoose because they’re the other sponsor of the podcast, but, you know, one of the things I know they do when it comes to text messaging policy, right, SMS rules and regulations, there’s a lot to do. And admissions knows and development knows and student affairs knows they need to be using texting to engage with students. But what person on the staff is going to be responsible for following the laws and the regulations around the use of text messaging and broadcast text messaging and all those sorts of things, right?
And so, what’s great is companies like Mongoose help with the managing of that process and opt-ins and making sure that it’s clear: these are the right things you need to be doing, these are the lengths of time you can store data, all these sorts of things that, a lot of times, institutions are like, “What are you talking…” and then they can get in a lot of trouble and be open to a lot of problems. And so, it seems like there’s a fit here between AI and policy for something similar, right?
And I know you’ve done some work in this area. I’d love for you to, kind of, share a little bit about how institutions can leverage that work to best put themselves in a secure position.
[00:17:35] David: Yeah, no, there are two different things that they can do, and these are my areas of specialty. One is to institute a policy, in other words, to define the rules. I’ve seen institutions where, when I’ve asked to see their policy, they showed me a piece of paper. And it said, basically, “We let the faculty decide how to use generative AI in the classroom.” And they call that a policy. And I very respectfully, you know, offer some other thoughts on that.
But then the other piece is to implement an institution-wide AI ethics program, which is also something I’m able to do. In fact, I have a community college here in Tampa that is doing just that. So, we’re going through that process, because what they recognize is that this is a systemic issue. Not only do you put a policy together to be an ethical campus, to be a safe campus in terms of safeguarding our IT and our regulatory information, but let’s also train every single student to understand the significance of why this is important and why this matters. So, let’s train them right from the beginning so that everybody’s on the same page.
And not only does this work to help eliminate a lot of those classroom problems where teachers are, you know, either failing a student or escalating concerns to the dean’s office about the use of AI, but there’s also a level of institutional risk mitigation. Because if you think about it like this: if you put a student on notice by sending them through an AI ethics course that they sign at the end, and they have a certificate of completion, then if there’s an issue, you go back and you point to that e-signature and you say, “So, we had this conversation. You understand what the rules are, so you can’t tell me you don’t know what the rules are.”
And the list goes on and on in terms of what happens if you don’t have a policy. I’ve done entire webinars on just that topic alone, and I’ve done entire webinars on what happens when you do institute a policy, because it’s that thorough and it’s that deep. So, I hope that answers your question, Gil.
[00:19:38] Gil: Yeah. I think the biggest thing for most institutions right now… I gave a presentation at a conference a little while ago around the technology adoption curve. And for people who aren’t familiar with the technology adoption curve…
[00:19:50] David: And the bell curve.
[00:19:51] Gil: It’s the bell curve. You’ve got the innovators, the early adopters. Then you’ve got the market, right, the things that cross the chasm, right? And I think AI is really still, from a user’s perspective, in that innovators and early adopters phase. But it’s going to leapfrog over that chasm pretty dang quickly, and institutions need to be ready for it, right?
[00:20:13] Cadence Ad: Discover future applicants, delight enrolled students, and amplify fundraising performance with our Cadence Engagement Platform’s live chat and chatbot solutions. Designed exclusively for higher ed by higher ed professionals, Cadence helps you engage your audiences with the perfect balance of AI and personal connection.
We leverage proactive outreach and anticipate common roadblocks, knowing the most significant decisions often start with the smallest conversations. Our powerful AI ensures instant support and is smart enough to know exactly when to hand off to a staff member. And if nobody is available, it allows for easy follow-up.
Effortlessly integrated with your website, we proudly feature an industry-leading 85% self-service rate. It’s never been easier to make every message count.
[00:21:06] Gil: I feel like this is where that rate of pace in higher ed and the rate of growth of technology always have this, like, little butting-heads environment. This is the one, as we mentioned earlier, where I feel like higher ed’s being pulled along more quickly than anything else because they have to be. But for a lot of folks, especially when I think about, like, marketing, admissions, recruitment, they’re, kind of, stuck in that early adopter, innovator phase, where it’s like, “Oh, my CRM has a button that I can click that says, rewrite this email.” That’s my use of AI, right? Or, “My CRM has a draft response for me. Go ahead, draft a response.”
There’s a lot more, but those things are going to fall under institutional policy. So, I guess, from someone with your level of expertise to someone who’s, kind of, stalled in that early adopter, just-playing-with-toys phase, versus making it something that’s integral and integrated to their work: what’s that process going to look like, and how are people going to best make that transition?
[00:22:09] David: The administration needs to take a stand and make a decision. I think that’s the first thing that needs to happen. I think there needs to be campus-wide conversations about the topic. Everybody needs to have a voice in this process, and there needs to be this recognition that a policy is going to be a lot more intricate than you probably expect it to be.
And the one thing that I hear a lot is, “Can’t we just tuck this under the honor code? We already have an academic integrity code.” And, well, technically, probably, yes. But I would argue that this has so much specificity to it that you probably want to define what those rules are, because is it academically dishonest if a faculty member uses GPT to grade the paper?
So, you know, it’s all the little things that you didn’t necessarily think of. And one thing that I say a lot is we still don’t know what we don’t know. And it seems ironic, but it’s true, because as the technologies evolve quickly, we’re going to come up with scenarios that we didn’t even know were going to be an issue. Incidents are going to come up that we’re like, “Wow, like, just when I thought…” I remember one thing I’d say as a dean all the time is, “Just when I thought I’d seen everything, I’m dealing with this, whatever this is.”
And so, I think AI is going to do a lot of that. And then we’re like, “Oh, well, hey, wait, can’t we use AI to do this? Or maybe it can help us with that.” And it’s not about trying to bog our lives down in bureaucracy and rules. And I think that’s part of the issue, is it still has to be nimble and fluid, and there has to be the ability to change quickly, to pivot on a dime. So, there are ways to do that, but that’s also part of the process that you have to, kind of, bake into the solution, much like a constitution has the ability to be amended. You know, you can change the rules as you go – you should be able to – because the last thing we want is our policies to be stuck in a binder that nobody ever sees, just collecting dust.
[00:24:03] Gil: Yep. And I think that’s the key, right, is making it understood, accessible, available, but also nimble, right? Because I know that being able to change as things evolve opens up, “Oh, the rule changed and I didn’t know,” right?
[00:24:18] David: Right.
[00:24:18] Gil: And so, there’s got to be an effective method for articulating those changes. The good news is that we’re so, I hate to say wild, wild west, but we are still, kind of, in the wild, wild west at many institutions. And so, this is the opportunity to take a leadership position for a lot of folks. And so, I feel like, again, someone with your expertise, the opportunity is to, kind of, leverage that and say, “Hey, we need some help with the first draft, right? Or we needed some help with that step one.”
So, taking it back, the last question I’ll have is, for institutions… I say it’s the last question I have, but I can’t guarantee that.
[00:24:49] David: It’s all good.
[00:24:50] Gil: The rules might change as we go, but for institutions that are looking for help and support with making that first draft, what are some things that you would encourage them to do when it comes to, you know, obviously, getting in touch with you, people like you, but, like, what’s step one?
[00:25:04] David: So, one thing that I hear a lot is people ask me, “Well, what can AI do for me? Or what can AI do for my institution?” And I always have to correct them contextually. And I’ll say, “It’s not about what AI can do for you, it’s, what problem do you want to solve? In other words, what is your problem statement?”
So, the first step that they need to understand is, what is their problem statement? Because the technology is designed to solve problems, and it wants to solve problems. So, you have to give it a problem to solve. So, by defining your problem statement, then, and only then, can you actually start to take steps towards getting a cohesive voice and drafting a policy that is going to make sense.
Because everybody says, “Okay, let’s just go ahead and put pen to paper and let’s go. You can’t do this, you can’t do this.” No. What is your problem statement? What are you trying to solve? Once you know that, then you have to reverse-engineer the solution, but then you have to talk about it. You have to figure out and get a voice from everybody who’s involved in that process. For example, when you’re doing your individual policies at a departmental level, that means all of admissions needs to be involved in that. All the DOAs, all the ADOAs need to be involved in this process. Say, “Okay, well, this is what we would like you to do, but these are the limitations of what we’re able to do. So, therefore, this is the rule that we should write.”
And that’s, sort of, kind of, a very macro level of how that process works. I get a lot of, you know, “Well, can I get a sample? Can I get a template that I can just take out and fill in the blanks?” And truthfully, I would love to just email you a template and say, “Here you go,” but the thing is, it’s going to be different for every institution, and every department in every institution needs to look at this individually. Because there are way too many variables to have any sort of standardized language or verbiage that is going to make sense as a one-size-fits-all. Imagine an institution the size of 1,500, a small private liberal arts college, as opposed to a major university of over 100,000 students, right? You know, some of the basics might be the same, but there are going to be a lot of fundamental differences.
[00:27:14] Gil: Yeah. And I think that’s one thing folks struggle with. It’s, like, “I need a template. I need to just be able to plop and drop, copy and paste,” right? And it’s a little bit more complex than that, which is why I think you get a partner that can help and advise and support, because they’ve got the knowledge base and the knowledge bank of those different scenarios and those different institutions and can tailor it to you versus-
[00:27:33] David: That’s right.
[00:27:33] Gil: … a copy-and-paste approach, which I think a lot of the, like, big, I call them the conglomerate big box consulting firms will do, right? It’s like, “Oh, everything just applies,” but it’s more nuanced than that.
[00:27:44] David: I like to consider myself, sort of, a boutique consultancy, in a sense, because it really is very specific and very hands-on. In fact, I’m working with a small community college in Western Maryland right now, and I’m helping them figure out what their policy looks like. But it’s a process. And, you know, you have to go through the process. You have to talk to the people. You have to have consultations with the stakeholders, right? Talking to the administrators, talking to the faculty. And then when school starts, in early September, once they get rolling, we’re going to start having conversations with the students before you can actually really etch anything in stone. And that’s just how it needs to be.
[00:28:22] Gil: Yeah, absolutely. So, I appreciate your time and your input and all the work that you do.
[00:28:28] David: My pleasure.
[00:28:28] Gil: I love your… you know, again, we’ll put links in the episode notes to all your resources, your website, as well as your newsletter. And so, for folks who want to get in touch with you and continue this conversation, what are the best methods to do that?
[00:28:43] David: Honestly, just a simple email. It’s very straightforward. It’s admin, A-D-M-I-N, @edupolicy.ai, admin@edupolicy.ai. Either I or my business development director will reach back out to you directly. We really encourage you to just let us know. Even if you have a really simple question or a basic question, we’re happy to give you whatever advice we can. Look at our website, edupolicy.ai. There’s a lot of good stuff on there. We have bespoke AI ethics content for higher ed — in other words, AI ethics courses specifically for administrators, specifically for higher ed faculty and higher ed students. And we also have AI ethics courses for admissions officers and admissions reps as well. So, we have it all. And we’re actually in the process of building more and more of them.
[00:29:33] Gil: Awesome. Great. So, David, appreciate the time. Appreciate you being here. And to our listeners…
[00:29:38] David: Pleasure. Thank you, Gil, for having me.
[00:29:39] Gil: Yeah, absolutely. And to our listeners, we’ll see you next time on FYI.
[00:29:43] David: Thank you again. Goodbye!
[00:29:50] Gil: Hi, everyone. This is Gil with a quick post-episode update. In a couple of weeks, we are going to be wrapping up this season of FYI. At the end of this season, I’ll be taking a step back from my hosting duties to focus on my core business, as well as my family.
It has been a privilege to support this podcast and be a part of so many amazing conversations. I want to thank Mongoose for the great opportunity to be the host, as well as their continued support of hosting this content on their blog, as well as anywhere you find podcasts.
We hope you continue to find these conversations supportive and constructive for your enrollment outcomes and your needs.
Thank you so much! And we’ll see you next time on FYI.
[00:30:34] Cadence Ad: Thoughtfully nurture applicants, personalize retention efforts, and exceed fundraising goals with our Cadence Engagement Platform’s text messaging solutions. Designed exclusively for higher ed by higher ed professionals, Cadence helps you engage your audiences with the perfect balance of AI and personal connection.
We leverage an intuitively designed interface and easy-to-use texting templates, so you can have targeted conversations or scale up to expand your reach. Our powerful smart messaging can respond automatically — exactly how you would — and to measure progress, track your campaigns with unparalleled reports and analytics.
Effectively meet your community where they are, as we proudly feature an industry-leading 95% read rate within three minutes. It’s never been easier to make every message count.