Ready.Set.Midmarket! Podcast: AI Governance For The Midmarket
In this episode, Adam Dennison, vice president of midsize enterprise services at The Channel Company; Samara Lynn, senior editor of MES Computing; Jay Ferro, chief information, technology and product officer at Clario; and Blaine Carter, global CIO at FranklinCovey, discuss the critical topic of AI governance. They explore its importance in business verticals like healthcare, and how organizations can effectively implement AI governance frameworks no matter the industry.
The conversation emphasizes the need for collaboration across departments, the significance of data governance, and the balance between innovation and compliance. The discussion also focuses on practical steps for midmarket leaders to begin their AI governance journey and highlights the importance of monitoring AI models to ensure compliance and effectiveness.
The full episode can be watched on YouTube and heard on Spotify and Apple Podcasts.
Previous RSM! episodes are here.
Transcript:
Adam Dennison:
Hello, everyone. Welcome to another episode of Ready, Set, Midmarket! This also happens to be Samara’s and my final episode of 2025, so a very, very successful year in our podcast launch. I’m Adam Dennison, vice president of Midsize Enterprise Services with The Channel Company. And joining me, as always, is editorial director of MES Computing, Samara Lynn.
And we have two fabulous guests today to talk about a very interesting topic, AI governance. It might not be the coolest thing to talk about when it comes to AI, but it certainly is
crucial to the success of your business and your use of AI. I’d like to welcome Jay Ferro. He’s the chief information, technology and product officer with Clario.
Jay Ferro:
Hey, Adam, great to see you. Thanks for having me.
Adam Dennison:
Absolutely. And Blaine Carter, he is a global CIO with FranklinCovey.
Blaine Carter:
Thanks, Adam. Happy to be here.
Adam Dennison:
Thank you so much. And incidentally, both of these gentlemen sit on Samara’s and my MES advisory board as well. They do an awful lot for us, and we appreciate everything you two bring to the table. Before we get started, why don’t you introduce yourselves a little more and let us know what Clario is. I think a lot of us have heard of FranklinCovey but might not know the FranklinCovey of today.
So Jay, why don’t you just start, kind of let us know a little bit more about Clario, your role, and then we’ll go to you, Blaine, and then we’ll dig into the topic.
Jay Ferro:
Thanks, Adam. Clario is a clinical trial technology company. We do endpoint data collection. Whenever you are in a clinical trial, generally speaking, you are trying to make sure that the therapeutic, the drug, the molecule, whatever, is safe, of course, and effective and improves lives. Our technology helps large pharmaceutical organizations, biotechs, et cetera, collect the data they need, then analyze and interpret that data to accelerate both the trial process and decision making. All the technology that goes into collecting that data, whether it’s respiratory, cardiac, medical imaging, behavioral data, et cetera, requires a whole bunch of technology in countries all around the world. Augmented with science and AI, we build all of that technology and deploy it all around.
Adam Dennison:
Perfect, thank you. Blaine?
Blaine Carter:
Yeah, so if you have heard of FranklinCovey, most of you might know FranklinCovey as the planner people. Obviously, we were known for that back in the nineties. I see Jay smiling; maybe it was his first job, when he sat down in a room and someone there had that planner. FranklinCovey has actually evolved from that. We no longer do the planner business, but we carry those same principles around leadership, trust, and time management into more of a training and business services model.
Our primary revenue source right now is actually an online platform that delivers a lot of our systems around the world. We sell in about 160 countries globally and have been doing that since about 2017. And we’re excited for the AI revolution as it were to hopefully help improve people’s lives, both from a training standpoint as well as from an everyday standpoint. And we can hopefully all avoid the AI overlords eventually.
Adam Dennison:
Yeah, so it’s not your father’s FranklinCovey anymore.
Blaine Carter:
It’s not. A lot of the principles we find are timeless, but the delivery mechanisms have definitely changed over the years.
Adam Dennison:
Yep, absolutely.
Jay Ferro:
Those planners were worth their weight in gold, Blaine. Let me tell you something.
Blaine Carter:
I hear you man, and we still have people come to -- I have family members that come to me and say, hey, can I still get a discount on that planner? And it’s like, you know, that’s going to be real tough for me. If we had a time machine, maybe we could.
Jay Ferro:
I’ll tell you what, I mean, I’m a guy that loves office supplies and all of that. I know that dates me, but whenever you get a fresh notebook or a fresh FranklinCovey planner, man, the whole world is a possibility when you have this blank thing. You’re like, oh, I’m going to be unstoppable now. Thank you, Phil. See, Adam knows, Adam knows.
Adam Dennison:
A production of one is not that cheap.
Blaine Carter:
You are unstoppable. You keep that.
Adam Dennison:
I have one page left in my notebook for the year. One. I have to book a couple more meetings early next week, right? So let’s jump into the topic. Look, AI governance, it’s something that many end users probably don’t think about. I’m an end user; I don’t think about it very often. Some may not care about it. How would you define AI governance, and what is its importance to your organization? I’ll just throw that question out to both of you to chat through.
Blaine Carter:
I’m happy to go first. I think AI governance is naturally an extension of data governance. In my mind, those things go hand in hand, and one kind of leads to the other. If your organization didn’t have a great data governance program, chances are AI governance is going to be a very difficult task, because AI is basically just making data accessible in a way that’s much more humanly consumable. So if you’re maybe a little behind on the data governance side, I think that’s probably a natural starting point. But there are a few other key points you have to discuss around any kind of risk-based system, which I think AI really is at the base level. With a risk-based system, you know what industry you’re in. Jay is obviously in healthcare; there’s going to be a much higher lift on HIPAA health information there. But it comes down to: how do you make sure that you’re governing AI in the same way you would any other business system? Are you doing risk assessments? Do you have clear principles or clear policies around AI usage? What compliance do you have to be part of? And then how are you taking that and developing a training and awareness program for your team, and for those in your company, that enables them to do things with AI, if it’s relevant, without putting the business at risk? Those are some of the big pieces that touch on a governance program: how do you do that not only with a small pilot group, but at scale? And that’s where I think the industry is really changing rapidly with AI. Week in and week out, there’s something new.
Adam Dennison:
And Jay, give us that healthcare spin on it too, from your perspective.
Jay Ferro:
Yeah, of course. You nailed it. In the life sciences space in particular, we are very highly regulated, rightly so. People expect the integrity of the clinical trial process and drug discovery, and appropriately, they expect that there are controls, including in the use of AI. Blaine did a really nice job of framing it. What AI governance is not: it is not just an ethics committee. It is not a legal document that sits on a shelf. It’s not about banning AI. Believe it or not, I still hear that every single day from old-school CIOs who talk about banning it: I don’t know what it is, so therefore I’m going to ban it. And it is not an IT-only responsibility. It’s an organizational lift. The way we’ve approached it is really in six core buckets, in my view. I’m sure my general counsel, my chief quality officer, and my chief medical officer would add quite a bit here. Number one, what is AI allowed to do? Which use cases are approved? What is restricted versus prohibited?
Number two, what data can AI access? Blaine, you touched on it. It can’t just be a free-for-all. What is allowed: PHI, PII, proprietary, public? Is it encrypted? Is it pseudonymized? Is it anonymized? What data can be used to train models, et cetera? Number three, who’s accountable? Who owns the use case? Who signs off? In our case, a lot of people sign off before anything goes to production, whether it’s me, my CISO, my chief medical officer, my head of quality, data science, and certainly our partner general counsel, Laura Misztal, who’s amazing and has really helped bring a lot of this stuff to life. Number four, how risk is managed, and Blaine touched on that, so I won’t belabor it, but it is about risk management. Number five, how do you check all of this? How is AI audited and explained? We have to explain our models. Can the outputs be traced back to the inputs?
You know, we can’t just throw a model in production and go, trust us, it’s in there. You’ve got to go back and audit and explain the quality of the code and the model. It needs to be explainable to customers, regulators, potentially courts. Are things logged and reviewed? And then lastly, and I think this one gets overlooked a lot: how are your vendors, your partners, and other third parties utilizing AI?
Is it embedded in purchased tools? Are you overlooking that? How do they handle your data? We are held accountable in many ways for our third-party partnerships, so if a customer comes to us, there’s an expectation that we’re managing that as well. I would say this all sounds like heavy lifting and a lot of red tape, but I promise to all of the listeners out there, if you set up these guardrails right at the beginning, or toward the beginning,
it will make your life so much easier. And what we have found is that it actually speeds things up. It speeds up innovation because there’s a sense of partnership throughout the organization and certainly with our customers that we’re doing AI correctly.
Adam Dennison:
So that was going to be kind of my follow-up question. It does sound like a heavy lift. And again, I’m an end user, right? I’m not an IT leader. Are you able to position it as, if we do this, we can be more innovative, rather than just the risk mitigation aspect of it? Some people might be like, you know what, Jay, we’ll get to that; I want to do the cool stuff.
Jay Ferro:
Yeah, it’s funny you say that. When I joined in 2020, we had a handful of models in production. We got an early start in AI: back in 2018, we had our first scientifically validated algorithm, and we’ve expedited it since then. We tend not to look at model count much anymore. We look more at impact, beyond just, hey, we have 140 scientifically validated models in production.
That’s nice. It shows growth. But I’m more interested in impact. The reason we can do that, and the reason we’ve had such explosive growth, is because we did it the right way and because we brought everybody along with us. It wasn’t Jay with a wild idea going, trust me, it’s going to be great, and I’ll throw it to you once a quarter and you’ll review it. It was forging partnerships with our general counsel, our chief medical officer, science ... my CISO. At the beginning, it requires a little bit of upfront work, but once everybody gets comfortable and has a how-we-can, not why-we-can’t, mindset, it speeds things up, Adam. You pay a little bit of a toll at the beginning, but you more than make up for it along the way.
Adam Dennison:
Blaine, what about on your side? How do you position these types of discussions with your business partners?
Blaine Carter:
You know, I’m going to build on something Jay mentioned there. He says you pay a little toll. I would maybe rephrase that as: you put a deposit down on a payout that you get in the future. And I think this is one of those places where you have to put in a deposit, right? We actually teach a course around what we call the speed of trust. And I think trust in AI has historically been very low.
Jay Ferro:
I like that better. Yeah, that’s better.
Blaine Carter:
You know, people ask, how many R’s are in strawberry, or how many states in the United States have the letter R, and it hallucinates as it tries to be helpful. Historically, especially at an enterprise level, that trust level for AI in general has left a lot to be desired, but that doesn’t mean it’s something you can ignore. That’s what Jay was saying there: you have to make that deposit upfront. I think that goes a long way with stakeholders, to say, you know what, we’re not going to put our head in the sand. We’re not going to ban AI.
What we want to do, though, is build a trusting relationship with our business units and legal and compliance, to make sure that we’re all on the same path together. As we talk about this, I think that’s one of the things to highlight in your company: as a CIO, you can bring in a Gartner or a consultant, and they can tell you everything about the industry, but the role of the CIO is to understand both the business and the technology.
And that’s where you can’t have a consultant totally replace IT leaders, because you’re the one that really does sit right at the crest of that wave, making sure you have an understanding of both sides. That way you can have that trusted relationship with the business while keeping abreast of all the compliance and things like that. It’s not an easy job, but it’s your job, and if it were easy, then everyone would do it. So that’s where I think we can provide a lot of value as IT leaders: to surf that wave, keep that trust going, and help everyone see the vision of where we, as a company, can use AI to better, again in Jay’s words, have that impact. Because I think that’s really where AI shines, where it’s able to have an impact on something, whether it’s your business internally or with your customers.
Jay Ferro:
Yeah, otherwise why are you doing it, right? Otherwise, it’s a weird science project that only guys like you and I care about.
Adam Dennison:
I have another follow-up question, then I want Samara to jump in with her questions. What sort of processes do you have in place for monitoring AI models and usage? Do you have tools in place to do this? Is it more manual? I know Jay, you said it a little bit ago that there’s some manual pieces here where everyone takes a look at something as it’s being rolled out or used. What’s that balance look like between tech and manual oversight?
Jay Ferro:
Yeah, it’s a combination of both. We regularly guard against hallucination, bias, those types of things. A lot of our models tend to be closed at some point, because you don’t want to continually refine a model throughout the clinical trial process as you’re collecting data over time. So you tend to close the model so that it’s consistent, at least through that particular trial, which is super helpful. But we have an army of data folks, data scientists, and domain experts that help us along with this. So it’s a combination of both, Adam.
Blaine Carter:
I can say from my perspective, obviously, we’re dealing with what we call important information, but it’s not clinical data or healthcare information. So we, as a company, evaluate our risk versus how quickly we want to move on things. For FranklinCovey, we are taking these models and doing what’s called grounding. It’s a basic AI term that is starting to take hold: making sure you ground your models in relevant data so that you keep them from having to rely on inference or, even worse, web searches to fill that in. For us, it’s making sure that the model you’re using is very much aligned with the use case you want. For a lot of people, if you’re looking for, as an example, a general knowledge bot, where you’re like, I need to be able to ask a multitude of questions and get a wide variety of answers back, you’re necessarily going to have a risk there, because you’re having it be such a wide-based model, versus where it starts to become very narrowly focused. What I find truly valuable is when, like Jay was saying, you’re really able to narrow that model down to perform a specific action; in doing that, you get a much deeper level of understanding from that AI model. And so we want to make sure the model matches the use case.
Jay Ferro:
And it really varies by the type of model it is. I mean, you have always-on monitoring, you have input-output validation with guardrails in production. Certainly with us, when it comes to the clinical trial models, we have a human-in-the-loop review, which is consistent. Now, that doesn’t apply to things that are operational, you know, hey, we’re doing document analysis internally or reviewing contracts, that kind of thing. Yeah, you have human oversight, but you’re not really touching a clinical trial process, so there are slightly fewer regulatory hurdles when it comes to things like that. And then you have periodic revalidation and recertification of those models, for which we’ve defined, depending on the model, a regular cadence, sometimes quarterly, sometimes monthly, sometimes event-based on a new data source, whatever, where you’re saying, okay, every quarter we’re going to revalidate and recertify the model.
It’s going to include bias and fairness checks, performance comparison versus baseline, that type of thing. So it really depends on, and I think Blaine said it, the use case, the type of model it is, and the regulatory burden you have on that particular model.
Adam Dennison:
Thank you. Samara, I want to get you into the discussion here.
Samara Lynn:
Yeah, sure. You know, Jay, I can relate to you. Back in the early 2000s, I was an IT director at a health care organization, and I remember the administrator handing me this binder of HIPAA regulations. I was completely overwhelmed; I had no idea what to do. I know you and Blaine are in two very different verticals, but what three steps would you say a midmarket leader who does not have experience in governance or regulation can take now to start their journey on AI governance?
Jay Ferro:
I think to get started, don’t worry about boiling the ocean right out of the gate. Acknowledge reality: if you’re just getting started, you have to assume that your employees are using ChatGPT, Copilot, and embedded AI in tools every single day. Then identify owners within the organization and find allies.
So if you’re the CIO, or you’re the VP, the head of IT, the CTO, the CDO, whatever, or you’re in his or her organization, assign a clear owner and find some counterparts within the organization. With us, I’d say four years ago when I got started, it was the general counsel. It was our chief medical officer.
So it was Lauren Misztal, Todd Rudo, our head of quality, a guy named Todd Kisner, a number of others, and Murtaza Nisar, our CISO, and we have flexed since then based on the domain knowledge we’ve needed. And of course, now our chief AI officer, Marko Topalovic, who really is the day-to-day point person on quite a bit of this. So get started somewhere, and don’t sacrifice good for the sake of perfection.
I know the next thing that somebody is going to tell me is, well, you got to get into data classification and all that stuff. That’s fine.
Start with a rudimentary AI use policy: what is approved, what is restricted, what is prohibited. Let employees know that you’re working on it, and continue to refine it. I think if you get started, it picks up speed pretty quickly. Ironically, it’s not the technology that is the heavy lift right out of the gate; it tends to be just getting started and putting some sort of guardrails around things. And then it picks up speed, I think, pretty quickly.
Adam Dennison:
Blaine, any suggestions you’d have for someone to get going?
Blaine Carter:
Yeah, I like a lot of Jay’s suggestions and would align with those. One of the things we did very early on is set up basically two of what we call steering committees. One is focused on compliance, regulatory, and legal: how do we wrap our arms around the risk of AI? And number two is a little bit broader, around what use cases make a lot of sense for your particular company. A lot of times you would think that the executive team and the high-level leadership are the ones that can really direct that, but what we found is that a lot of our best use cases were derived from people who feel the pain day to day. It’s those individual contributors and frontline managers that have had a lot of the great ideas, because they’re the ones in the trenches day in and day out. They’re the ones that would say, you know what, I use ChatGPT to write my mom a birthday card; I can see a little bit of what it can do with just that limited knowledge, and they can see the applicability in their organization. So those committees really help on the operational side. And then obviously, one of those steps is to ask: what is the risk appetite for your company? Are you a startup where you’re able to take a lot of risks, you’re able to fail a lot, and that’s acceptable in your culture? Or are you highly regulated, where you have to really think before you leap in a lot of cases? That changes the way you model it. So I would make sure that you’re matching up your company’s culture, infrastructure, and resources with the pace at which you can do AI. I’ve seen a lot of failed AI products, not because the AI failed, but because the use case was not clearly defined upfront. People said, oh, we’re going to solve 80 percent of our operational problems with AI, and that was as deep as the definition got.
Those are some of the steps I would make sure you’re taking: really dig into your use cases, and make sure this is a team sport. As an IT leader, this is not yours to carry alone. In fact, it should never be yours to carry alone. You really should have a team that can help you succeed and get across the finish line.
Jay Ferro:
I love it. And once you have all of this in place, classify your AI use cases by risk, and Blaine nailed it perfectly. But you don’t even have to get into some sort of arcane 80-page manual right out of the gate. Start with simple tiering: low, medium, high risk. Samara, you talked about my industry. We know what we can’t do, right? It’s very clear, and what we can do is very highly watched. So with us, it was just getting started with, okay, we can’t use that, we don’t want to touch that. So what can we do? And then getting into locking down inappropriate data so that it cannot be used in model development or AI, and all of those things. I sat with the CIO of a midmarket organization at a roundtable not too long ago. This was maybe nine months ago, so it wasn’t like it was the beginning of the AI revolution.
And we were all going around the room, kind of where we are, and they said, gosh, we’ve spent nine or 10 months just kind of developing our reasonable use policy. Blaine’s reaction just now was mine in the room. I was like, you’ve got to be kidding; the irony is you could have had ChatGPT create one for you in like 30 seconds. How in the heck are you taking nine months to just create one? Have it 80 percent correct and tweak it weekly if you need to, but start. And I think, Samara, going back to your comment about being in IT, one of the things that works against us is we are very binary at times. It either is or it isn’t. It’s a one or a zero. It’s either perfect or it’s not. And we’ve got to be okay with navigating a little bit in the gray and then pivoting quickly.
Adam Dennison:
So, a few minutes left. You two are way further along than much of our audience; I think that’s apparent. You’ve got your guardrails set up, but there are still employees at your organizations. Have you had any interesting mishaps, things like that? How often do you have to say, hey, get back within these guardrails? How often is that popping up?
Blaine Carter:
Jay, I’m sure you have stories like that as well. I will give one small story that is interesting. We were developing a model for our coach; it’s a coaching platform we’re building. We were going through, and we had grounded it in all of FranklinCovey’s intellectual property, and we had added all this information in there, and we were like, hey, this coach is doing really well at running through scenarios with people and giving them guidelines and things like that. We were basically a week from going live with the coach, and the team said, you know what, let’s just do one more review. They went back in and said, let’s look at this model one more time. And they realized that the model they were ready to go into production with was not the model that was grounded in any of that stuff. All the work had been done, but one developer had changed one thing, and we were ready to go live with a model that was going to produce wildly awful results ... all of the awful things you hear about AI, we were one button click away from going live with that model rather than the one that had all the compliance work done. So sometimes it’s the little things that catch you. Even though you have all the structure in the world, and you have all the regulatory compliance, and everyone had signed off on it, sometimes it’s that little thing you weren’t expecting. So that would be one story from us.
Adam Dennison:
That person got a raise and a promotion.
Blaine Carter:
Right? There was a lot of back-patting on that one.
Jay Ferro:
Luckily, we haven’t had anything catastrophic. We’ve had the usual: people who bring financial models that they ran through Copilot or something like that, where it made errors, or they had it rewrite something and it was awful. Luckily, nothing customer-facing or anything like that. But you’re like, that can’t be right. And they go, well, I ran it by Copilot. And you’re like, dude, what are you doing, man? You’ve got to double-check your work.
Adam Dennison:
Try again.
Yeah, we heard an interesting one from one of our MES members who happens to work in the legal field: an attorney wrote an opinion via ChatGPT, and that didn’t go over too well.
Jay Ferro:
No, no, no, no, no. That’s not a good thing. That’s not good.
Adam Dennison:
That’s gotta be in the original policy somewhere.
Jay Ferro:
Yeah, that’s Hall of Fame bad. Yeah, that’s bad.
Adam Dennison:
So, to all three of you, any parting comments before we close? This has been great. And again, I think you two are much further along than most. We’ll see you at our events, and I’ll tell people who aren’t that far along to seek you out next year at MES.
Jay Ferro:
Please do. Hey, listen, we’ve all got to stick together in our space. You know Blaine and I are really shy guys, so we’re really hard to get to open up. But no, please, yeah, have them seek us out. We’d love to share war stories and help in any way.
Blaine Carter:
Absolutely, I think Jay’s right on that. I also think, like Jay said, don’t let perfection be what’s stopping you from getting started. We’re all trying to be in this together. I think Jay and I would be very open to sharing what’s worked and what hasn’t worked. There are no secrets here; we’re not trying to gatekeep anything. I think we could all win on this together.
Jay Ferro:
I agree.
Adam Dennison:
Even Mike from Gartner, at one of our sessions, said that getting the data ready for AI is never going to be exactly perfect; get it to where you’re comfortable and get started, so you’re not the person saying, ‘it’s been 10 months and I haven’t even written a simple policy.’ But Jay, Blaine, thank you so much. Samara and I really appreciate you joining us again for our final Ready.Set.Midmarket! of the year. We will absolutely see you in the new year, and thank you so much for your insights.
Blaine Carter:
Congratulations, that was a big win for you guys. Here’s to a good year ahead, and congratulations on the year so far.
Jay Ferro:
Thank you all, I really appreciate it. Happy holidays.