5 Rules To Getting Started With AI Governance

Two midmarket IT leaders share their views on implementing AI governance.

With many AI projects predicted to shift from pilots to production this year, it’s time for midmarket IT leaders to create a solid strategy for AI governance.

As of Q4 2025, 75 percent of organizations had begun implementing AI, according to a report by research firm Gartner.

“But AI adoption and the rise of agentic AI — which can act autonomously — have surfaced ethical and business issues, from social responsibility and fairness to safety and sustainability,” Gartner also advised.

That is why AI governance is a crucial component of AI integration.

But AI is still a new technology for many organizations, especially in the midmarket. So how do IT leaders begin their governance journeys?

Two midmarket leaders shared their advice and experience with AI governance on Ready, Set, Midmarket!, MES’ podcast for midmarket IT leaders.

Jay Ferro, chief information, technology and product officer at Clario, and Blaine Carter, global CIO of FranklinCovey, offered a few general guidelines on AI governance for other midmarket tech leaders.

5 Rules For Implementing AI Governance

“If you and your organization didn’t have a great data governance program, chances are AI governance is going to be a very difficult task because really AI is basically just making data accessible in a way that’s much more humanly consumable,” Carter said.

To properly implement AI governance, focusing on the data that AI models are trained on is a must, according to IBM. That includes understanding the “origin, sensitivity, and lifecycle” of data in your organization, IBM advised.

AI’s access to your corporate data “can’t just be a free for all,” said Ferro.

“What is allowed? PHI, PII, proprietary, public? Is it encrypted? Is it pseudonymized? Is it anonymized? What data can be used to train models, et cetera? Who’s accountable? Who owns the use case? Who signs off?” he added.
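Ferro’s questions above amount to a data classification policy. As a minimal sketch, assuming a simple in-house sensitivity scheme (the labels, levels and the anonymization rule here are all illustrative, not any specific vendor’s policy), that policy could look like:

```python
from enum import Enum

class Sensitivity(Enum):
    """Illustrative sensitivity labels, least to most sensitive."""
    PUBLIC = 0
    PROPRIETARY = 1
    PII = 2
    PHI = 3

# Hypothetical policy: only data at or below this level may train models
MAX_TRAINING_SENSITIVITY = Sensitivity.PROPRIETARY

def allowed_for_training(label: Sensitivity, anonymized: bool = False) -> bool:
    """Return True if data with this label may be used to train a model.
    Anonymized data is treated as one level less sensitive (an assumed rule)."""
    level = label.value - 1 if anonymized else label.value
    return level <= MAX_TRAINING_SENSITIVITY.value

# Examples of the policy in action:
allowed_for_training(Sensitivity.PUBLIC)                  # True
allowed_for_training(Sensitivity.PHI)                     # False
allowed_for_training(Sensitivity.PII, anonymized=True)    # True
```

The point is not the specific levels but that the answers to “what is allowed?” are written down as an explicit, testable rule rather than left to individual judgment.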

Good AI governance mandates strong reins on the data AI can access within your organization.

“If an AI model relies on sensitive data, you don’t want just anyone poking around in that data. Establish role-based access controls (RBAC), multi-factor authentication (MFA), and audit logs to track data access. AI systems should also be monitored for unauthorized data usage—because even an algorithm can inadvertently expose or access unauthorized data if not properly monitored,” advises the Project Management Institute in a blog post.
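The RBAC-plus-audit-log pattern PMI describes can be sketched in a few lines. This is an illustrative stand-in, assuming hypothetical role names and datasets; in production the grants would come from your IAM system and the log would go to a tamper-evident store:

```python
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("ai.data.audit")

# Illustrative role-to-dataset grants (in practice, pulled from IAM)
ROLE_GRANTS = {
    "data_scientist": {"sales_history", "support_tickets"},
    "clinical_analyst": {"sales_history", "patient_records"},
}

def access_dataset(user: str, role: str, dataset: str) -> bool:
    """Check a role-based grant and write an audit entry either way,
    so denied attempts are just as visible as granted ones."""
    allowed = dataset in ROLE_GRANTS.get(role, set())
    audit.info("%s user=%s role=%s dataset=%s allowed=%s",
               datetime.now(timezone.utc).isoformat(), user, role, dataset, allowed)
    return allowed

# A data scientist probing patient records is denied, and the attempt is logged
access_dataset("jdoe", "data_scientist", "patient_records")   # False
```

Logging the refusals, not just the grants, is what makes the “monitored for unauthorized data usage” part of PMI’s advice possible.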

“A basic AI term that is starting to take hold is grounding: making sure you ground your models in relevant data so that you keep them from having to rely on inference or, even worse, web searches to fill that in. For us, it’s making sure that the model you’re using is very much aligned with the use case you want. If you’re looking for, as an example, a general knowledge bot, where you need to be able to ask a multitude of questions and get a wide variety of answers back, you’re necessarily going to have a risk there, because you’re having it be such a wide-based model rather than one that’s narrowly focused,” Carter said.
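The grounding Carter describes is, in practice, often just careful prompt construction: give the model the approved passages and tell it to answer only from them. A minimal sketch, assuming a hypothetical helper (no specific LLM vendor or retrieval system is implied):

```python
def grounded_prompt(question: str, passages: list[str]) -> str:
    """Build a prompt that restricts the model to the supplied passages,
    instructing it to refuse rather than infer or search the web."""
    context = "\n".join(f"- {p}" for p in passages)
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, reply \"I don't know.\"\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

prompt = grounded_prompt(
    "How many PTO days do employees accrue?",
    ["Employees accrue 1.5 PTO days per month."],
)
```

The narrower and more relevant the passages, the smaller the risk Carter flags with wide-based, general-knowledge bots.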

“I think you have to assume, if you’re just getting started, that your employees are using ChatGPT, Copilot and embedded AI tools every single day,” Ferro said.

“If you’re the CIO or you’re the VP, you’re the head of IT, the CTO, the CDO, whatever, or you’re in his or her organization, assign a clear owner and find some counterparts within the organization,” he added.

“One of the things we did very early on is set up what we basically call steering committees,” Carter said.

“One is from a compliance, regulatory and legal perspective: how do we wrap our arms around the risk of AI? And number two is a little bit of a broader committee, and that’s around what use cases make a lot of sense for your particular company. A lot of times you would think that the executive team and the high-level leadership are the ones that can really direct that. But what we found is that a lot of our best use cases were derived from people who feel the pain day to day. It’s those individual contributors and frontline managers who have had a lot of the great ideas, because they’re the ones in the trenches day in and day out,” he said.