AI in the Midmarket: Why ‘Doing Nothing’ Is Now the Riskiest Strategy

The midmarket is agile enough to adopt AI faster than large enterprises. That’s a boon and a liability, FranklinCovey CIO Blaine Carter cautioned.

(Blaine Carter, Global CIO, FranklinCovey)

Artificial intelligence is already reshaping the way midmarket organizations operate – and how they are attacked.

Yet reckless adoption isn’t what’s putting midsized firms most at risk; rather, it’s hesitation, advised Blaine Carter, Global CIO at FranklinCovey, during his keynote session Monday at the MES IT Security 2026 Summit in Ponte Vedra Beach, Fla.

For enterprises, that reality is already showing up. Carter described how a competitor’s stock dropped sharply after executives admitted on an investor call that they were “not really doing anything” with AI. The message was clear: AI strategy is now a business credibility issue, not just a technology roadmap goal.

For midmarket organizations, the expectation is the same, but with fewer resources and a slimmer margin for error.

Focus Beats Scale When It Comes to AI ROI

One of the primary lessons from FranklinCovey’s experience is that AI value doesn’t come from broad, shallow experimentation. Letting everyone “play with AI” may build familiarity, but it rarely changes how work gets done.

Instead, Carter emphasized the importance of automation that is tied directly into business workflows. These are the initiatives that remove friction, reduce labor hours, and generate measurable return.

“What we found is that the broader and the more shallow your use cases are, the less likely they are to truly change your operational work. To get very deep and large ROI, your AI use cases need to be fairly focused and deep in automation and integration,” Carter said.

For midmarket IT leaders, this distinction matters. With limited budgets and lean teams, success depends on choosing AI projects that solve real problems rather than chasing hype or FOMO.

Midmarket Speed Is an Advantage—and a Liability

Midmarket organizations can move faster on AI adoption than large enterprises. Decisions happen quickly. Pilots launch faster. But that same speed increases risk when governance lags behind experimentation.

Carter warned that employees are already using AI—often outside sanctioned systems. If IT doesn’t provide secure, approved tools, workers will route around controls using personal devices, public AI platforms, or unsanctioned accounts. Blocking AI outright doesn’t stop usage; it just makes it more covert.

“We had someone who was taking their phone, would turn AI on, hold it up to their work laptop to read meeting notes, and then have ChatGPT transcribe those into agenda items offline,” he said.

The result is shadow AI—and shadow risk.

Culture Starts at the Top

Another barrier to adoption is cultural, not technical. Many professionals feel uneasy admitting they’ve used AI for work, even for simple tasks like drafting agendas or summarizing meetings. FranklinCovey addressed this by having senior leadership openly share how they use AI in their own roles.

In fact, Carter asked the audience for a show of hands: after “the first time you used AI to do something at work, whether it was creating a meeting agenda or transcribing notes or developing a slide deck for a presentation, who felt a little bit guilty admitting they used AI to do it?” Hands went up.

That sense of AI-use guilt, it turns out, is widespread.

“We actually did a survey of our customers, and they said that 75 to 80 percent of the people who first use AI to do some sort of work-related tasks, even if it wasn't a critical task for them, felt some sort of embarrassment about it, like they didn't want to admit to their colleagues,” he said.

The key to alleviating staff fears over their AI use is to have senior leadership lead by example.

“Our CEO as an example, will have a ... town hall and we'll get on there. One of the first things he does, he talks about how he's using AI on his role, and it's changed a little bit of the paradigm, because people are much more willing to say, well, if our CEO is using it and he expects me to use it, why would I feel embarrassed about using it? And so, we started actually highlighting use cases from individuals in the company using specific AI around how they're doing their job,” Carter said, which “changes the culture from the top down.”

Build an AI Lab, Not a Locked Room

To balance experimentation with control, FranklinCovey created an internal AI lab, allowing employees to submit ideas and test AI safely. Use cases are also evaluated by a steering committee based on business impact.

Projects that move forward are overseen with program management and accountability. The goal isn’t to explore every idea, but to identify the few that can improve how the company operates.

For the midmarket, this model offers a practical alternative to AI over‑regulation.

AI‑Driven Fraud Has Reached the Midmarket

Perhaps the most sobering takeaway came from a real‑world security incident. Carter described an AI‑powered impersonation attack targeting FranklinCovey’s Japan general manager. Attackers used voice cloning, WhatsApp messages, impersonated legal counsel, and urgent financial requests to attempt a $500,000 fraud.

This wasn’t a “spray‑and‑pray” phishing email, Carter said. It was a coordinated, multi‑channel attack powered by publicly available executive audio and open‑source intelligence.

And it worked. Almost.

The implication for midmarket leaders? Sophisticated AI attacks are no longer launched only against global enterprises.

Verification Is the New Perimeter

The attack ultimately failed because of process, not technology. A built‑in verification mindset—call‑backs, independent confirmation, and internal code words for financial actions—stopped the transaction before any money moved.

As AI lowers the cost and complexity of impersonation, Carter argued that verification must become muscle memory across finance, IT, and operations.

Where AI Delivers Real Midmarket Value

Despite the risks, AI can prove its worth when applied correctly. Carter pointed to automating security questionnaires—some with “between 200 and 1,000 questions”—as a high‑impact use case, noting that AI could answer most with “90 to 95 percent confidence” while flagging the rest for human review.
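Carter didn’t describe the mechanics, but the workflow he outlined (answer automatically above a confidence bar, escalate everything else) maps to a simple triage pattern. Here is a minimal sketch in Python; the `answer_question` stub, its confidence scores, and the 0.90 threshold are illustrative assumptions, not FranklinCovey’s actual system.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    question: str
    text: str
    confidence: float  # model-reported confidence, 0.0 to 1.0

def answer_question(question: str) -> Answer:
    # Hypothetical stand-in for a real AI call; a production version
    # would query a model against the company's security documentation.
    draft = "Drafted answer pending review."
    score = 0.95 if "encrypt" in question.lower() else 0.60  # fake scores
    return Answer(question, draft, score)

CONFIDENCE_THRESHOLD = 0.90  # roughly the "90 to 95 percent" bar Carter cited

def triage(questions: list[str]) -> tuple[list[Answer], list[Answer]]:
    """Auto-accept high-confidence answers; flag the rest for human review."""
    auto_answered, needs_review = [], []
    for q in questions:
        ans = answer_question(q)
        if ans.confidence >= CONFIDENCE_THRESHOLD:
            auto_answered.append(ans)
        else:
            needs_review.append(ans)
    return auto_answered, needs_review

if __name__ == "__main__":
    questionnaire = [
        "Do you encrypt customer data at rest?",
        "Describe your incident response escalation path.",
    ]
    done, flagged = triage(questionnaire)
    print(f"{len(done)} auto-answered, {len(flagged)} flagged for human review")
```

The value of the pattern lies in the second list: the point isn’t that AI answers everything, but that humans only touch the questions the model is unsure about.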

Carter closed his session by stressing that while there are great use cases for AI, it also introduces new risks.

Combining “human wisdom” with AI is “where you get the sweet spot of really driving business forward,” Carter said.