Shadow AI Is Already In Your Organization—Whether IT Approves Or Not

Blocking AI tools doesn’t work in the midmarket. From meeting note‑takers to Copilot rollouts, West McDonald breaks down why governance, not prohibition, is the only sustainable AI security strategy.

(West McDonald, founder, GoWest.ai, on stage at the MES IT Security 2026 Summit)

To Midmarket CISOs: Workers in your organization are using AI, whether you like it or not. And that inevitably introduces new risks in a security landscape already littered with threats.

However, draconian methods to quell AI use aren’t the right response, according to West McDonald, founder of GoWest.ai and a recognized authority on shadow AI, data privacy and AI governance.

During his Tuesday morning keynote presentation at the MES IT Security 2026 Summit, McDonald stressed that blocking AI apps is a near-impossible mission due to the sheer number of apps available and that over-regulating the use of AI is not an ideal solution.

Staff should be curious about AI, not fearful of it, he said. But allowing workers to experiment with AI (within reason) should also make IT teams savvier and more diligent about the security implications of that experimentation.

McDonald offered several takeaways on enabling an organization to transform with AI while keeping the business safe.

Shadow AI is bigger than ‘unsanctioned’ tools

Shadow AI isn’t just a worker downloading an app or using an LLM not approved by IT. It also includes unsafe use of approved AI tools, such as pasting sensitive data into an IT-sanctioned app.

The onus is on IT to curtail unapproved AI use, and one of the best ways to tackle that is by having solid auditing.

“Most organizations do not have good audit trails for how AI is being used, when it was used, what it was doing,” McDonald said.
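McDonald's point about audit trails can be illustrated with a minimal sketch. The function and field names below are hypothetical, not a real product's schema; the idea is simply to capture who used AI, which tool, what it was doing and when, in an append-only log.

```python
import json
import datetime

def log_ai_event(user, tool, action, log_path="ai_audit.jsonl"):
    """Append one AI-usage event to a JSON Lines audit log.

    Records the three things McDonald says most organizations miss:
    how AI is being used, when it was used, and what it was doing.
    """
    event = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "action": action,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(event) + "\n")
    return event

# Example: record a summarization request made through an approved tool
log_ai_event("jdoe", "copilot", "summarize: Q3 sales notes")
```

In practice this telemetry would come from a SIEM, proxy or the AI platform's own admin logs, but even a simple structured log like this answers the "who, what, when" questions most shops can't today.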

Blocking AI doesn’t work—people route around it

‘No AI’ policies and simple blocking fail in practice. Employees will keep using AI via phones, personal laptops, at home and through browser-based tools.

“They will find a way,” McDonald said about end users determined to use AI. Even if “AI may not be used in the office at all … it’s still being used. So [blocking AI] doesn’t really work,” he added.

AI meeting note-takers create ‘unvetted workflow’ risk

Transcribing meetings is probably one of the most common uses of AI in the workplace. Auto-joining AI meeting note-takers are also a frequent source of a common workplace faux pas: sharing sensitive meeting transcripts and notes across an organization with people who have no need to see that information.

McDonald was very clear on how to handle AI meeting assistants: “Go back to your office and turn off the auto-send for those meeting assistants in those rooms. Turn it off,” he said, relaying an incident in which a private meeting about an individual’s medical status was inadvertently leaked by a meeting assistant.

AI doesn’t create permission holes—it exposes them instantly

Tools like Copilot can surface data-access mistakes at scale. Clean up permissions like those associated with SharePoint before broad AI rollouts, McDonald advised.

Policies and guidelines must come before major AI rollouts

AI governance isn’t a “later” task. If you don’t have AI policies/guidelines, you shouldn’t roll out Copilot broadly or integrate it into enterprise content stores, period, according to McDonald.

Use a two-gate model: business value first, then security

Create a clear approval path: (1) business case hurdle, (2) security review to ensure the solution meets minimum standards.
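The two-gate model can be expressed as a simple decision function. The request fields and security checks below are illustrative assumptions, not a prescribed schema; the structure just mirrors McDonald's ordering: business case first, security review second.

```python
def approve_ai_tool(request):
    """Two-gate approval path for a proposed AI tool.

    Gate 1: is there a concrete business case?
    Gate 2: does the tool clear minimum security standards?
    `request` is a dict; field names here are illustrative only.
    """
    # Gate 1: business value hurdle
    if not request.get("business_case"):
        return "rejected: no business case"
    # Gate 2: minimum security standards (example checks)
    checks = ["data_residency_ok", "sso_supported", "audit_logging"]
    failed = [c for c in checks if not request.get(c)]
    if failed:
        return "rejected: security review failed (" + ", ".join(failed) + ")"
    return "approved"

print(approve_ai_tool({
    "business_case": "Cut meeting-summary time by 50%",
    "data_residency_ok": True,
    "sso_supported": True,
    "audit_logging": True,
}))  # approved
```

Sequencing the gates this way keeps the security team from burning review cycles on tools nobody can justify in the first place.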

Treat AI output as a draft—require human review for external content

While AI can help draft emails, contracts and customer-facing content, “AI output is always just a draft,” McDonald said. AI output must be reviewed and approved by a human before presenting it to customers, clients and/or the public.

Training should be ongoing mentorship, not one-and-done

Frequent, practical training builds safer habits and unearths emerging risks earlier than yearly training and compliance modules.

Instead of adopting the mindset of a trainer, adopt the mindset of a mentor.

“Training is the thing we do on a weekend and go back to the office, forget on Monday. Mentorship is the thing that we do that helps hone our skills,” he said.

Vendor and app sprawl is accelerating—vet and standardize

The AI tool ecosystem is growing explosively, with confusing lookalike apps and cheap aggregators. Midmarket IT should standardize approved tools and vet exceptions.

"What’s the first thing that comes up when you type in [a search bar] ‘chat GPT?’ Is it chat GPT? No, it’s a sponsored application,” McDonald said that typically aggregates multiple LLMs and does who knows what with our data—creating potential risk and an example of ever-increasing AI app sprawl.

Agentic AI expands the threat landscape—not market-ready for most organizations

McDonald referred to agentic AI as “a whole other security landscape.” He said it’s not ready for prime time yet for most midmarket firms. Instead, for the time being, treat agentic AI as a controlled learning area and not as production-ready automation.

Finally, good AI policy should enable curiosity, not fear

Well-designed policies clarify what’s allowed and encourage safe experimentation. “The very first core value we have is curiosity,” he said. “We cannot kill it.”

Instead, policies and guidelines create a framework for curiosity. Fear-based monitoring narratives backfire.