This Company Wants To Make AI Safer For The Workplace

Rick Caccia is the CEO and co-founder of WitnessAI.

There’s plenty of evidence that AI use is increasing in the workplace. A February 2025 report from the Federal Reserve found that 20 to 40 percent of workers are using AI. Another survey from Gallup revealed that AI use at work has doubled in just the past two years.

While IT leaders and company executives are encouraging the use of AI to cut costs and transform the business, CIOs, CISOs, and other IT executives are concerned about unmonitored AI use in their organizations and the potential risk it poses to the business.

That’s according to Rick Caccia, the CEO and co-founder of WitnessAI.

WitnessAI is a San Mateo, California-based cybersecurity company that provides a solution to secure the use of AI in the workplace.

Though the company is a newcomer to the security industry, Caccia has an impressive pedigree. He most recently served as a senior vice president at Palo Alto Networks, and prior to that held positions at Google Cloud, Exabeam, and other tech companies.

MES Computing spoke with Caccia about how WitnessAI can help IT leaders rein in AI usage in their organizations and strike the balance between letting staff use AI to boost productivity and keeping the organization safe.

Talk a bit about the need you saw for a solution like WitnessAI.

When ChatGPT hit the scene in 2022, I was working at a large company, and we went through what every other company went through. The CEO gets up in front of the company and says, ‘[AI] is going to be amazing. Go and become an AI-enabled marketer, developer, whatever.’ And everyone ran off and did.

And then two weeks later, the general counsel got up and went, ‘Oh my god, the amount of information you have all leaked to this [AI] app is unreal. You’ve leaked our source code and our financial plans and our unreleased earnings.’

Now we’re trying to figure out what to do. How do we give our employees access so they can be super productive, but do it in a way where we don’t leak customer data and we don’t leak our source code?

How does WitnessAI address that issue?

The software platform does three things:

First, you connect it to your network, and it shows you all the external AI apps your employees are using. Again, there are literally thousands of these things. It’s not just ChatGPT: Grammarly has AI embedded in it. Salesforce has AI embedded in it. Everyone’s using Microsoft Copilot. So the first thing it does is tell you, here are the AI apps and bots your employees are using.
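Caccia doesn’t detail how that discovery works under the hood. As a rough illustration only, assuming the platform watches network egress and matches traffic against a catalog of known AI service domains, the idea might be sketched like this (the domain list, log format, and function names are hypothetical, not WitnessAI’s actual implementation):

```python
from collections import Counter

# Hypothetical catalog of domains belonging to known AI apps and AI-embedded SaaS.
KNOWN_AI_DOMAINS = {
    "chat.openai.com": "ChatGPT",
    "api.openai.com": "OpenAI API",
    "grammarly.com": "Grammarly",
    "copilot.microsoft.com": "Microsoft Copilot",
}

def discover_ai_apps(proxy_log_entries):
    """Tally which known AI apps show up in outbound traffic."""
    usage = Counter()
    for entry in proxy_log_entries:
        host = entry["host"].lower()
        for domain, app in KNOWN_AI_DOMAINS.items():
            if host == domain or host.endswith("." + domain):
                usage[app] += 1
    return usage

# Example: three requests, two of which hit known AI services.
logs = [
    {"host": "chat.openai.com", "user": "alice"},
    {"host": "intranet.example.com", "user": "bob"},
    {"host": "www.grammarly.com", "user": "carol"},
]
print(discover_ai_apps(logs))  # Counter({'ChatGPT': 1, 'Grammarly': 1})
```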

(WitnessAI screenshot)

Then it collects the actual conversation, so it can say, ‘Here’s what they’re doing.’ The point of that is to look for risky stuff. We saw one company, a payment card company, where one of their customer support employees had taken an email to another company containing 100 live payment card numbers, posted it into ChatGPT, and said, ‘Help me with the grammar of this email, and then send it to the customer,’ not thinking that all those live credit card numbers were now leaked.

The second thing it does is let you apply intention-based policy. You could say, ‘Well, my legal department is allowed to use third-party AI to write contracts, but my marketers aren’t. This group of developers can use Cursor to write code, but this other group can’t.’

Then we have a bunch of data and user protection pieces, so we can redact sensitive information, like customer data being leaked. It helps you implement some policy-based controls, and it helps you protect your data and your people.
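The interview doesn’t describe the policy engine’s internals, but as a minimal sketch, an intention-based rule set plus redaction of sensitive values could look something like the following (the group names, rule format, and regexes are illustrative assumptions, not WitnessAI’s configuration):

```python
import re

# Hypothetical rules: which groups may perform which AI activities with which apps.
POLICY_RULES = [
    {"group": "legal", "app": "ChatGPT", "intention": "draft_contract", "allow": True},
    {"group": "marketing", "app": "ChatGPT", "intention": "draft_contract", "allow": False},
    {"group": "platform-devs", "app": "Cursor", "intention": "write_code", "allow": True},
]

# Simple patterns for data that should never leave the company in a prompt.
SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")
CARD_RE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def is_allowed(group, app, intention):
    """Return the matching rule's verdict, defaulting to deny when none matches."""
    for rule in POLICY_RULES:
        if (rule["group"], rule["app"], rule["intention"]) == (group, app, intention):
            return rule["allow"]
    return False

def redact(prompt):
    """Mask sensitive values instead of blocking the whole prompt."""
    prompt = SSN_RE.sub("[REDACTED-SSN]", prompt)
    return CARD_RE.sub("[REDACTED-CARD]", prompt)

print(is_allowed("legal", "ChatGPT", "draft_contract"))      # True
print(is_allowed("marketing", "ChatGPT", "draft_contract"))  # False
print(redact("Customer SSN 123-45-6789, card 4111 1111 1111 1111"))
```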

People don’t spend reactive security dollars. What they spend money on is compliance and governance, and with AI, that’s where we are today. Companies aren’t getting their AI attacked, because a lot of companies, most companies, haven’t stood up any AI apps. The state of the market isn’t ‘protect me from crazy AI ransomware.’ It’s ‘what policy should I have so my employees can be productive?’

How does the software flag an AI usage threat?

You can be in ChatGPT in your browser, and we can send warning messages back into ChatGPT. So it could say, ‘Sorry, that violates policy. Don’t put customer data in this.’ Or, alternatively, you could redact, so you can say, ‘Look, I let this go through, but I redacted the customer’s Social Security number.’
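How the platform actually pushes those messages into the browser isn’t covered in the interview. Assuming a proxy-style intercept that either blocks a prompt with a warning or forwards a redacted version, the decision flow might be sketched like this (the function name, response shape, and message text are hypothetical):

```python
import re

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def intercept(prompt, policy_allows):
    """Return either a block-with-warning decision or a forward decision,
    redacting sensitive values before the prompt is allowed through."""
    if not policy_allows:
        return {"action": "block",
                "message": "Sorry, that violates policy. Don't put customer data in this."}
    redacted = SSN_RE.sub("[REDACTED-SSN]", prompt)
    if redacted != prompt:
        return {"action": "forward", "prompt": redacted,
                "message": "I let this go through, but I redacted the customer's Social Security number."}
    return {"action": "forward", "prompt": prompt, "message": None}

print(intercept("Fix the grammar: the customer's SSN is 123-45-6789", policy_allows=True))
```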

What are some other features?

[Customers] have reporting visibility and can do analytics. You can start to say things like, here are the top questions people in your company are asking [AI], here are the risky questions they’re asking, here are the groups using it the most, here are the groups using it the least, and here are the groups using it in the riskiest way. Here are the apps they’re using.

What we find is people have tried to do this with their existing security products, and those things don’t understand where the AI is, so they don’t classify it correctly.