Netskope Launches New Security Controls Over Protocol Used By AI

The new capabilities are within the Netskope One platform.

Netskope, a provider of secure access service edge (SASE) solutions, announced on Monday that it has added new capabilities to its Netskope One platform, which bundles SASE, networking, and zero-trust security.

The announcement centers on new security features for Model Context Protocol (MCP) communications. MCP is an open standard that AI applications use to connect to external tools and data sources.
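MCP messages travel as JSON-RPC 2.0 requests and responses. The snippet below is a minimal sketch, in Python, of what a tool-invocation message on the wire looks like; the `read_file` tool name and its arguments are purely illustrative. This is the kind of traffic that visibility controls would need to parse and inspect.

```python
import json

# MCP tool calls are JSON-RPC 2.0 requests: a client asks a server to
# invoke a named tool via the "tools/call" method. The tool name
# "read_file" here is a hypothetical example, not a real Netskope API.
tool_call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "read_file",
        "arguments": {"path": "/etc/hosts"},
    },
}

# Serialize to the JSON text that would actually cross the wire.
wire_message = json.dumps(tool_call)
print(wire_message)
```

Because every tool invocation is a structured JSON-RPC message like this, a security gateway sitting in the path can read the method and tool name without needing to understand the AI model itself.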

With the new features, Netskope One can now “protect MCP-enabled AI interactions by providing full visibility into MCP tool use, enforcing least-privilege access, securing sensitive data, and ensuring compliance,” the company said in a news release.
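Netskope has not published implementation details, but least-privilege enforcement over MCP traffic can be pictured as a policy gate that allowlists which tools each identity may invoke. The sketch below is a hypothetical illustration — the policy table, identity names, and tool names are all invented for this example, not Netskope's actual mechanism.

```python
# Hypothetical least-privilege gate for MCP "tools/call" requests:
# each identity is allowlisted for specific tools; everything else is denied.
POLICY = {
    "finance-agent": {"query_ledger"},
    "support-agent": {"search_tickets", "read_kb"},
}

def is_allowed(identity: str, request: dict) -> bool:
    """Permit a tools/call request only if the identity is allowlisted for that tool."""
    if request.get("method") != "tools/call":
        return False  # this gate only governs tool invocations
    tool = request.get("params", {}).get("name")
    return tool in POLICY.get(identity, set())

# A support agent reading the knowledge base is permitted...
print(is_allowed("support-agent", {"method": "tools/call", "params": {"name": "read_kb"}}))
# ...but a finance agent calling the same tool is denied.
print(is_allowed("finance-agent", {"method": "tools/call", "params": {"name": "read_kb"}}))
```

The design choice here is default-deny: a tool call passes only if it appears on that identity's allowlist, which is the essence of least-privilege access as described in the release.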


“Every team wants to confidently accelerate AI adoption, and emerging protocols such as MCP are now fundamental to that discussion,” said John Martin, chief product officer, Netskope, in a news release. “MCP also creates new security risks that legacy tools can’t solve. That’s why we’re further extending the market-leading capabilities of Netskope One to enable teams to see and create policies for MCP traffic and immediately assess how risky MCP tools are. This is critical to the secure use of AI as organizations develop agents to drive business productivity.”

The news comes amid increasing security concerns over AI in the enterprise.

Earlier this year, MES Computing spoke with Rick Caccia, CEO and co-founder of WitnessAI, about security issues and AI.

“When ChatGPT hit the scene in 2022, I was working at a large company, and we went through what every other company went through. The CEO gets up in front of the company and says, ‘[AI] is going to be amazing. Go and become an AI-enabled marketer, developer, whatever. And everyone ran away and did.

[RELATED: This Company Wants To Make AI Safer For The Workplace]

And then two weeks later, the general counsel got up and went, ‘Oh my god, the amount of information you have all leaked to this [AI] app is unreal. You’ve leaked our source code and our financial plans and our unreduced earnings,’” Caccia said.

Protect AI CEO Ian Swanson also spoke earlier this year in an interview about AI-related security issues.

“If we are allowing AI in these agentic AI workflows to kind of automate the decision-making process, there could be exploits at that point,” Swanson said.

“I’ll give a risk example, and perhaps a mitigation. Risk is that an agent makes an uncontrolled or unexpected decision that might lead to a security failure. Example, that could be an AI agent is carrying out automated incident response tasks and it incorrectly shuts down a critical production server, and it causes downtime, so the AI thought something wrong was happening, but it made an unexpected decision, and maybe it shut down something that was super critical,” he added.

Even the AI companies are issuing warnings about AI’s overall impact. In September, Anthropic publicly endorsed a proposed bill in California that would regulate frontier AI systems.

The bill, SB 53, would have “an important impact on frontier AI safety,” Anthropic said in a blog post.

“With SB 53, developers can compete while ensuring they remain transparent about AI capabilities that pose risks to public safety, creating a level playing field where disclosure is mandatory, not optional,” Anthropic also said in its post.

Last year, current and former employees at the top AI companies signed an open letter calling for more transparency about AI’s risks, MES Computing’s sister site Computing reported.

A recent report found that 95 percent of enterprises surveyed had been hit with an AI-related incident, including privacy violations and security breaches.

By allowing admins to secure MCP communications, the new capabilities “introduce new control points for data governance and privacy that are crucial for scaling AI safely within the enterprise,” Netskope said in a blog post.