Token Security Develops Free Tool For Organizations To Assess AI Agent Risk
MES Computing spoke with the co-founder and CEO of Token Security about its new and free tool, AI Privilege Guardian.
(Token Security co-founder and CEO Itamar Apelblat)
Agentic AI security and risk assessment are poised to become significant focuses for midmarket IT leaders in 2026, particularly at organizations that have not yet implemented strong AI security guardrails.
Thirty-three percent of enterprise software applications will have integrated agentic AI by 2028 and at least 15 percent of daily work decisions will be made using agentic AI by then, Gartner predicted in its report on agentic AI earlier this year.
[RELATED: 5 Predictions About Agentic AI From Gartner]
However, wide adoption of agentic AI will also mean taking a close look at the security risks AI agents can bring into an infrastructure.
“As the agenticness of AI systems increases, hard-coded restrictions may cease to be as effective, especially if a given AI system was not trained to follow these restrictions and thus may seek to achieve its goals by having the disallowed actions occur,” OpenAI cautioned in a 2023 whitepaper.
“If we are allowing AI in these agentic AI workflows to kind of automate the decision-making process, there could be exploits at that point,” Protect AI CEO Ian Swanson told MES Computing in a recent interview.
There are tools emerging on the market to assist organizations, particularly smaller and midsized ones, in assessing their AI agent security.
MES Computing spoke with the co-founder and CEO of Token Security, Itamar Apelblat, about AI Privilege Guardian, a new free tool the company created to help organizations assess agentic AI security and risk.
Talk a bit about the background of why your company developed this tool.
We saw there was a new class of identities. Non-human identities, service accounts, and machine accounts. Enterprises are adopting more and more AI agents and those agents have their own identity. But we saw a challenge—many security and enterprise teams are not fully adopting those agents because of the fear of the access and permissions that those agents will have.
What are some of the risks these agents pose?
Imagine now that I have a cloud cost optimization agent and it starts to delete some files. [Agents] can take actions that are goal-oriented and sometimes they can do wrong things.
There’s a big fear right now in the security community that those agents will just do wrong, or even an attacker will use those agents in order to get to important resources.
[RELATED: Ready.Set.Midmarket! Podcast: Security and Best Practices For Implementing Agentic AI]
How does AI Privilege Guardian work?
We created this interactive tool – if I want to create an agent, I will define the purpose and the goal of this agent, and the tool will generate permissions according to the goals and according to the purpose of this agent.
Until now, when we thought about permissions we thought about the past as a way to understand what will happen in the future. So, we look at a machine account and say, “This is the action it did in the past year, so that is the action it will do in the next year.”
But now it’s a bit different with AI agents. You have those non-deterministic, goal-oriented services. So, [permissions] have to be defined by what they are actually meant to do, and that's what we did.
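The goal-to-permissions idea Apelblat describes can be sketched in a few lines. This is an illustrative sketch only, not Token Security's actual implementation; the goal names and permission strings below are hypothetical:

```python
# Illustrative sketch, assuming a simple goal -> permissions mapping.
# The idea: derive an agent's permissions from its declared goal,
# rather than inferring them from historical activity.

# Hypothetical goals and permission strings (not Token Security's schema).
GOAL_PERMISSIONS = {
    "cloud-cost-optimization": {"billing:read", "metrics:read", "instances:resize"},
    "log-triage": {"logs:read", "tickets:create"},
}

def permissions_for(goal: str) -> set[str]:
    """Return the least-privilege permission set for a declared goal.

    Unknown goals get no permissions (deny by default), mirroring the
    'security by design' approach described in the interview.
    """
    return set(GOAL_PERMISSIONS.get(goal, set()))

def is_allowed(goal: str, action: str) -> bool:
    # An action outside the goal-derived scope (e.g. deleting files when
    # the goal is cost optimization) is rejected up front.
    return action in permissions_for(goal)

print(is_allowed("cloud-cost-optimization", "instances:resize"))  # True
print(is_allowed("cloud-cost-optimization", "files:delete"))      # False
```

The deny-by-default return for unknown goals is the key design choice: the cost-optimization agent in Apelblat's earlier example could never have deleted files, because file deletion was never in its goal-derived scope.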
We’ve spoken with quite a few companies and startups that have developed their own products for securing agentic AI. What’s different about your tool?
From a larger perspective, what we support better than our competitors is the scope of where those AI agents are running, and the other thing is [providing] another way to take action on those agents, so remediation is one.
But the second is a bit out of the box, because usually we, and also our competitors, are very focused on security teams, which tend to be very reactive.
But what we thought is: how can we give the enterprise a tool that will help them, from the get-go, have much more secure permissions?
Think about it as security by design: when I am creating an agent, I will work with this tool in order to assign much more restrictive permissions based on the goals of the agent.
Is AI Privilege Guardian something to deploy before you create AI agents or can you use it with agents already in your infrastructure?
It could work for both sides. Basically, you give it an agent and it will tell you what permissions and access it needs. It could be when I’m designing and thinking about this agent, and it could be after I created it.
We created this free tool for the entire market because I think everyone should at least understand what questions they need to ask and what they need to assess when they’re building AI agents. In front of us is probably one of the greatest technological transformations in our lifetime.