Ready, Set, Midmarket! Podcast: Security and Best Practices For Implementing Agentic AI

In this episode of “Ready, Set, Midmarket!” the discussion revolves around agentic AI, its implications for midmarket organizations, and the associated security challenges. Daryan “D” Dehghanpisheh, president and co-founder of Protect AI, shares insights on the rapid adoption of AI agents, the unique risks they pose, and the importance of integrating security measures as organizations navigate this evolving landscape. The conversation emphasizes the need for midmarket companies to adapt quickly, understand the implications of AI on labor, and develop robust security frameworks to manage the risks associated with AI deployment.
The full episode can be watched on YouTube or heard on Spotify and Apple Podcasts.
Previous RSM! Episodes:
- The Midmarket’s Tech Road Ahead In 2025
- Leading The Girl Scouts’ Technology
- Tackling Tech Debt With Ken Knapton
https://player.simplecast.com/b3a12df1-c374-4e65-953e-fce14aaa4b61?dark=fal
TRANSCRIPT:
Adam Dennison: Hello and welcome to another episode of Ready, Set, Midmarket!, MES Computing’s podcast for all things midmarket IT. I’m Adam Dennison, vice president of MES, and joining me today is my co-host, Samara Lynn. She’s our senior editor with MES Computing.
Samara Lynn: Hello, everybody.
Adam Dennison: And we have Daryan “D” Dehghanpisheh. He is the president and co-founder of Protect AI, and he will be our agentic AI expert on the podcast today. Welcome, D.
Daryan Dehghanpisheh: Hey, thank you, Adam. It’s great. An expert in this field has only been around for, what, a couple of weeks, maybe? I’m kidding. So excited to talk.
Adam Dennison: Thank you, D. So one of the questions I have around agentic AI: I read a research report recently that said 93 percent of IT leaders will be deploying AI agents within their organizations in the next two years. Obviously, with Ready, Set, Midmarket!, Samara and I cater to the midmarket. So my question for you is around where you see the midmarket playing in that 93 percent. Is it going to be fast-charging and led by the enterprise, with the midmarket following? How do you see them playing in that space and where they would fall?
Daryan Dehghanpisheh: Yeah, I think it’s about two things, right? I think it’s about dispersion, not adoption: how fast does it get into other areas of the operations? Because adoption, I can tell you right now, is happening in big companies. The largest banks in the world are our customers, intelligence community components are our customers, the DOD. We have finance, energy, manufacturing, tech, and there’s not one outlier over the others in terms of their adoption of agents.
There are outliers in terms of them putting agents into production workflows. And the reason I’m going to pause there for a moment is to compare and contrast your question, which is really about the penetration and dispersion of agents in the midmarket, versus the recent historical context of GenAI adoption. If you were to ask that question a year ago, what would the answer have been? I don’t know, but I bet it would have been undercalled.
And it would have been undercalled because were they including things like copilots for developers? Were they including things like email editors, or even the ability to have Apple Intelligence on the phone rewrite an email? Were they including those kinds of copilot mechanisms in that GenAI adoption? Tough to say, right? But if you think about agents and how they’re going to be dispersed and utilized in the midmarket, will it go faster or slower, in my opinion? I think the diffusion and dispersion of agents into the workforce is actually going to be faster. The reason I think that is, number one, small companies are generally going to be a little more nimble and able to adapt. Number two, I think they understand that one of their biggest input costs is labor.
And in an economic backdrop where we have so much confusion and unpredictability about hard-goods pricing, think about the labor that attaches to that hard-goods pricing. Where can you use agents to get greater productivity, or elimination of things, frankly? I mean, there is going to be a paradigm shift where I think one of the terms you’re going to be hearing a lot in the workforce is not agents, but digital labor.
But that’s essentially what an agent is doing. It is doing the knowledge labor of other types of functions and jobs. And I think that the paradigm shift then implies naturally that if you are good at your job, agents in the form of digital labor are going to make you great at your job. If you are bad at your job, it’s going to replace your job.
And it’s going to do that through a variety of mechanisms. But fundamentally, that’s what I think is the difference between, say, an agent and what we’ve been calling RPA. Right? Think about how many midmarket companies have been using RPA. What’s the difference? The difference is autonomy and learning. Agents are like water around a rock: they’ll find a way to get something done, just like a human will. But what happens when an RPA runs into a block? It throws an error and it halts.
That’s the difference.
Adam Dennison: Got it. So I want to transition into some of the security questions. I want to get Samara in here as well, but my question around security is this: again, it’s early days, but in your opinion, are the good guys ahead of the bad guys right now, or are they kind of neck and neck, or what do you think ... ?
Daryan Dehghanpisheh: I think it’s always neck and neck. When you’re talking about digital risks and security risks, it’s always neck and neck in my mind. There’s a great maxim about hedonic treadmills, right? And I think in the security space of AI, it’s no different from any other element. What I do think is different is the vulnerabilities, the exploits, and the vulnerability-to-exploit curve that is happening.
Because the assets are new, the assets are unique. Four years ago, nobody was talking about their model provider, right? They were talking about maybe their cloud provider, their infrastructure provider. But what that implies is that developer abstraction has gone up. People are writing to those models now. And as a result, you are taking in those models either first party, where you develop them yourself; third party, where you source them from somebody else; or third-party commercial, where in many cases people say, “I’m just going to assume the responsibility for all these things is going to be handled by the provider.” That’s a fool’s errand. We have a philosophy and several tenets that we think of in terms of the differences around security of agents. We adhere to those tenets pretty carefully in terms of how we think about the security of agents: the building of those agents, the deployment of those agents, the usage of those agents, and the reporting and monitoring of those agents. It all starts with security of the model that allows that class of software to become first AI-powered and second agentic.
Adam Dennison: Samara, how about you jump in.
Samara Lynn: Yeah, you made a good point about agents either accentuating or replacing human labor. I recently spoke with Ian Swanson, your co-founder and CEO. Great interview. And he gave an example of agentic AI risk. He said risk could be an agent making uncontrolled or unexpected decisions that might lead to a security failure.
So, I think it’s important for IT leaders to know: is that risk ramped up by introducing agents to the infrastructure, or is it reduced by potentially replacing error-prone humans?
Daryan Dehghanpisheh: I think it’s both, right? And the way we look at that is, well, I keep using the term digital labor as a context, right? As a comparison construct. Think about what we do when we make a hiring decision about an employee. We do some background checks, we have an interview, maybe do some functional fit tests, and then they come into the environment. We give them some permissions. We maybe put them on an observability timeline. We kind of watch their performance. We see how they do. And if you see something funny, you know that old maxim: “If you see something, say something.” You can’t actually predict with certainty what the agent is going to do each time. But if you think about how we manage humans, you’re just doing this at a volume and scale that is beyond anything you could possibly conceive, right?
So, the security risk isn’t always the classic remote code execution or LFI/RFI type of construct that some security operators are thinking of. It’s actually about an agent that might be making decisions and/or taking actions that you might be OK with, but you haven’t had time to consider whether you’re OK with them, because of the speed at which it’s going to happen, right?
So, it’s a different type of security risk and security paradigm. Every time somebody clicks on a fake wire instruction, a phishing attack dressed up as a message from the CFO, that person didn’t intend to do that. Similarly, there might be somebody in the back room who’s committing wire fraud intentionally. Those are two different types of security risks, but they’re both security risks, right?
One is making an intentionally bad action. The other is a consequence of making a mistake. I think that what we need are security tools that blend how we think about contextual labor, like human labor, and put that into this digital equivalent of digital labor powered by agents. You’re going to need a whole bunch of things. When we think about security of software today, what’s one of the number one things we think of? Runtime security.
The problem with agents is that runtime security is not enough. So that’s a big tenet in terms of how we’re building our product, the research that we do, and how we think about it. It goes beyond posture awareness. It’s not just about configuration mismatches or identity or permission rights and tool and data access. All of that has to be analyzed very, very quickly, surfaced very, very quickly, and traced very, very quickly.

The second thing is the speed and scale at which agents can operate. They can operate much, much faster than humans, right? And that amplifies all risk, not just AI risk; it amplifies the risk of anything that agent is touching, upstream or downstream, all the actions it’s taking. And as you give an agent increased autonomy, whether a little or a lot, your blast radius grows and the likelihood of either a malicious attack or a consequential mistake increases exponentially. So, you have to have automation and application capabilities that are essential to not only detecting the risk, but alerting on and/or blocking the risk, right?

The third is that context is critical. Go back to that wire room I mentioned: is the context that someone accidentally clicked on something because they thought the CFO gave them that instruction?
That’s a different mistake than somebody in the back room committing wire fraud. So, you have to understand full context of what that agent is doing.
Fourth, you need closed-loop feedback. What’s the equivalent in the agent universe of an HR manager? Right? You’re going to have to think about that. And then last but not least, autonomy changes the paradigm in terms of how we think about software agents, management, and labor. I’ve been saying on Capitol Hill and elsewhere that every worker becomes a manager; they’re just not managing people, they’re managing agents. So how do we help everybody who is a knowledge worker figure out how to better manage their agents? That’s kind of the big thing we need to think about.
Samara Lynn: Are these agents mostly bundled with software platforms or is it something that a midsize business could download from an app store?
Daryan Dehghanpisheh: There are agent frameworks that are used. One of them is a company that we do a lot of partnership with. We share an investor with CrewAI. It’s an open-source framework that allows you to build agents from third-party models like from Hugging Face and others and really do some cool things. That thing has taken off. We’re really great partners with them. Strongly encourage your audience to check it out. If you really want to talk about agent development, those guys are fantastic. I can’t recommend them enough.
The second way to think about it is, okay, there’s companies who are building agents themselves internally. And the question becomes, what is the model that’s powering that agent? Is it something that they’ve built themselves or is it something that they’ve gotten from the open-source community? Are they paying a commercial model provider like an Anthropic or an OpenAI? And then last but not least, there’s kind of like agents built into your classic software solutions and SaaS applications that you have in the form of like, Salesforce Agentforce, which is a weird name. They’re an investor in us and I love them, but that’s a weird branding construct and other things that you’re going to see, right? So it’s really a question of like, where does that agent get activated and how is it developed? Those are the kind of the two end points.
Samara Lynn: Thank you.
Adam Dennison: I have a question; I want to talk about the digital labor aspect, the people aspect of midmarket security teams. And I use that word teams lightly, because in some of our midmarket ... the team might be one person ... And in our latest state-of-the-midmarket research that we do with Gartner, I think we were in the neighborhood of upwards of two-thirds or 70 percent of midmarket organizations that do not have a CISO or VP of security. So, my question is this: we’ve been talking for years about the talent shortage and the skills gap within IT. But when we look directly at a security team at a midmarket organization, how do you think this affects them being prepared for agentic AI from a threat perspective? What’s that ramp time going to look like for them? How can a midmarket organization secure itself if it doesn’t have the proper teams and skill sets in place?
Daryan Dehghanpisheh: Number one, I agree that the scale and velocity of risk is going to go up. The only thing you can do to try to mitigate that is to throttle the number of agents you’re going to deploy inside of your enterprise applications. You’re never going to be able to stop people from using their personal devices to try to make their lives and jobs easier, right? So, I think it’s hilarious when everybody talks about shadow AI prevention.
It’s like, cool. I remember when every business in 2007 tried to block Facebook. It didn’t work out too well. So this is the same situation. Everybody who’s building shadow AI detection today, good luck. I think it’s an intractable problem, right? But to pull that forward, it’s about the enterprise applications that you’re building and selecting. You need to be very, very methodical about how you do that. So, you need the PMOs of your group and other elements to really come to the table to think about what could happen. Again, think about it this way: what would you say to your HR manager if you said, we’re bringing in this human who has a photographic, audiographic memory, speaks eloquently, remembers everything, sees everything, learns everything, and is just an amazing human, but what happens if they leave?
What happens if they walk out of the building? They’re smarter than our lawyers. They’re smarter than this. They’re smarter than everything we have. But man, they could really grow this business. You would take your time to weigh those risks of hiring that employee. You might still do it, but you might go like, you are way overqualified. So, the reality is you have to have a thoughtful process that brings more people to the table than just the tech teams, than just the business.
You’re going to need to bring in legal. You might need to bring in brand. You might need to bring in marketing. You have to have a different level of conversation around what that agent is going to do. How are you going to gate it and control it? Who is responsible when it goes wrong? Is it the vendor? Is it the development team? That’s the only way you can prevent the wrong things from coming in. Now let’s talk about the things you might want to be doing inside of your own operations. You’re going to need new tools. You’re going to need new capabilities. That is why startups and mega-tech are working so fast to do this. Because this is not going to stop. We know that. And everybody’s going to get pressure to adopt it. We know that. And if we don’t take the proper steps to guardrail the internal development, the internal usage, and the monitoring elements, we’re going to have problems.
But name a company right now that is thinking about how to do that. I mean, we are, right? So are others. We’re not alone. And innovation has to solve that from the opposite end. And this is something I like to say: there’s hundreds of billions of dollars that have gone into the investment of AI, right? Just look at the recent headline of OpenAI, that monstrous raise, the $300 billion valuation, right?
A fraction of that investment goes into the security of AI, and we’ve gotten the lion’s share of that investment in the security of AI. But that’s a gap. When you have one innovation and investment path that’s going like this, and another that’s kind of going like this, that gap creates problems. And those are the things we’re going to have to come to terms with. There’s good news, though, for the individuals who act as entire departments in the mid and small markets: help is on the way, and tools are available, but it involves three things. One, go slow, don’t hurt yourself. Two, bring more people to the table. Three, do not think that the existing vendors have your answers. They won’t; they don’t. You want proof? Everybody who’s been hit with an AI attack, you’re going to tell me they didn’t have those big vendors as part of their security stack?
They 100 percent did. So how did it get missed? Because they don’t understand AI. Those large security vendors still don’t understand AI. What makes you think they’re going to understand agentic AI? They won’t.
Samara Lynn: It’s kind of like with social media, when it took off, like you said, in the mid-2000s, people didn’t understand the security implications and risks with it. It was just a frenzy to use it and to get it out there.
Daryan Dehghanpisheh: Or block it. They were like, ‘well, we’ll block it.’ So, what they did was they put a wall up thinking, huh, nobody’s going to go. Nobody’s going to get up this wall. And all the workers were like, yeah, you’re right. I’m just going to walk around it.
Adam Dennison: Yeah, dig under it. Well ... I want to be respectful of time. D, thank you so much for taking the time to join Samara and me on Ready, Set, Midmarket! this afternoon. Really appreciate that. Enjoy the rest of your week. We are at hump day right now, so the end is near. Samara, thank you again. And everyone, I hope you enjoyed the episode.
Daryan Dehghanpisheh: Samara, thank you. Adam, thank you. And to your listeners and viewers, thank you.