5 Predictions About Agentic AI From Gartner

Agentic AI is here and will continue its advance into work and daily life, experts say. But there are security concerns.

Agentic AI is to 2025 what GenAI was to 2024. It’s the latest artificial intelligence trend, and organizations are adopting agentic AI as part of their digital transformation goals.

According to the 2025 Connectivity Benchmark Report by MuleSoft and Deloitte Digital, 93 percent of IT leaders say they plan to introduce autonomous AI agents within the next two years, and nearly half have already done so.

Now, market research firm Gartner has released a new report predicting agentic AI adoption trends that extend beyond 2025.

5 Predictions On Agentic AI From Gartner

Gartner also outlined why there is such a frenzy for agentic AI: “Agentic AI addresses the limitation of traditional AI and GenAI, which tend to be passive and request-driven. Agentic AI, in contrast, applies AI inference to enable adaptive systems capable of independent action and decision making. The potential of agentic AI to transform the way enterprises work and disrupt the technology status quo drives this trend.”

While there is clear appetite to adopt agentic AI at work and in daily life, some experts have cautioned about its security implications.

Earlier this year, MES Computing spoke with Ian Swanson, CEO of Protect AI, a company that provides AI and machine learning security tools.

Swanson described one key risk with agentic AI: an agent making an uncontrolled or unexpected decision that leads to a security failure. “[For] example, an AI agent is carrying out automated incident response tasks and it incorrectly shuts down a critical production server, and it causes downtime. So the AI thought something wrong was happening, but it made an unexpected decision, and maybe it shut down something that was super critical,” he said.

“Now, the mitigation there is you can have [a] human in the loop, but you can also have tools like what Protect AI offers that can monitor these, that can put checks, that could put balances in there to make sure that [AI is] acting appropriately. So again, the risk was uncontrolled or unexpected decisions by the AI, and we have to figure out how we best mitigate that so it doesn’t do something that it should not do.”