‘Shadow AI’ Raises Concerns Among Cybersecurity Professionals
However, a new survey shows security pros may be among the most active shadow AI users within their organizations.
Security firm Mindgard conducted an eye-opening survey on the use of “shadow AI” across small, midsize and large enterprises.
The unmonitored use of AI tools is a growing concern, according to the more than 500 cybersecurity professionals Mindgard surveyed at the RSA Conference 2025 in San Francisco and at InfoSecurity Europe 2025 in London.
The survey also revealed that nearly 25 percent of those professionals use AI tools such as ChatGPT or GitHub Copilot within their organizations without any formal oversight.
A recent report from research firm Gartner also warned of the risks posed by shadow AI.
“CISOs must define a robust program of education, monitoring and filtering to encourage innovation while mitigating shadow AI risks,” Gartner said in a summary of its report.
Shadow AI use can put sensitive data at risk when unsanctioned AI tools gain access to that data.
Mindgard’s report offered other revelations about shadow AI in the enterprise:
- 56 percent said AI is used by employees in their organization without formal approval
- 87 percent said they themselves are using AI in their daily work
- 32 percent said their organization has formal AI controls in place
- 39 percent said no one in their organization owns AI risk
- 12 percent said they have no visibility into what is being entered into AI systems within their organization
- 20 percent said they have used AI with their organization’s regulated or sensitive data
“AI is already embedded in enterprise workflows, including within cybersecurity, and it’s accelerating faster than most organizations can govern it. Shadow AI isn’t a future risk. It’s happening now, often without leadership awareness, policy controls or accountability. Gaining visibility is a critical first step, but it’s not enough. Organizations need clear ownership, enforced policies and coordinated governance across security, legal, compliance and executive teams. Establishing a dedicated AI governance function is not a nice-to-have. It is a requirement for safely scaling AI and realizing its full potential,” Garraghan said in the statement.
Access Mindgard’s full report here.