95 Percent Of Enterprises Hit With AI-Related Incident: Survey
For most of the companies, those incidents resulted in ‘substantial’ to ‘extremely severe’ consequences for their organization.
Ninety-five percent of enterprise executives report experiencing a problematic incident related to the use of AI in their organization.
Infosys’ Knowledge Institute team surveyed over 1,500 business executives and senior decision-makers across Australia, France, Germany, the U.K., the U.S., and New Zealand for its report, “Responsible Enterprise AI in the Agentic Era.”
An overarching finding of the report is that while organizations are eagerly adopting AI, many are not implementing it effectively, leading to negative consequences for their organizations.
[RELATED: More Organizations Adopting GenAI, But Hurdles Remain: Survey]
Three-quarters of the executives whose organizations experienced an AI-related incident said the resulting damage was “substantial.” Thirty-nine percent said the damage was “severe” or “extremely severe.”
AI-Related Incident Aftermath
Survey respondents cited some of the fallout their organizations faced after an AI-related issue:
- Privacy violations
- Systemic failures
- Inaccurate or harmful predictions
- Ethical violations
- Lack of explainability
- Bias or discrimination
- Regulatory noncompliance
- Security breaches
However, the most common form of damage from an AI incident is financial, according to the report. Seventy-seven percent of those surveyed said an AI incident led to lost revenue or increased costs.
[RELATED: How Much Are Organizations Spending On AI? A New Report Sheds Some Light]
“The average company in our sample reported financial losses from enterprise AI incidents of about $800,000 over two years,” the report states.
Implementing AI Responsibly Is Key
Responsible AI (RAI) best practices can provide a sturdy framework upon which organizations can implement AI and lessen the chance of AI-related problems happening, according to one Infosys senior executive in a news release.
[RELATED: 10 AI Policy Templates You Can Use As A Framework]
“Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases, would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability. This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate. Companies should not discount the important role a centralized RAI office plays as enterprise AI scales, and new regulations come into force,” Balakrishna D.R., EVP, global services head, AI and industry verticals, Infosys, said about RAI practices.
While 78 percent of executives surveyed believe that RAI will add to revenue growth, only 2 percent of companies meet “key responsible AI standards,” according to the report.
“Today, enterprises are navigating a complex landscape where AI's promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of Responsible AI, there's a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era,” Jeff Kavanaugh, head of Infosys Knowledge Institute, Infosys, said in a news release.
The report found that most organizations were not meeting responsible trust and risk mitigation standards in their AI implementations. A mere 5 percent had an appropriate level of human oversight. Only 4 percent had adequate safety measures in place.
The news wasn’t all bleak. Among those surveyed, 83 percent of organizations are “validating AI models, generalizing them, and reducing bias.” Sixty-two percent are monitoring what impact their enterprise AI initiatives have on the environment.
Read Infosys’ full report here.