Bengaluru: An Infosys Knowledge Institute (IKI) report recently unveiled critical insights into the state of responsible AI (RAI) implementation across enterprises, particularly with the advent of agentic AI.
The report, titled Responsible Enterprise AI in the Agentic Era, surveyed over 1,500 business executives and interviewed 40 senior decision-makers across Australia, France, Germany, New Zealand, the UK, and the US. It found that while 78 percent of companies see RAI as a business growth driver, only 2 percent have adequate RAI controls in place to safeguard against reputational risk and financial losses.
The report analyzed the effects of risks from poorly implemented AI, including privacy violations, ethical violations, bias or discrimination, regulatory non-compliance, and inaccurate or harmful predictions.
It found that 77 percent of organizations reported financial loss, and 53 percent suffered reputational damage, from such AI-related incidents.
The report finds that the risks of AI are widespread and can be severe. Key findings include:
- 95 percent of C-suite and director-level executives report AI-related incidents in the past two years.
- 39 percent characterize the damage experienced from such AI issues as “severe” or “extremely severe”.
- 86 percent of executives aware of agentic AI believe it will introduce new risks and compliance issues.
- Responsible AI (RAI) capability is patchy and inefficient at most enterprises.
- Only 2 percent of companies (termed “RAI leaders”) met the full standards set in the Infosys RAI capability benchmark, termed “RAISE BAR”, with 15 percent (“RAI followers”) meeting three-quarters of the standards.
- The “RAI leader” cohort experienced 39 percent lower financial losses and 18 percent lower severity from AI incidents.
- Leaders do several things better to achieve these results, including developing improved AI explainability, proactively evaluating and mitigating bias, rigorously testing and validating AI initiatives, and maintaining a clear incident response plan.
Executives view RAI as “growth driver”
- 78 percent of senior leaders see RAI as aiding their revenue growth and 83 percent say that future AI regulations would boost, rather than inhibit, the number of future AI initiatives.
- However, on average, companies believe they are underinvesting in RAI by 30 percent.
- With the scale of enterprise AI adoption far outpacing readiness, companies must urgently shift from treating RAI as a reactive compliance obligation to embracing it proactively as a strategic advantage. To help organizations build scalable, trusted AI systems that fuel growth while mitigating risk, Infosys recommends the following actions:
- Learn from leaders: Study the practices of high-maturity RAI organizations, which have already faced diverse incident types and developed robust governance.
- Blend product agility with platform governance: Combine decentralized product innovation with centralized RAI guardrails and oversight.
- Embed RAI guardrails into secure AI platforms: Use platform-based environments that enable AI agents to operate within preapproved data and systems.
- Establish a proactive RAI office: Create a centralized function to monitor risk, set policy, and scale governance with tools like Infosys’ AI3S (Scan, Shield, Steer).
The report quotes Balakrishna D.R., Executive Vice-President and Global Services Head, AI and Industry Verticals, at Infosys as saying: “Drawing from our extensive experience working with clients on their AI journeys, we have seen firsthand how delivering more value from enterprise AI use cases would require enterprises to first establish a responsible foundation built on trust, risk mitigation, data governance, and sustainability. This also means emphasizing ethical, unbiased, safe, and transparent model development. To realize the promise of this technology in the agentic AI future, leaders should strategically focus on platform and product-centric enablement, and proactive vigilance of their data estate. Companies should not discount the important role a centralized RAI office plays as enterprise AI scales, and new regulations come into force.”
Jeff Kavanaugh, Head of the Infosys Knowledge Institute, is quoted in the report as saying: “Today, enterprises are navigating a complex landscape where AI’s promise of growth is accompanied by significant operational and ethical risks. Our research clearly shows that while many are recognizing the importance of Responsible AI, there’s a substantial gap in practical implementation. Companies that prioritize robust, embedded RAI safeguards will not only mitigate risks and potentially reduce financial losses but also unlock new revenue streams and thrive as we transition into the transformative agentic AI era.”