IBM urges robust AI governance as agents reshape banking sector

IBM has published a whitepaper addressing both the transformative potential and the associated risks of autonomous AI agents in the financial services sector.

The whitepaper, titled "Agentic AI in Financial Services: Opportunities, Risks, and Responsible Implementation", describes what it calls an "AI super cycle" currently driving technological advancement and investment across the global economy. This heightened pace of change, according to IBM, is fuelling business transformation initiatives aimed at improving growth and operational efficiency.

The document details how financial services organisations could benefit from the use of AI agents. These are described as sophisticated software entities capable of independently assessing situations, gathering and processing data, problem-solving, executing tasks, and adapting their actions based on learning from real-world interactions, all with minimal human involvement. Such capabilities are expected to eliminate traditional friction points in operations that previously required multiple human interventions, creating smoother experiences for customers.
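The whitepaper itself contains no code, but the observe-decide-act loop it describes can be illustrated with a minimal sketch. All names below (Agent, Task, observe, plan, act) and the escalation threshold are illustrative assumptions for this article, not material from IBM's paper.

```python
# Illustrative sketch only: a minimal observe-plan-act loop of the kind the
# whitepaper describes. Names and thresholds are assumptions, not IBM's design.
from dataclasses import dataclass, field


@dataclass
class Task:
    description: str
    risk_score: float  # 0.0 (routine) to 1.0 (high impact)


@dataclass
class Agent:
    escalation_threshold: float = 0.7  # above this, defer to a human reviewer
    history: list = field(default_factory=list)

    def observe(self, task: Task) -> dict:
        # Gather and process the data relevant to the task.
        return {"task": task.description, "risk": task.risk_score}

    def plan(self, context: dict) -> str:
        # Decide on an action; high-risk work is escalated rather than executed.
        if context["risk"] > self.escalation_threshold:
            return "escalate_to_human"
        return "execute_automatically"

    def act(self, decision: str, context: dict) -> None:
        # Execute the task (or hand off) and record the outcome for later learning.
        self.history.append((context["task"], decision))
        print(f"{context['task']}: {decision}")


if __name__ == "__main__":
    agent = Agent()
    for task in [Task("reconcile payments batch", 0.2),
                 Task("approve large credit exposure", 0.9)]:
        ctx = agent.observe(task)
        agent.act(agent.plan(ctx), ctx)
```

The escalation check is one simple way to express the "guardrail" idea discussed later in the paper: routine tasks run autonomously, while higher-impact decisions are routed back to a person.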

Richie Paul, Generative AI Strategy and Transformation Lead, commented on current trends in Australia, stating: "Australia's financial institutions are increasingly addressing demand for agentic AI as they evolve beyond automation toward systems capable of goal setting, decision-making, and real-time learning. Yet, the transformative potential of AI will only be fully realised when organisations can confidently delegate both routine and complex tasks to AI systems, freeing human talent to focus on strategic and higher value activities. This delegation capability represents the critical inflection point for AI value creation."

The whitepaper explores the unique risks presented by autonomous AI systems. Their self-directed nature, IBM notes, can exacerbate existing challenges with AI implementation and introduce new complexities. The company emphasises a holistic approach to building trust in such systems, incorporating organisational culture, governance protocols, tools, and comprehensive AI engineering frameworks.

Michal Chorev, IBM Consulting AI Governance Lead, said: "Building trust in AI agents is non-negotiable. This necessitates implementing organisational and technical guardrails across diverse use cases and deploying real-time monitoring systems to ensure AI actions remain safe, reliable, and aligned with organisational objectives."

Chorev further pointed out the need for ongoing development of governance: "Current AI governance frameworks must evolve to address the amplified risk associated with agentic AI. Critically, those leaders accountable for AI outcomes need both the authority and resources to effectively perform their role."

The paper advocates for a "compliance by design" strategy, urging organisations to develop and integrate risk mitigation measures alongside the design and deployment of AI systems rather than as afterthoughts. This approach is said to align technological advancement with the organisation's risk tolerance from the outset, allowing for better validation of use cases prior to significant investment.
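As a rough illustration of what "compliance by design" could look like in practice, the sketch below gates a proposed AI use case against a declared risk appetite before any build work begins. The field names, scoring rules, and threshold are hypothetical assumptions for illustration, not prescriptions from the whitepaper.

```python
# Hypothetical "compliance by design" gate: a proposed AI use case is validated
# against the organisation's risk appetite before significant investment.
# Field names, scoring, and thresholds are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class UseCase:
    name: str
    handles_customer_data: bool
    fully_autonomous: bool
    estimated_impact: float  # 0.0 (low) to 1.0 (high)


RISK_APPETITE = 0.6  # illustrative organisational threshold


def residual_risk(uc: UseCase) -> float:
    # Toy scoring: autonomy and customer data both raise residual risk.
    score = uc.estimated_impact
    if uc.handles_customer_data:
        score += 0.2
    if uc.fully_autonomous:
        score += 0.2
    return min(score, 1.0)


def validate(uc: UseCase) -> str:
    risk = residual_risk(uc)
    if risk <= RISK_APPETITE:
        return f"{uc.name}: approved for design (risk {risk:.2f})"
    return f"{uc.name}: requires added controls before build (risk {risk:.2f})"


if __name__ == "__main__":
    print(validate(UseCase("chat-based loan pre-screening", True, True, 0.5)))
    print(validate(UseCase("internal document summarisation", False, False, 0.2)))
```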

Joe Royle, IBM Consulting AI Strategy Lead, commented on this proactive stance: "Our financial services clients are actively working to maximise returns on their AI investments and partnerships. As they innovate at accelerated speed to transform both customer and employee experiences, establishing effective governance and controls becomes increasingly vital to mitigate associated risks and support successful transformation."

The report also outlines several strategic considerations for financial institutions, including the need to shift towards adaptive technology services, where AI agents move organisations from reactive solutions to systems that personalise services and anticipate customer needs. IBM urges a phased and measured adoption of agentic AI, highlighting the importance of risk assessment, robust governance, workforce development, and continuous system oversight.

Effective management of agentic AI, according to the research, requires coordinated efforts across organisational units, supported by transparent governance and open communication lines. Understanding and managing these new risks is also highlighted as crucial, given that the deployment of agentic AI represents a significant departure from previous technological paradigms.

The whitepaper further stresses the importance of integrating compliance considerations early in the process, validating AI use cases against organisational risk appetite, and rolling out comprehensive literacy programmes. These educational efforts should extend beyond technical skills to incorporate ethical, philosophical, and social perspectives, enabling organisations to responsibly design and manage AI systems and mitigate potential biases.

David Ellis, IBM Consulting Managing Partner, summarised the findings: "Agentic AI has emerged as a core driver of innovation and banking transformation. While presenting exciting opportunities for the financial services sector, it also introduces unique challenges that must be addressed proactively. Through strategic planning, robust risk management frameworks, clear control mechanisms, effective supervision and unwavering commitment to responsible AI practices, financial institutions can confidently and safely navigate this new AI frontier."
