Global: Australian Regulator Flags Governance Gaps as AI Risks Rise in Financial Sector

The Australian Prudential Regulation Authority has warned that financial institutions must significantly strengthen their approach to managing artificial intelligence risks, as rapid adoption outpaces existing governance and control frameworks.

In a communication to banks, insurers, and superannuation trustees, APRA highlighted growing concerns that current practices around governance, risk management, assurance, and operational resilience are not evolving quickly enough to match the scale and complexity of AI deployment.

The warning follows a targeted supervisory review conducted last year, which assessed how institutions are implementing and overseeing AI systems. The findings indicate that while AI adoption is accelerating, it is simultaneously introducing new financial, operational, and cybersecurity vulnerabilities that organisations are not fully equipped to manage.

APRA noted that existing information security frameworks are struggling to keep pace, particularly as more advanced systems are introduced. The regulator pointed to emerging frontier models, including Claude Mythos by Anthropic, warning that such technologies could amplify cyber risks by increasing the speed, scale, and sophistication of attacks.

A key concern identified in the review is the gap in board-level oversight. While many boards recognise AI’s strategic importance, APRA found that technical understanding remains limited, reducing their ability to effectively challenge management decisions and oversee associated risks.

The regulator also highlighted concentration risks, with some firms heavily reliant on a single technology provider for multiple AI use cases, alongside inadequate contingency planning. Additionally, AI capabilities are increasingly embedded within broader software systems, limiting transparency around how models are trained, updated, and governed.

These complexities, APRA said, are compounded by fragmented assurance and change management processes, which may not provide sufficient oversight for AI-driven operations.

Commenting on the findings, APRA Executive Board Member Therese McCarthy Hockey emphasised the urgency of addressing these gaps.

“We cannot be blind to the risks of such powerful technology—whether in our own hands or those with malicious intent,” she said. “While AI adoption is accelerating, the systems required to safely govern its use are not keeping pace. The ability to detect and respond to vulnerabilities must evolve just as rapidly.”

Although APRA is not introducing new regulatory requirements at this stage, it expects financial institutions to take proactive steps to close the gap between AI capabilities and their ability to monitor, control, and secure these systems.

The regulator’s message is clear: as AI becomes more deeply embedded in financial services, robust governance and risk management will be critical to maintaining system stability and trust.
