
AI is moving faster than governance. APRA and ASIC agree.

Artificial intelligence is no longer a future-state risk. It is already reshaping how organisations operate, compete, serve customers and defend themselves.

Within the same week, APRA and ASIC each released guidance on AI. Their focus differs, but the message is consistent: AI is moving faster than many governance, risk, assurance and resilience practices can support.

APRA is focused on AI adoption. Are organisations using AI in a controlled, accountable and resilient way?

ASIC is focused on AI-enabled external threats. Are organisations ready for attackers that can move faster, scale faster and exploit weaknesses faster?

Together, the two regulators are describing the same risk coin. One side is how organisations use AI. The other is how AI changes the threat environment around them. Both demand stronger governance, clearer accountability and evidence that controls are working.

For a practical guide to governing AI risk, download Protecht’s Managing AI Risks eBook:  


APRA’s focus: adopt AI, but do it with discipline       

APRA recognises the opportunity for AI to improve productivity, efficiency and customer experience. It may also become a source of strategic disadvantage for organisations that fail to embrace it.1

But APRA’s concern is that adoption is moving faster than governance. It observed that many boards may be promoting AI adoption strategies while falling short on the technical literacy needed to challenge AI-related risks effectively. Supplier concentration is also a key issue. AI vendors may support critical operations with limited business continuity or exit strategies in place – key requirements under CPS 230. Notably, APRA observed that risk and audit teams may not have sufficient expertise to provide assurance over AI.

APRA’s expectations point to a more mature AI governance model. Boards need sufficient knowledge to provide effective oversight. AI implementation should align with risk appetite and tolerance settings. Monitoring and reporting should include third-party dependencies and clear triggers linked to resilience objectives. Risk and audit teams need to be upskilled.

AI governance should not be a policy document sitting on the shelf. It needs to show up in practical management disciplines:

  • AI use case inventories
  • Lifecycle ownership from design to decommissioning
  • Clear accountability for high-risk decisions
  • Model monitoring and post-deployment review
  • Supplier and fourth-party dependency mapping
  • Continuous assurance over model behaviour, bias, drift, control failure and customer impact.

APRA’s message is not “don’t use AI.” It is “use AI with eyes open, controls working and accountabilities clear.”

ASIC’s focus: the threat environment has changed    

ASIC’s letter is more urgent in tone. It warns that frontier AI models are accelerating cyber capability, increasing the speed and scale of attacks, and enabling forms of exploitation that were previously out of reach for many actors.2

ASIC’s call to action is not to chase the latest shiny cyber tool. It is to return to first principles: identify critical assets, strengthen core controls, reduce attack surfaces, review access privileges, patch promptly, maintain incident response plans, manage third-party risks and use AI defensively where appropriate.

That is a very important message. The AI-enabled threat environment does not make the basics obsolete. It makes the basics more important.

A weak access control, an unpatched vulnerability, an exposed service, an untested incident response plan or an unclear third-party dependency can become much more dangerous when attackers can use AI to discover, chain and exploit weaknesses at speed.

ASIC also places strong emphasis on governance and accountability. Boards and senior executives should understand their organisation’s cyber resilience position, ask the right questions and be able to evidence the basis for their assurance. ASIC specifically calls out the need for meaningful reporting on end-to-end control effectiveness, not just activity.

This is where cyber resilience becomes an enterprise risk issue. A dashboard showing patching activity, phishing training completion or number of vulnerabilities closed is not enough. Boards need to know whether critical controls are designed effectively, operating effectively and resilient under pressure.

The real convergence: assurance must catch up       

The strongest common theme across both regulators is assurance.

APRA says assurance practices are not keeping pace with AI adoption, and that point-in-time, sample-based assurance is poorly suited to probabilistic models that learn, adapt or degrade over time. It expects integrated assurance across cyber security, data governance, model performance risk, operational resilience, privacy and conduct risks.

ASIC says governance should not rely only on assurances. It should be supported by evidence: test results, audit findings, lessons from incidents and independent validation.

AI risk management cannot be based on confidence, vendor claims or policy intent. It needs evidence.

For risk teams, this creates a practical challenge. Many organisations already have frameworks, policies, controls, risk registers, cyber standards, third-party processes and incident playbooks. But these often sit in different systems, teams and reporting lines, while AI cuts across all of them.

A customer-facing AI use case may involve technology risk, model risk, privacy risk, conduct risk, cyber risk, outsourcing risk, operational resilience and compliance obligations. If those disciplines are managed separately, no one may have a complete view of the risk.

The future state is not more siloed governance. It is connected governance.

What boards should be asking now            

The combined APRA and ASIC message gives boards and executives a practical set of questions:

  • Where are we using AI today?
  • Which AI use cases matter most?
  • What is our risk appetite for AI?
  • Can we evidence control effectiveness?
  • Are our cyber controls ready for AI-enabled threats?
  • Do we understand our AI supply chain?
  • Are continuity arrangements credible?

The answers to these questions may be uncomfortable, but they highlight the gaps that need to be closed.

What this means for risk and compliance teams          

For risk and compliance professionals, the regulatory message is a call to move from awareness to operationalisation. AI needs to be embedded into the everyday machinery of risk management:

  • Risk and control self-assessments
  • Control testing and assurance
  • Third-party risk assessments
  • Cyber resilience reviews
  • Incident and breach management
  • Business continuity planning
  • Compliance obligations
  • Operational resilience scenarios
  • Board and executive reporting.

This is where GRC platforms have an important role to play. They should help organisations connect AI use cases to risks, controls, obligations, third parties, incidents, assurance findings and resilience impacts – creating a living view of AI risk across the enterprise.

How Protecht helps    

APRA and ASIC are not asking organisations to panic. They are asking them to be disciplined.

That discipline needs to apply to both sides of the AI risk coin: how AI is adopted inside the organisation, and how the organisation responds to AI-enabled threats outside it. In both cases, confidence depends on connected governance, clear accountability and evidence that controls are working.

Protecht helps organisations move from fragmented oversight to integrated risk management. It brings risks, controls, obligations, third parties, incidents, assurance findings and resilience impacts into a single, connected platform, giving boards, executives and risk teams a clearer view of what matters and what needs action.

Protecht supports AI risk management through:

  • AI governance to record AI inventories, use cases, ownership, lifecycle stages and risk assessments
  • Cyber risk management to strengthen oversight of controls, frameworks, testing, issues and assurance
  • Operational resilience to map critical operations, suppliers, business continuity arrangements and resilience impacts
  • Vendor risk management to understand supplier and fourth-party dependencies linked to AI and critical services
  • Controls management to test, evidence and report on control effectiveness
  • Protecht Academy to address AI governance and risk capability gaps through practical education.

The organisations that succeed with AI will not be those that slow everything down in the name of control. They will be those that build confidence through governance, clarity through accountability and resilience through evidence.

Request a demo to see how Protecht helps you connect AI risk, cyber risk, operational resilience, third-party risk and assurance in one scalable platform:


About the author

Michael is Protecht's Head of Risk Research and Knowledge. He is passionate about the field of risk management and related disciplines, with a focus on helping organisations succeed using a ‘decisions eyes wide open’ approach. His experience includes managing risk functions, assurance programs, policy management, corporate insurance, and compliance. He is a Certified Practicing Risk Manager whose curiosity drives his approach to challenge the status quo and look for innovative solutions.