
U.S. Treasury on AI risk management: A good set of recommendations.

On March 27, 2024, the U.S. Department of the Treasury released a report on managing artificial intelligence (AI)-related risk in the financial services sector, prompted by an Executive Order from President Biden. The report's findings, informed by interviews with industry stakeholders across financial services, add up to a comprehensive set of recommendations.

The good news is that you most likely have the enterprise risk management capabilities to meet them already.

In this blog, we will cover:

  • Observations about the AI risks that financial services organizations, banks, and credit unions face
  • How interviewees for the report are responding to these risks
  • Leveraging existing enterprise risk management and model risk management capabilities

 

Protecht’s Information technology risk management eBook features a comprehensive section on AI and IT risk management. Download it for free:

Find out more

Using AI to monitor and manage cybersecurity and fraud risk.

The report is clear that it considers AI broadly, with generative AI as a subset. Most financial institutions have been using AI tools as part of their cybersecurity or fraud programs for some time. Of course, maturity varies across the sector, with ongoing uplift in capability.

Of note is that many institutions use AI models that are built by third parties – or even by fourth parties. For example, an organization might specialize in cybersecurity, but outsource the build of AI models. These tools may then be tuned with the bank’s in-house data before deployment.

Institutions are taking a cautious approach to incorporating generative AI into business operations. While the report doesn't call out cybersecurity or fraud specifically, this caution is implied by its commentary on limited adoption for activities that require high levels of assurance, and it aligns with the Executive Order's requirement to minimize risk in AI deployments.

Dealing with AI threats.

Proactive use of AI is one side of the coin; the report's third section turns to threats against the organization. It covers two quite different types of threat:

  • Threat actors leveraging AI to conduct cyberattacks or fraud
  • Threat actors attacking the organization’s AI systems

The first applies equally to all organizations, while the second scales with the organization's internal adoption of AI.

No doubt you’ve been on the receiving end of countless phishing attempts. The use of AI is making these social engineering attempts harder to spot, and generative AI can help tailor messages to individual targets, making them more authentic while also allowing for scale.

The use of AI by threat actors is not a new risk in and of itself; it changes the way that existing cybersecurity, fraud, or disruption risks can occur, and perhaps most importantly the speed at which they can escalate. In particular, AI and automation may allow attackers to identify and exploit vulnerabilities more quickly.

Attacks on AI systems are more nuanced. If you (or your third parties) are implementing AI systems, you need assurance that they will achieve the expected outcomes and maintain a high level of integrity. While we shouldn't blindly trust technology, many end users of AI – whether of specialized tools like tuned cyber threat models or of generative AI – have little or no knowledge of how a model arrives at its results. Unless something is obviously off, or they are trained to look for anomalies, they will likely trust the model.

Threat actors, including insiders, might modify a model's parameters directly to manipulate how it operates and the outcomes it produces. Another method of attack is data poisoning: modifying the data that the model is trained on. Depending on the threat actor's intentions, this can result in models that compromise personal privacy or safety, or that discreetly introduce harmful interactions and outputs.
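To make data poisoning concrete, here is a minimal sketch in Python using a toy nearest-centroid "fraud detector" trained on synthetic data. The data, model, and poisoning rate are purely illustrative assumptions, not taken from the Treasury report; the point is simply that flipping a share of training labels can quietly shift what the model classifies as legitimate.

```python
# Illustrative only: label-flipping data poisoning against a toy
# nearest-centroid "fraud detector" on synthetic data.
import numpy as np

rng = np.random.default_rng(42)

# Synthetic 2-D training data: legitimate (label 0) vs. fraud (label 1).
legit = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(200, 2))
fraud = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(200, 2))
X = np.vstack([legit, fraud])
y = np.array([0] * 200 + [1] * 200)

def train_centroids(X, y):
    """One centroid per class; prediction = nearest centroid."""
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

# The clean model flags a borderline transaction as fraud.
clean = train_centroids(X, y)
borderline = np.array([1.6, 1.6])
print("clean model:", predict(clean, borderline))        # -> 1 (fraud)

# Poisoning: an attacker relabels 40% of the fraud examples as
# legitimate, dragging the "legitimate" centroid toward the fraud cluster.
y_poisoned = y.copy()
fraud_idx = np.where(y == 1)[0]
y_poisoned[rng.choice(fraud_idx, size=80, replace=False)] = 0

poisoned = train_centroids(X, y_poisoned)
print("poisoned model:", predict(poisoned, borderline))  # -> 0 (legit)
```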

The use of third parties also comes with its own risks – not just malicious cyber threats that might compromise a vendor's model, but also the ways vendors might change their models over time. You may need additional assurance over how they govern their models.

Leveraging existing enterprise risk management capabilities.

The report next considers existing regulatory requirements that might cover the risks of AI. While these are regulatory in nature for financial services, they represent good practice for any organization:

  • Risk management
  • Model risk management
  • Technology risk management
  • Data management
  • Third-party risk management

While not specified, we interpret the first to be enterprise risk management, which ultimately encompasses the rest. Some risk types may require specific processes or requirements, but ultimately the goal is to manage risk to the enterprise.

To that end, organizations likely already have the processes required to manage these risks. This aligns with those interviewed for the report – they were embedding the management of AI risks into their ERM programs. Existing risk processes are sufficient; you just need to understand how existing risks are changing. Business lines are responsible for managing their risks and may require some education on AI and on how threat actors can use and exploit AI systems, but the existing approaches to risk mitigation and control management remain the same.

At Protecht, we adopt a process we call Risk in Motion to help bring critical risk information to the surface. This brings together related risk processes and components, including risk assessments, attestations, key risk indicators, controls assurance, incidents, and action management.
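As an illustration only – the class and field names below are hypothetical, not Protecht's actual data model – this sketch shows one way those components can be linked so that a breached key risk indicator or a weak control automatically surfaces the related risk.

```python
# Hypothetical sketch: linking controls, KRIs, and incidents to a risk
# so that critical risk information surfaces from any of them.
from dataclasses import dataclass, field

@dataclass
class Control:
    name: str
    last_assurance_result: str  # e.g. "effective", "needs improvement"

@dataclass
class KeyRiskIndicator:
    name: str
    value: float
    threshold: float

    def breached(self) -> bool:
        return self.value > self.threshold

@dataclass
class Risk:
    name: str
    controls: list = field(default_factory=list)
    kris: list = field(default_factory=list)
    incidents: list = field(default_factory=list)

    def needs_attention(self) -> bool:
        # Surface the risk if any KRI is breached or any control is weak.
        return any(k.breached() for k in self.kris) or any(
            c.last_assurance_result != "effective" for c in self.controls
        )

ai_phishing = Risk(
    name="AI-enabled phishing",
    controls=[Control("Email filtering", "effective")],
    kris=[KeyRiskIndicator("Phishing reports per month", 42, 30)],
    incidents=["Credential harvesting attempt reported by staff"],
)
print(ai_phishing.needs_attention())  # True: the KRI is breached
```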

Financial institutions will already perform model risk management, including governance, risk management, and reporting. For any model, it's important to review data quality, how bias is monitored and managed, and the explainability of the model. While this discipline has typically been applied to financial models, the same requirements apply to any AI application, including those for cybersecurity, fraud, or integration with products and services. This includes regular testing and validation of the models.
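As a concrete example of what regular testing and validation might involve, here is a minimal sketch of one recurring check covering accuracy and a simple bias metric (the gap in positive-prediction rates between two groups). The metrics, thresholds, and names are illustrative assumptions – an actual model risk policy would define its own.

```python
# Illustrative periodic validation check for a hypothetical binary model.
import numpy as np

def validation_report(y_true, y_pred, group):
    """Simple performance and fairness metrics for periodic review.

    y_true / y_pred: 0/1 arrays of actual vs. predicted outcomes.
    group: segmenting attribute used to check that positive-prediction
           (e.g. fraud-flag) rates don't diverge between groups.
    """
    accuracy = float(np.mean(y_true == y_pred))
    rate_a = float(np.mean(y_pred[group == "A"]))
    rate_b = float(np.mean(y_pred[group == "B"]))
    flag_rate_gap = abs(rate_a - rate_b)
    return {
        "accuracy": accuracy,
        "flag_rate_gap": flag_rate_gap,
        # Illustrative thresholds a model risk policy might set.
        "accuracy_ok": accuracy >= 0.90,
        "bias_ok": flag_rate_gap <= 0.05,
    }

# Toy monitoring data for one validation cycle.
y_true = np.array([1, 0, 1, 0, 1, 0, 0, 1])
y_pred = np.array([1, 0, 1, 0, 0, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
print(validation_report(y_true, y_pred, group))
```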

Conclusions and next steps for your organization.

If you aren’t already adopting them, here are some actions to consider:

  • Deliver general awareness training on AI, which will improve existing risk assessment processes
  • Review the existing risks you face that may have changed due to the evolving nature of AI
  • Integrate AI-related risks, and the controls to manage them, into your existing enterprise risk management systems, with commensurate controls assurance
  • Integrate your use of AI models into existing model risk management processes

 

Resources

If you want to know more about managing AI risks within an enterprise risk management framework, Protecht's Information technology risk management eBook provides a comprehensive review of how AI risk fits into broader information technology risk management for risk and IT professionals alike, including a risk management checklist for specific AI projects. Find out more and download the free eBook now.


 

For additional insights on the risks involved in AI, check out our recent webinar and follow-up Q&A on the topic.

About the author

Michael is passionate about the field of risk management and related disciplines, with a focus on helping organizations succeed using a ‘decisions eyes wide open’ approach. His experience includes managing risk functions, assurance programs, policy management, corporate insurance, and compliance. He is a Certified Practicing Risk Manager whose curiosity drives him to challenge the status quo and look for innovative solutions.