
You can’t look far without seeing promises of artificial intelligence being integrated into existing products and services, or the allure of efficiency and automation. Some might just be rebadging simple algorithms with a new name to ride the trend, while others show real promise. Your organisation may already have a position on the use of artificial intelligence internally. But what about your vendors?

In this blog we will cover:

  • Some of the risks of using artificial intelligence
  • How those risks translate into your extended enterprise
  • What you can do about it

Subscribe to our Knowledge Hub to make sure you catch the rest of our Vendor Risk Management blog series:

Subscribe now

The risks of using AI

The risks of using AI are many and varied, and will depend on the type of AI and how it is used. Let’s focus on three key areas of risk before we explore how they may also apply to your vendors.

Information security

The biggest concern for many is information security, covering both personal information and confidential commercial data. AI models need to be fed data in order to do their thing – and in the fine print, that data might then be used to train the model. Samsung provides a real-world case study, with three instances of confidential information being shared with ChatGPT: submitting confidential source code to identify errors, requesting the optimisation of source code, and sharing recordings of a confidential meeting to obtain a summary.

The fear is that once a model is trained on that data, the right prompt or interface will be able to uncover that sensitive information. While the jury is still out on how practical such extraction really is, once the data has been handed over, you can’t take it back.

AI bias

For some AI implementations, data leaks may be less relevant – they might be developed in-house and remain sufficiently segregated. However, bias is a concern in almost any AI application. If AI is trained on data that contains inherent bias, that bias may become ‘baked into’ the outputs of the model. While discriminating based on race, gender and other factors may be prohibited, these characteristics can still be inferred from the data provided – especially if the bias was already present in historical decisions. As an example, Amazon attempted to implement AI to streamline recruitment, resulting in a bias against women[1]. A more recent study on generative large language models indicates that different models have varying political leanings[2].
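
To make that proxy problem concrete, here is a minimal sketch – using synthetic data and scikit-learn, neither of which features in the examples above – showing how a ‘removed’ protected attribute can often be recovered from seemingly neutral features. This is exactly how historical bias can survive into a model’s outputs even when the sensitive field is never provided.

```python
# Toy illustration (not from the article): even after removing a protected
# attribute from a dataset, a model can often recover it from correlated
# "proxy" features, so bias in the training data can persist.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical protected attribute (e.g. a demographic group), never given to the model directly.
group = rng.integers(0, 2, size=n)

# Proxy features that happen to correlate with the group
# (think postcode, hobbies on a CV, school attended).
proxy_1 = group * 0.8 + rng.normal(0, 0.5, size=n)
proxy_2 = group * 0.6 + rng.normal(0, 0.7, size=n)
neutral = rng.normal(0, 1, size=n)  # a genuinely uninformative feature

X = np.column_stack([proxy_1, proxy_2, neutral])
X_train, X_test, y_train, y_test = train_test_split(X, group, random_state=0)

clf = LogisticRegression().fit(X_train, y_train)
print(f"Accuracy recovering the 'removed' attribute: {clf.score(X_test, y_test):.2f}")
# Typically well above the 0.50 you would expect if the attribute were truly hidden.
```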

You can read more about the unintended consequences of bias and algorithms in our IT Risk Management eBook.

Security threats to the AI model

In contrast to inadvertently sharing confidential data with an external AI, there are a range of threats to the security of the AI models themselves. Cyber attackers may gain access to the model itself, or find ways to influence its outputs. One specific example is data poisoning, where the training data is manipulated in order to influence the model’s output.
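
As a rough illustration only – a toy dataset and scikit-learn, not any particular production model – the sketch below shows label flipping, one simple form of data poisoning: an attacker who can tamper with a slice of the training data measurably degrades the resulting model.

```python
# Simplified sketch of label-flipping data poisoning on a toy classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=4_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Clean baseline model.
clean = LogisticRegression(max_iter=1_000).fit(X_train, y_train)

# An attacker flips the labels of 20% of the training rows.
rng = np.random.default_rng(1)
poisoned_labels = y_train.copy()
idx = rng.choice(len(poisoned_labels), size=len(poisoned_labels) // 5, replace=False)
poisoned_labels[idx] = 1 - poisoned_labels[idx]

poisoned = LogisticRegression(max_iter=1_000).fit(X_train, poisoned_labels)

print(f"Clean model accuracy:    {clean.score(X_test, y_test):.2f}")
print(f"Poisoned model accuracy: {poisoned.score(X_test, y_test):.2f}")
```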

The use of APIs also opens new doors for attackers, who may be able to invisibly modify prompts, or capture the prompts and outputs that an individual is sending and receiving.
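
The sketch below is deliberately simplified and entirely hypothetical – none of the function names correspond to a real vendor API – but it shows why an intermediary layer in the API path is such an attractive target: it sees, and can silently rewrite, everything passing through it.

```python
# Hypothetical sketch: any intermediary sitting between a user and an AI
# service can read, log, or silently rewrite prompts and responses.
captured_prompts: list[str] = []
captured_responses: list[str] = []

def call_model(prompt: str) -> str:
    """Stand-in for a real generative AI API call (illustrative only)."""
    return f"[model response to: {prompt!r}]"

def compromised_wrapper(prompt: str) -> str:
    # 1. Capture: the prompt, and anything pasted into it, is now in the attacker's hands.
    captured_prompts.append(prompt)

    # 2. Modify: the prompt can be invisibly altered before it reaches the model.
    tampered = prompt + " (silently injected instruction)"

    response = call_model(tampered)

    # 3. Capture the output as well.
    captured_responses.append(response)
    return response

print(compromised_wrapper("Summarise this customer complaint: ..."))
```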

The threat landscape is always evolving, and these are just two of the attack types that Google recently categorised[3]. You would hope that the providers of the large-scale AI models you use are aware of and addressing these threats, but if you develop your own internal AI models, you need to address them yourself.

The use of AI in your extended enterprise

Some of the above may already be lurking as AI risks in your vendor ecosystem. Let’s paint a picture.

Imagine you operate a financial services business. You outsource your contact centre operations to an overseas vendor, who manages most customer interactions on your behalf. An entrepreneurial team leader wants to improve the quality and efficiency of their written customer interactions. Taking initiative, they start feeding written interactions – including customer personal information – to a generative AI.

Perhaps they go further and start developing their own AI tools, building on open-source AI projects. The security of the projects they’ve built upon might be particularly low, exposing the entire data set[4].

How confident are you that scenarios like these aren’t happening in your extended enterprise? What gives you that confidence?

These scenarios highlight a challenge with managing data leaks to external AIs: many of these tools can be used by individuals in an organisation without ever going through a vendor or supplier assessment process. Generative AI tools can fly under the radar as ‘shadow IT’, whether in your own organisation or your vendors’.
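
One practical, if imperfect, starting point is your own telemetry. The sketch below is illustrative only – the log format and domain list are assumptions, not a definitive control – but it shows how outbound proxy logs could be screened for traffic to known generative AI services to surface potential shadow AI use, at least within your own organisation.

```python
# Rough sketch: screen outbound proxy logs for traffic to known generative AI
# services. The CSV columns and domain list here are assumptions for illustration.
import csv
from collections import Counter

# Illustrative list only; in practice this would be a maintained, much longer list.
KNOWN_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

def shadow_ai_summary(proxy_log_path: str) -> Counter:
    """Count requests per (user, domain) for domains on the watch list.

    Assumes a CSV proxy log with 'user' and 'domain' columns.
    """
    hits: Counter = Counter()
    with open(proxy_log_path, newline="") as f:
        for row in csv.DictReader(f):
            if row["domain"] in KNOWN_AI_DOMAINS:
                hits[(row["user"], row["domain"])] += 1
    return hits

# Example usage: shadow_ai_summary("outbound_proxy.csv").most_common(10)
```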

What you can do about it

Of course, there are many benefits from using AI. The potential rewards and risks need to be weighed, and that extends to your vendors.

Depending on your assessment, you might already have a position on the use of artificial intelligence in your own organisation – either guiding responsible use or banning it outright. Research from Blackberry indicates that 75% of organisations are looking to ban generative AI tools, with data security and privacy the biggest concerns[5]. Given the proliferation of AI tools and the advantages they can provide, this may be a challenge to maintain over the long term.

Here are some key considerations in governing AI risk in your organisation:

AI policy

If it isn’t already in place, establish your own organisation’s policy on the responsible use of AI, and the risks you are willing to accept. Some key considerations:

  • Decide whether you ban some types of AI altogether; if you do, develop a clear plan for how the ban will be practically communicated, monitored and controlled
  • Develop an approval process for the use of specific AI tools
  • Develop guidelines for responsible use, which may distinguish between generative AI and other types of AI, and between externally provided tools and those developed and managed internally

Integrated vendor risk management

Integrate AI-related risk assessments and due diligence questionnaires into your vendor management processes, tailored to the data shared with each vendor. This might include (see the sketch after this list):

  • The vendor’s internal policy on the use of AI tools
  • Governance arrangements over their own internally developed models
  • Assessing risks posed by their existing approaches to AI
  • Monitoring and review of the vendor to assess any changes in their use of AI (and related risks) over time
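
As a rough sketch of what ‘integrated’ can look like in practice – the format and questions below are illustrative, not a prescribed standard – AI-related due diligence items can be kept as structured data, so they can be tiered by the sensitivity of the data shared with each vendor and reused across assessments.

```python
# Illustrative only: AI-related due diligence questions as structured data.
from dataclasses import dataclass

@dataclass
class DueDiligenceItem:
    question: str
    applies_when: str   # which vendors the question is relevant to
    evidence: str       # what you would ask the vendor to provide

AI_QUESTIONNAIRE = [
    DueDiligenceItem(
        question="Do you have an internal policy governing staff use of AI tools?",
        applies_when="all vendors",
        evidence="policy document and date of last review",
    ),
    DueDiligenceItem(
        question="What governance applies to models you develop and operate internally?",
        applies_when="vendors building their own AI",
        evidence="model inventory, approval and testing records",
    ),
    DueDiligenceItem(
        question="Is our data used to train or fine-tune any AI model, and can we opt out?",
        applies_when="vendors processing our personal or confidential data",
        evidence="contract clauses and technical controls",
    ),
]

for item in AI_QUESTIONNAIRE:
    print(f"- {item.question} ({item.applies_when})")
```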

Legal advice

Finally, consider working with in-house or external legal teams to establish standard contract clauses that prevent your data from being used to train AI models.

 

To find out more about the risks that AI and algorithms pose to your business, including our governance checklist on algorithms, download our IT Risk Management eBook. You may also want to download our Vendor Risk Management eBook for a detailed step-by-step guide on how to build an effective vendor risk management program.

Subscribe to our Knowledge Hub to make sure you catch the rest of our Vendor Risk Management blog series:

Subscribe now

 

[1] Reuters

[2] Technology Review

[3] Dark Reading

[4] Dark Reading

[5] Blackberry

About the author

Michael is passionate about the field of risk management and related disciplines, with a focus on helping organisations succeed using a ‘decisions eyes wide open’ approach. His experience includes managing risk functions, assurance programs, policy management, corporate insurance, and compliance. He is a Certified Practicing Risk Manager whose curiosity drives his approach to challenge the status quo and look for innovative solutions.