As world leaders gathered at Bletchley Park for the world's first AI Safety Summit, which focused on the safety risks posed by Artificial Intelligence (AI), the need to engage with AI responsibly has never been clearer.

In recent years, AI has been used in chatbots and in algorithms that provide personalised shopping recommendations. It has now gone further, with sophisticated tools such as OpenAI's ChatGPT and Dall-E, Microsoft's Bing Chat Enterprise and Google's Bard becoming widely available to individuals and businesses.

Potential benefits of incorporating AI into a business include more innovative thinking and brainstorming, efficiency gains, and the automation of routine matters.

However, it is important to appreciate that the use and development of AI can lead to a variety of risks, including the following:

  • AI is not always correct. This can be dangerous, as it can produce output that sounds entirely plausible yet is inaccurate, misleading or, in some cases, entirely fabricated. This risk was highlighted by the now infamous case of a US law firm being fined after six cases cited in a court filing turned out to have been completely invented by ChatGPT. This is why AI should never be relied upon as a substitute for professional medical, financial or legal advice.
  • The quality of AI output depends heavily on the quality of the input data provided, a problem often described as 'garbage in, garbage out'. For example:
    • If the input data is biased, the output produced by the AI could be discriminatory, and using such AI could contravene the Equality Act 2010.
    • ChatGPT (running the GPT-3.5 model) was trained only on data up to September 2021, so it cannot provide factual information about events after that date.
  • In the context of personal data, the UK GDPR (and, to an extent, the EU GDPR) may apply. Fines for breaches of the UK GDPR can be up to £17.5m or 4% of annual global turnover, whichever is higher (see the short worked example after this list).
  • Data leaks: certain AI tools use information provided by users to train and develop themselves further. There have been cases where sensitive and confidential information has been unintentionally shared with third parties as a result.
  • Additionally, it is not clear how intellectual property rules such as copyright apply to content produced by AI. 
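
To put the "whichever is higher" fine cap in concrete terms, the short Python sketch below compares the two limbs for a given turnover. It is purely illustrative, using only the figures quoted above, and is of course not legal advice:

    # Illustrative only: the UK GDPR maximum fine is the higher of a
    # £17.5m fixed cap and 4% of annual global turnover.
    FIXED_CAP_GBP = 17_500_000
    TURNOVER_RATE = 0.04

    def max_uk_gdpr_fine(annual_global_turnover_gbp: float) -> float:
        """Return the higher of the two statutory limbs."""
        return max(FIXED_CAP_GBP, TURNOVER_RATE * annual_global_turnover_gbp)

    # A £100m-turnover business is capped by the fixed limb (£17.5m > £4m);
    # a £1bn-turnover business is capped by the 4% limb (£40m).
    print(max_uk_gdpr_fine(100_000_000))    # 17500000
    print(max_uk_gdpr_fine(1_000_000_000))  # 40000000.0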

One way of engaging with AI responsibly and mitigating the risks is to create and implement an AI policy. Suggested questions to consider when preparing an AI policy include:

  • For what purposes is AI being used? How could it be used? Why is AI being used rather than alternative approaches?
    • Before AI is used for other purposes, should prior approval be obtained? If so, who should be responsible for giving this approval? 
  • What specific AI tool will be used?
    • Who developed that tool?
    • What are the terms of use for that AI tool?
  • If developing AI in-house, how are the following issues being approached:
    • the explainability of decisions made and outcomes produced by the AI;
    • the training data (input data) used; and
    • how the AI tool will be refined over time?
  • Who in the organisation is using AI?
    • As different teams in an organisation are likely to use AI for varied purposes, who would be the appropriate people with oversight?
    • Does the use of AI need to be recorded? If so, who will be responsible for keeping the records up to date? (A simple register along the lines of the sketch after this list is one option.)
  • What will output produced by AI be used for, and who will receive it? Will output be used internally and/or externally?
    • If AI output is being used internally, which internal stakeholders need to be aware of this?
    • If AI output is being used externally, how will this be made known to customers, suppliers and so on? Has legal advice been obtained as to whether output data can be used commercially?
  • What specific risks could flow from a particular use, or proposed use, of AI?
    • Can any of these risks be mitigated?
    • Which individuals need to have training in the use of AI? What should the training consist of?
    • Are there certain tools or purposes whose risks cannot be adequately mitigated and that should therefore be blacklisted entirely?
  • Who will have ongoing responsibility for AI strategy and governance and how will AI outcomes be audited? Is it appropriate to nominate an AI committee or officer?
  • How will complaints relating to the use of AI be dealt with?
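
On the record-keeping question above, one lightweight approach is a central register of approved AI tools and uses. The Python sketch below is purely illustrative: the structure, field names and example entry are assumptions for the purposes of this article, not a prescribed or legally required format:

    # A minimal, hypothetical AI use register reflecting some of the policy
    # questions above. All field names and values are illustrative assumptions.
    from dataclasses import dataclass, field

    @dataclass
    class AIToolRecord:
        tool: str                     # which specific AI tool is used
        developer: str                # who developed the tool
        approved_purposes: list[str]  # uses approved in advance
        approver: str                 # who is responsible for approval
        output_use: str               # internal and/or external use
        known_risks: list[str] = field(default_factory=list)
        next_review: str = ""         # keep the policy under regular review

    register = [
        AIToolRecord(
            tool="Example chatbot",   # hypothetical entry
            developer="Example vendor",
            approved_purposes=["first drafts of internal documents"],
            approver="AI officer",
            output_use="internal only",
            known_risks=["inaccurate output", "confidentiality of prompts"],
            next_review="2024-05-01",
        ),
    ]

    # A proposed new use can then be checked against the approved list.
    def purpose_approved(record: AIToolRecord, purpose: str) -> bool:
        return purpose in record.approved_purposes

Even a simple register like this makes oversight, auditing and complaint-handling easier, provided a named person is responsible for keeping it up to date.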

Whilst the suggested questions above provide a starting point for the matters to be considered when preparing an AI policy, they are by no means exhaustive. AI is a very fast-moving area and an organisation's use of AI may also change over time. It is therefore critical to keep an AI policy under regular review.

For further advice please contact B P Collins’ corporate and commercial team at enquiries@bpcollins.co.uk or call 01753 889995.

