The UK’s White Paper: A pro-innovation approach to AI regulation – a summary of the UK government’s proposals and a range of stakeholder responses.
A consultation was conducted between March and June 2023 by the UK’s Department for Science, Innovation and Technology and the Office for Artificial Intelligence on their white paper ‘A pro-innovation approach to AI regulation’ (White Paper).
As the responses to the White Paper will help shape the government’s approach to the regulation of AI, B P Collins’ corporate and commercial team summarises both the key proposals from the government as well as some of the published responses from stakeholders.
It also briefly compares the UK’s approach to AI to developments taking place in the EU and US.
The White Paper – a summary
The White Paper explains that AI is currently regulated by applying existing legal frameworks and that there is no AI-specific legislation. This can make compliance difficult for those using or developing AI, as many different laws and regulations may be relevant. For example, areas ranging from data protection and consumer protection law to employment and equality law could all apply to AI.
With this in mind, the White Paper sets out three key objectives for the UK’s AI regulatory framework, which the government hopes will ‘provide more clarity and encourage collaboration between government, regulators and industry to unlock innovation’. They are to:
- a. Drive growth and prosperity;
- b. Increase public trust in AI; and
- c. Strengthen the UK’s position as a global leader in AI.
The government believes that the way to achieve these objectives is to have a flexible, principle-based framework. The White Paper states that this approach would ‘better strike the balance between providing clarity, building trust and enabling experimentation’.
The key principles at the heart of the proposed framework are:
- a. Safety, security and robustness;
- b. Appropriate transparency and explainability;
- c. Fairness;
- d. Accountability and governance; and
- e. Contestability and redress.
Rather than creating a new cross-sector AI regulator that would work on delivering these principles, the government aims to use existing regulators that will implement the regulatory framework using these principles. The framework would initially be on a non-statutory basis under regulators’ current responsibilities and mandates.
To implement the framework, the government expects regulators to prepare guidance, and to work with other regulators to publish joint guidance, so that businesses have clarity on the AI principles. The White Paper acknowledges that some regulators may lack AI expertise and/or organisational capacity, and offers options for addressing these gaps, such as creating a common pool of expertise or supporting existing regulator initiatives.
The government would monitor the effectiveness of this non-statutory framework and consider whether it would be necessary to introduce a new statutory duty on regulators to have regard to the key principles of the AI framework, and/or whether broader legislative changes would be required.
The White Paper outlines the central functions that will need to be delivered, which include: carrying out cross-sectoral risk assessments, supporting coherent implementation of the principles and ensuring that the UK framework works in conjunction with international regulatory frameworks.
Additionally, the White Paper discusses the Digital Regulation Cooperation Forum (DRCF), a cooperation initiative between certain regulators including the CMA, ICO, Office of Communications (Ofcom) and the Financial Conduct Authority (FCA). In fact, the DRCF has already published cross-regulatory guidance related to AI. The White Paper comments that whilst the DRCF could play a role in supporting the central functions, these functions will initially be carried out by government, which will subsequently review whether an independent body would be better placed to take them over.
In addition to guidance produced by regulators, the government hopes to establish an AI regulatory sandbox, similar to the current FCA regulatory sandbox – a tool that allows businesses to trial innovative products and services with consumers while cooperating with the FCA. The White Paper explains that the initial pilot sandbox would likely involve multiple regulators but focus on a single industry sector. Over time, the intention is to develop sandboxes in additional sectors.
Another element of the proposed framework is to develop AI assurance techniques and technical standards. The White Paper refers to the UK AI Standards Hub, which seeks to promote engagement with technical standards, and proposes a layered approach: the first layer would provide consistency and a common foundation; the second would add standards covering specific issues; and the third would encourage sector-specific standards.
Response from ICO
The ICO, in its response, referred to its previous and ongoing work on AI and encouraged the government to deliver some of the framework through the DRCF. The ICO understood that it is up to regulators to produce advice and guidance under the proposed AI regulatory framework. To ensure alignment with other regulators, the ICO stated that it looks forward to receiving clarification on the respective roles of government and regulators.
A key concern for the ICO was that the AI principles should be interpreted in a way that is compatible with data protection principles, and its response offered a view on how this could be achieved.
Before regulators issue guidance or establish joint sandboxes, the ICO suggested that research should be carried out to ascertain what types of guidance and services would be most valuable for businesses.
The ICO also raised the cost implications of the White Paper’s proposals, as additional funding would be required for regulators to deliver the framework.
Response from the Competition and Markets Authority
In its response, the CMA stated that it supported the government’s approach of initially following a non-statutory, principles-based framework. Like the ICO, the CMA looked forward to the government’s guidance on how the principles should be applied and suggested that the DRCF could coordinate a coherent cross-regulatory approach.
Response from the Law Society
The Law Society’s response was highly detailed and contained many recommendations. A key comment was that it believed that in addition to the general principles, the government should also look to introduce legislation. Legislation should be introduced ‘particularly where AI presents high risks, or possesses potentially dangerous capabilities and extreme risks, or has a high likelihood of significant harm’.
Some of the other recommendations included:
- The need for greater clarity where there are discrepancies between different sectors and regulators;
- Making it a priority to ensure alignment with EU regulation;
- Advocating for the establishment of an AI officer role – this would be for large entities, entities operating in high-risk areas or entities developing an AI system that could have dangerous capabilities;
- Recognising and harnessing the legal profession’s expertise in the approach to AI regulation;
- Communicating a clear position on AI and intellectual property;
- Delivering support specifically targeted towards SMEs; and
- Imposing mandatory transparency requirements when AI is used in public services.
Response from the Institution of Engineering and Technology
In its response, the IET stated that it would be necessary to impose a statutory duty on regulators to implement the framework, to ensure that the AI principles are taken seriously and engaged with.
The IET also called for an oversight body to be set up to coordinate regulatory guidance, as regulators in different industries could otherwise produce contradictory guidance. It also recommended repositories of AI knowledge – resources, tools and guidelines – that businesses could access and learn from, with a central repository established by the government as well as local and regional ones.
Another key recommendation from the IET was to fund courses for AI upskilling, particularly agile short courses, allowing employees to take advantage of the benefits AI could offer.
Response from Professional Standards Authority
A key concern raised by the PSA in its response was a potential policy gap: the proposed AI framework focuses on the use of AI rather than on the AI technology itself.
Additionally, the PSA, as a regulator in the health sector, noted that the sector has many different regulatory bodies with differing duties and roles, which could make a coherent approach to AI challenging. It was also unclear which regulators the White Paper was referring to and which would be expected to develop guidance on the AI principles.
The PSA also echoed the comments in the White Paper that there was likely to be a skills gap amongst regulators. In particular, the PSA stated that ‘our sector will need to develop skills, expertise, and possibly systems, that it currently does not have’.
The EU and US
There have also been recent developments in the EU and US in response to the emergence of AI.
Ahead of the recent AI Safety Summit, the White House issued a landmark executive order on AI. The order called for the development of new standards in relation to AI safety and security, including a requirement for developers of the most powerful AI systems to share information with the US government. The order also called for privacy legislation and further guidance in a range of areas such as justice, housing, healthcare and the US government’s own use of AI.
The EU is developing legislation (EU AI Act) and reached a provisional agreement on the EU AI Act in December last year. It is anticipated that the EU AI Act will take a risk-based approach to AI with more prescriptive rules compared to the UK or US. Depending on the level of risk of an AI system, there will be an appropriate level of intervention. These interventions could range from transparency requirements to outright prohibited practices.
For further information and advice on this fast-moving area and how it might affect your business, please contact B P Collins’ corporate and commercial team at email@example.com or call 01753 889995.