In March of this year, the UK government announced a strong artificial intelligence (AI) agenda by launching a UK Cybersecurity Council and revealing its intention to release a National Artificial Intelligence Strategy (the UK Strategy).
Details of the UK Strategy will be released later this year, but at this stage we understand that it will focus in particular on promoting economic growth through the widespread use of AI while also emphasizing the ethical, safe, and trustworthy development of AI, including through a legislative framework for AI that will promote public trust and a level playing field.
Shortly after the UK government’s announcement, the European Commission released a proposal for a European legislative framework on AI (the EU Regulation), which forms part of the Commission’s broader ‘AI package’. The EU Regulation aims to ensure the safety of people and the protection of fundamental human rights, and classifies AI use cases as unacceptable, high, or low risk.
About the authors
This article was written by Mike Pierides, partner at Morgan Lewis, and Charlotte Roxon, partner.
The EU Regulation proposes to protect users “where the risks posed by AI systems are particularly high”. The definition and categories of high-risk AI use cases are broad and encompass many, if not most, use cases that involve individuals, including the use of AI in the context of biometric identification and categorization of natural persons, management of critical infrastructure, and employment and worker management.
Much of the EU Regulation aims to impose prescribed obligations regarding these high-risk use cases, including obligations to undertake relevant ‘risk assessments’, to put in place mitigation measures such as human oversight, and to provide transparent information to users. We anticipate that, in addition to driving AI policies among AI providers and users, many of these obligations will be reflected by customers in their contracts with AI providers.
The European Union has banned AI use cases that it considers an “unacceptable” threat to people’s safety, livelihoods, and rights. These include the use of real-time remote biometric identification systems for law enforcement purposes in publicly accessible spaces (unless otherwise permitted by law) and the use of systems that deploy subliminal techniques to distort a person’s behavior or exploit the “vulnerabilities” of individuals, in a way that causes or is likely to cause physical or psychological harm.
The EU Regulation also defines ‘low risk’ AI use cases (for example, use in spam filters), on which no specific obligations are imposed, although providers of low-risk AI are encouraged to adhere to an AI code of conduct to help ensure that their AI systems are trustworthy.
Failure to comply could result in hefty GDPR-style fines for businesses and suppliers, with proposed fines of up to €30 million or 6% of global revenue.
The EU Regulation has extraterritorial application: AI providers that make their systems available in the European Union, or whose systems affect people in the European Union or produce output used in the European Union, will be required to comply with the new EU Regulation regardless of their country of establishment.
UK Strategy: legislative framework
From a legislative perspective, the UK’s starting point on AI law is similar to that of the European Union, in that the protection of personal data is primarily governed by the GDPR and the emphasis that legislation places on prioritizing the rights of individuals. Post-Brexit, the UK has shown signs of a willingness to deviate from the ‘European approach’ enshrined in the GDPR, as digital secretary Oliver Dowden announced in early March, although the details of any such divergence remain unclear.
This could indicate that the UK legislative framework for AI will consciously deviate from the proposed EU Regulation, possibly by being less prescriptive than the European Commission when it comes to the obligations imposed on providers and users of what the Commission has called ‘high-risk’ AI use cases. At the moment, however, this is just guesswork. One of the challenges the UK will face, as it does with the GDPR, is the extraterritorial impact of the EU Regulation and the need to ensure that data flows between the European Union and the United Kingdom remain relatively unaffected by Brexit.
In the UK, the government has started working with AI providers and users on the AI Council’s roadmap, and this work will continue throughout the year as the UK Strategy is developed.
In the European Union, the European Parliament and EU member states will need to adopt the European Commission’s proposals for the EU Regulation to enter into force.
With the substantive details of the UK Strategy still unknown, market participants will be watching closely to see to what extent it aligns with the legislative framework proposed by the European Commission.