Oct 2024

United Kingdom

Law Over Borders Comparative Guide:

Artificial Intelligence

Introduction

The UK is home to a large, established and rapidly growing AI economy, ranked third in the Global AI Readiness Index. The sector is currently worth USD 21 billion (according to the U.S. International Trade Administration) and is expected to grow to over USD 1 trillion by 2035. Put into context, the UK's AI workforce exceeds 50,000 people, and the country has twice as many AI companies as any other European nation. It is perhaps unsurprising that the UK has chosen not to destabilise what is, by any account, a success story by introducing AI-specific legislation. Artificial intelligence regulation will in fact prove to be a significant Brexit-related inflection point, causing further divergence between the UK and EU economies.

1. Constitutional law and fundamental human rights

1.1. Domestic constitutional provisions

The UK's constitution is not set out in a single written document; it is found in a series of statutes, judicial decisions, treaties and conventions. No aspect of the UK constitution is directed specifically at AI.

1.2. Human rights decisions and conventions

The most important elements of UK human rights law are:

  • the European Convention on Human Rights (ECHR);
  • the Human Rights Act 1998 (HRA), which enables people to bring cases in the UK courts to uphold their ECHR rights; and
  • the Equality Act 2010.

(See also Section 4, below.)

Respect for private and family life (Article 8 ECHR)

Automated facial recognition (AFR) is a controversial application of AI. The Court of Appeal's decision in R. (Bridges) v. Chief Constable of South Wales Police [2020] EWCA Civ 1058 is the key authority on the use of AFR and human rights. South Wales Police (SWP) used an AFR system to check CCTV footage of the public against “watchlists” of target individuals held in police databases.

The court held that SWP had not established clear criteria for when to use AFR, and had left too much discretion to individual officers as to who was placed on the watchlist and where the system was deployed. However, the use was held to be a proportionate interference for the purposes of Article 8(2), as the benefits to the community of using AFR outweighed the negative impact on individuals.

Prohibition of discrimination (Article 14 ECHR)

In the Bridges case, it was held that SWP had failed to comply with its Public Sector Equality Duty (Equality Act, section 149) to consider whether a policy could have a discriminatory impact. This was partly because SWP automatically deleted the data of individuals whose images did not match those on the watchlist, so there was no analysis of whether the AFR system was biased. SWP also did not check the database used to train the AFR system, and so could not identify or address any imbalance in the training data.
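
By way of illustration, the kind of disparity analysis the court found lacking can be sketched in a few lines of Python. This is a hypothetical example only (the log format, group labels and figures are our own assumptions, not SWP's actual system): it computes a false-match rate per demographic group, the basic measurement an operator would need in order to detect bias in an AFR deployment.

    from collections import defaultdict

    def false_match_rates(results):
        # results: (group, alert_raised, genuine_match) tuples, as might
        # be logged from an AFR deployment. All names here are invented.
        alerts = defaultdict(int)
        false_alerts = defaultdict(int)
        for group, alerted, genuine in results:
            if alerted:
                alerts[group] += 1
                if not genuine:
                    false_alerts[group] += 1
        return {g: false_alerts[g] / alerts[g] for g in alerts}

    # Invented log entries: (demographic group, alert raised?, genuine match?)
    log = [
        ("group_a", True, True), ("group_a", True, False),
        ("group_b", True, False), ("group_b", True, False),
        ("group_b", True, True),
    ]
    print(false_match_rates(log))  # group_a: 0.50, group_b: ~0.67

A materially higher false-match rate for one group than another is exactly the kind of imbalance that retaining and analysing the data, rather than automatically deleting it, would have allowed SWP to identify.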

Other practical uses of AI that could breach Article 14 rights include, for example, use of algorithmic software in sentencing and parole decisions, AI recruitment interviewing and CV assessment, and AI determination of applications for credit or insurance. (See also Sections 3 and 4 below.)

Other human rights

The potential ubiquity of AI means that other aspects of the ECHR may be breached by its use, including:

  • the right to liberty and security (Article 5);
  • the right to a fair trial (Article 6);
  • the right to freedom of thought, conscience and religion (Article 9); and
  • the right to freedom of expression (Article 10).

In light of such concerns, the UN High Commissioner for Human Rights has called for a moratorium on the “sale and use of artificial intelligence systems that pose a serious risk to human rights” (see United Nations Human Rights Office of the High Commissioner, “Artificial intelligence risks to privacy demand urgent action, Bachelet” (15 September 2021) at www.ohchr.org/en/2021/09/artificial-intelligence-risks-privacy-demand-urgent-action-bachelet).

2. Intellectual property

2.1. Patents

Patents for AI

UK patent law is contained in the Patents Act 1977 and associated case law.

UK law excludes from patent protection both mathematical algorithms and computer software “as such” (i.e., disembodied computer software). However, patents can protect AI systems integrated into software embodied in computing hardware, which together provide a tangible technical advancement. Examples include AI models, user interfaces, and ways of training AI systems that result in technical improvements such as an increase in speed or accuracy, or improved extraction of features from images.

AI-generated inventions

Patents can be granted where AI tools assist in the creation of an invention. However, UK law does not allow patents for inventions created solely by AI, on the grounds that a patentable invention must have a human inventor. This follows a case in which patent applications stated that the inventor was an AI machine called DABUS, owned by the applicant (see Thaler v. Comptroller General of Patents Trade Marks and Designs [2021] EWCA Civ 1374 at www.bailii.org/ew/cases/EWCA/Civ/2021/1374.html); the Supreme Court subsequently affirmed this position ([2023] UKSC 49).

The UK Intellectual Property Office (IPO) is considering whether patent law should be changed, either to create a new type of IP right for AI-generated inventions (possibly with a more limited scope and term of protection), or to allow patent protection for them, with authorship/ownership given to the human who made the arrangements necessary for the AI to devise the invention.

2.2. Copyright

Copyright for AI

UK copyright law is mainly contained in the Copyright, Designs and Patents Act 1988 and associated case law. Copyright subsists in the software in which an AI system is embodied, protecting the particular expression of the AI system in that software, but not the underlying ideas and principles. Copyright may also protect the databases used to train and test AI systems, if there is sufficient skill in the selection and curation of the data, such that it amounts to the intellectual creation of the author.

Copyright for AI-generated works

Under the Copyright, Designs and Patents Act 1988, copyright applies to computer-generated works, which will include works generated by AI systems. The author (and hence the first owner of the copyright) is deemed to be “the person by whom the arrangements necessary for the creation of the work are undertaken”. The law is uncertain as to what counts as making the necessary arrangements, but this might include conceiving of the project, creating the algorithms, or selecting the data used to train the AI. This copyright lasts for 50 years from the end of the calendar year in which the work was made.

This provision applies only in a situation where there has been no human authorial input at all (with no copyright protection for a work made jointly by a human and an AI). However, if an AI system is simply used as a tool by a human author to create a work, copyright protection will apply in the usual way, with the human being as author.

Text and data mining exception

UK copyright law has an exception which allows a user to make a copy of a protected work in order to conduct computational analysis of the information in it, provided that it is for the sole purpose of non-commercial research. This can be useful where a dataset used for training a machine learning (ML) system is subject to copyright. The exception applies only if the user already has lawful access to the dataset (such as under a subscription).

Database right

Datasets used for AI training may also be protected by database rights under the Copyright and Rights in Databases Regulations 1997, provided there has been substantial investment (whether financial, human or technical) in obtaining, verifying or presenting the database contents. 

Database rights belong to the person who takes the initiative in, and assumes the risk of investing in, the obtaining, verifying or presenting of the database contents. As there is no originality requirement, database rights can apply to databases generated by AI systems.

2.3. Trade secrets and confidentiality

The content and functionality of many AI systems, and the training datasets, can be protected by both the laws of confidentiality and trade secrets. In some situations, this is the only type of IP protection available for data. The common law of confidentiality protects information about AI systems and datasets against unauthorised use, provided that it is confidential, that it was obtained in circumstances such that a duty of confidentiality applied, and that there is actual or likely unauthorised use which is detrimental to the owner.

The UK’s trade secrets regime (derived from EU law) is partly contained in the Trade Secrets (Enforcement, etc.) Regulations 2018. It applies to information which is secret (not generally known), has commercial value because it is secret, and has been subject to reasonable steps to keep it secret. 

2.4. Notable cases

The case of Getty Images v. Stability AI is currently pending trial before the High Court in London. Stability AI is a London-based AI provider and the proprietor of the major image-generation model Stable Diffusion. The case revolves around Getty Images' claim that Stability AI used images from Getty Images' library as training data for Stable Diffusion without a valid licensing arrangement. Getty also claims that the synthetic images generated by Stable Diffusion reproduce a substantial part of its own copyright works; this is supported in part by the fact that the model appears in some instances to replicate a Getty Images watermark in its output.

The case is scheduled for listing in summer of 2025, assuming it does not settle before then.

3. Data

3.1. Domestic data law treatment

Except for personal data privacy, the UK government has so far taken a hands-off approach to data law. The proposed Data Protection and Digital Information Bill, which would have generally reformed the UK version of the General Data Protection Regulation (GDPR), fell when Parliament was dissolved for the 2024 UK general election.

3.2. General Data Protection Regulation

The UK GDPR is largely identical to the EU GDPR but localised to the UK. The Data Protection Act 2018 (DPA) supplements the UK GDPR, makes ancillary provisions (e.g., conditions for processing and the legality of processing), and regulates personal data processing by the police and intelligence services.

The DPA also creates criminal offences, including an offence of knowingly or recklessly re-identifying (without consent) personal data which has been de-identified, or processing it. This has relevance for AI systems whose correlation and prediction systems may inadvertently re-identify individuals in the course of sifting their databases, even if that is not an explicit function.
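
To illustrate the risk, the following Python sketch (all records, names and field values are invented) shows how joining a de-identified dataset to a public register on a handful of quasi-identifiers can restore identities even though no name appears in the first dataset:

    QUASI_IDS = ("postcode", "birth_year", "sex")

    def reidentify(anon_rows, identified_rows):
        # Index the identified dataset by its quasi-identifiers, then look
        # up each "anonymous" row under the same key.
        index = {tuple(r[k] for k in QUASI_IDS): r["name"] for r in identified_rows}
        matches = []
        for row in anon_rows:
            key = tuple(row[k] for k in QUASI_IDS)
            if key in index:
                matches.append({**row, "name": index[key]})
        return matches

    deidentified_health = [
        {"postcode": "SW1A 1AA", "birth_year": 1980, "sex": "F", "diagnosis": "X"},
    ]
    public_register = [
        {"name": "Jane Doe", "postcode": "SW1A 1AA", "birth_year": 1980, "sex": "F"},
    ]
    print(reidentify(deidentified_health, public_register))
    # [{'postcode': 'SW1A 1AA', 'birth_year': 1980, 'sex': 'F',
    #   'diagnosis': 'X', 'name': 'Jane Doe'}]

An AI system performing large-scale correlation could carry out this kind of linkage incidentally, which is why the offence extends to reckless as well as knowing re-identification.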

The UK GDPR has extra-territorial reach, covering, for example, those processing outside the UK personal data about individuals in the UK to whom they are offering goods/services, or whose behaviour they are monitoring.

Consistent with the EU GDPR, the provisions most specifically relevant from the AI perspective are the restrictions on automated decision-making where this has serious consequences for individuals, and the associated obligation to provide meaningful information about the logic of the processing operation. The Information Commissioner's Office (ICO) has produced guidance on this (see www.ico.org.uk/for-organisations/guide-to-data-protection/guide-to-the-general-data-protection-regulation-gdpr/automated-decision-making-and-profiling). See also Section 4, below.

UK data reform

As noted above, the government's vaunted changes to the GDPR failed due to the dissolution of Parliament in the run-up to the UK general election. This notwithstanding, the UK's data protection regulator, the ICO, launched a series of consultations on generative AI and data protection. These consultations (all of which have now closed) were designed to elicit views on the following areas of focus (in part as a response to the government's AI White Paper; see Section 7, below):

  • What is the appropriate lawful basis for web-scraping to train generative AI models?
  • How can the purpose limitation principle be applied in the context of generative AI development and deployment?
  • What are the expectations around accuracy of training data and model outputs?
  • What are the expectations in terms of complying with individual data subject rights?

The ICO had intended to update its guidance to reflect these and other factors (including a separate study on biometric classification) in spring 2025, following passage of the Data Protection and Digital Information Bill; that timeline has now been thrown into doubt by the failure of the Bill.

3.3. Open data and data sharing

As part of its National AI and Data strategies, the government wishes to encourage data sharing to improve outcomes for individuals and increase innovation. It has adopted a policy of “Open by Default” for public sector data across all government departments. 

3.4. Biometric data: voice data and facial recognition data

Voice data

As is the case with the EU GDPR, voice data which is specifically processed for the purposes of identifying individuals is personal biometric data, and hence subject to the additional constraints on use inherent in processing special category personal data.

The ICO has taken action against the use of biometric voice recognition systems by HM Revenue and Customs (HMRC). HMRC asked customers to record a set phrase in order to use its voice authentication service, which allowed the customer's voice to be used as a secure password for access. Over seven million recordings were collected. The ICO held that HMRC had no lawful basis for collecting the voice data, and issued an enforcement notice instructing it to delete the voice recordings from its systems (except where it had user consent) and to require its suppliers to do likewise (see www.ico.org.uk/action-weve-taken/).

The ICO has issued guidance on video surveillance systems, making it clear that voice recording is rarely justifiable and that any sound recording functionality in surveillance equipment should normally be off by default (see www.ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-video-surveillance/how-can-we-comply-with-the-data-protection-principles-when-using-surveillance-systems).

Facial recognition data

Alongside human rights law, the most important source of regulation of AFR is the UK GDPR, whose provisions relevant to AFR data are (at the time of writing) the same as those of the EU GDPR. Facial images are personal data, and, when processed for the purposes of identifying individuals, they are special category data under the UK GDPR.

Use of AFR is an area of particular concern and focus for the ICO. The Bridges v. South Wales Police case made clear that use of AFR is in principle lawful, but subject to tight constraints. Following that case, the ICO issued an Opinion on the data protection aspects of AFR in public places, stressing the importance of data protection impact assessments, appropriate legal bases, transparency, data minimisation, and the involvement of humans in the process (see www.ico.org.uk/media/2619985/ico-opinion-the-use-of-lfr-in-public-places-20210618.pdf).

In 2022, the ICO took action against Clearview AI Inc., a provider of AFR systems, fining it over GBP 7.5 million and ordering it to delete all images of UK subjects from its database (see https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2022/05/ico-fines-facial-recognition-database-company-clearview-ai-inc/). Clearview provided in the UK a system which allowed organisations such as law enforcement to check facial images against a vast database of images which it had scraped from various publicly available online sources without the subjects' knowledge or consent. Clearview argued that it was not subject to the UK GDPR as it was based in the USA. The ICO rejected this argument, stating that the use of images of UK-based data subjects entailed monitoring (thus bringing Clearview within the scope of the UK GDPR), and holding that Clearview was a joint data controller with the UK organisations deploying the systems. In late 2023, however, Clearview successfully appealed to the First-tier Tribunal (Information Rights), which overturned the ICO's decision. The ICO has in turn signalled its intention to appeal the Tribunal's ruling (see https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/11/information-commissioner-seeks-permission-to-appeal-clearview-ai-inc-ruling).

The Surveillance Camera Code of Practice (see www.gov.uk/government/publications/update-to-surveillance-camera-code/amended-surveillance-camera-code-of-practice-accessible-version) applies where AFR is deployed in surveillance camera systems, and there are several other potentially relevant laws, regulations and codes of practice (for example: Article 8 of the ECHR, the Human Rights Act 1998, the Protection of Freedoms Act 2012, the Regulation of Investigatory Powers Act 2000, the Intelligence Services Act 1994, the Private Security Industry Act 2001 and the Police Act 1997).

4. Bias and discrimination

Algorithmic bias 

The UK Department for Science, Innovation & Technology has released guidance to assist organisations that use AI in recruitment; this guidance highlights sources of bias in AI systems, including learned bias and inaccuracy (see the “Responsible AI in Recruitment” guide on www.gov.uk). Algorithmic bias can also result from the use of non-representative data to train AI systems, for example an insufficiently diverse and representative dataset. Serious risks to fundamental rights may arise where AI systems operate on the basis of such biased data.

4.1. Domestic anti-discrimination and equality legislation treatment

Equality Act 2010

The right to equal treatment and non-discrimination is a fundamental principle given specific effect in the UK by the Equality Act 2010 (Equality Act), which applies, among others, to employers and those providing services to the public.

Where the use of biased or otherwise flawed training datasets skews AI system outputs in a way which disadvantages people sharing one of the legally “protected characteristics” under the Equality Act – age, disability, gender reassignment, marriage and civil partnership, pregnancy and maternity, race, religion or belief, sex and sexual orientation – bias can become unlawful discrimination (Chapter 1, Equality Act 2010).

To determine whether there has been unlawful discrimination under the Equality Act, it is necessary to decide whether there has been prohibited conduct in respect of one of the “protected characteristics” set out above.

Indirect discrimination 

Organisations deploying algorithms do not need to intend to discriminate for their conduct to be unlawful (see “The benefits and harms of algorithms: a shared perspective from the four digital regulators” on www.gov.uk). Indeed, indirect discrimination is likely to be most relevant where AI technology is concerned. 

Ordinarily, indirect discrimination occurs where an organisation adopts a “provision, criterion or practice” (PCP) that puts protected people at a disadvantage (section 19, Equality Act 2010). Use of an algorithm within an AI system may be seen as a PCP within the meaning of the Equality Act, and thereby fall within the remit of the indirect discrimination rules (see www.equalityhumanrights.com/sites/default/files/servicescode_0.pdf). For example, if a dataset is used to train an adaptive algorithm-based AI in such a way as to cause the AI to show adverts for high-paying jobs more often to men than to women, it can be regarded as a PCP which places women at a disadvantage within the meaning of the Equality Act. A woman who would have applied for the high-paying role, but could not because she was not shown the advert, may be within the protection of the Equality Act.

The sophistication of pattern-learning AI systems means that even if restrictions are set on the algorithms produced, for example to ignore protected characteristics like sex, this may not solve the problem, as the AI may instead identify proxies for such characteristics. 
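
A simple illustration of this proxy effect, using synthetic data of our own invention, is sketched below: even when the protected attribute is withheld from a model, a single correlated feature allows it to be recovered for most individuals.

    import random

    random.seed(0)
    rows = []
    for _ in range(1000):
        sex = random.choice(["F", "M"])
        # Invented proxy: a retained feature that agrees with sex 90% of
        # the time (e.g. attendance at a single-sex school).
        proxy = (sex == "F") if random.random() < 0.9 else (sex != "F")
        rows.append({"sex": sex, "proxy_feature": proxy})

    # With the "sex" column withheld, the proxy alone still recovers it:
    recovered = sum(r["proxy_feature"] == (r["sex"] == "F") for r in rows)
    print(f"proxy recovers sex for {recovered / len(rows):.0%} of rows")  # ~90%

Any model trained on the retained features therefore remains able, in effect, to differentiate on the protected characteristic.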

The “black box” nature of some AI decision-making means that users may not realise that discrimination is occurring, or be able to tell which organisation contributed to any discriminatory features (e.g., was it those who formulated the algorithm, those who supplied the initial training dataset, or the AI-user?). 

This lack of transparency can also make it difficult for individuals to identify discriminatory acts and, consequently, to enforce their rights, although discrimination law does allow for inferences of discrimination to be drawn from certain factual scenarios and for the burden of proof to be reversed. 

This lack of knowledge can also be an issue for defendants. Whilst it is possible to defend indirect discrimination claims on the basis that apparently illegal discrimination is in fact objectively justified, understanding the way that AI systems are making decisions will be essential to support a defence of objective justification.

Duty to make reasonable adjustments 

Disabled people may face particular disadvantages in engaging with automated processes. For example, in a recruitment context, some systems evaluate a job candidate's facial expressions, eye contact, tone of voice and language. This can put candidates with visual or hearing impairments, neurodivergent candidates, or those with facial disfigurements at a disadvantage. Given the obligation under sections 20 and 21 of the Equality Act to make reasonable adjustments to remove disadvantages for disabled individuals, an organisation could potentially breach discrimination laws by applying AI software as a blanket approach. It is crucial for organisations to ensure that, if the software cannot adapt its response to accommodate disabilities, a human can intervene.

Data protection – Automated Decision Making (ADM)

The UK GDPR (Article 22) restricts the ability to make decisions about individuals based on automated data processing if this “produces legal effects” on, or “similarly significantly affects”, the individual, unless there is direct human involvement.

A “legal effect” is something which affects an individual's legal status or rights (for example, rights to sick pay). What amounts to a “similarly significant” effect is more nebulous, but guidance points to it meaning something which significantly affects the circumstances, behaviour or choices of individuals, having a prolonged or permanent impact and, at the extreme, leading to the exclusion of, or discrimination against, individuals. For example, AI systems used in e-recruiting practices (such as automated psychometric testing to filter out candidates) may well be subject to these restrictions.
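
In practice, deployers often address Article 22 by ensuring that a human reviewer signs off any automated decision with legal or similarly significant effects before it takes effect. The Python sketch below is a minimal illustration of such a gate under our own assumed design; it is not a prescribed compliance mechanism, and the names are hypothetical.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Decision:
        subject_id: str
        outcome: str           # e.g. "reject_application"
        significant: bool      # legal or similarly significant effect?
        reviewer: Optional[str] = None

    def finalise(decision: Decision, human_reviewer: str) -> Decision:
        # Significant decisions must carry human involvement before
        # they take effect.
        if decision.significant and decision.reviewer is None:
            decision.reviewer = human_reviewer
        return decision

    d = finalise(Decision("cand-42", "reject_application", significant=True),
                 human_reviewer="hr.manager@example.com")
    print(d.reviewer)  # the decision is no longer solely automated

Note that the human involvement must be meaningful; a reviewer who rubber-stamps every automated outcome would not take the processing outside the restriction.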

5. Cybersecurity and resilience

5.1. Domestic technology infrastructure requirements

The UK has a range of legislation which is designed to protect national infrastructure and assets:

Prior to its withdrawal from the EU, the UK implemented the EU Network and Information Systems (NIS) Directive by way of the NIS Regulations 2018. These apply to companies and organisations identified as operators of essential services (OES), such as energy, transport, healthcare and water companies, and to providers of important digital services, such as cloud computing and online search engines.

The new EU NIS2 Directive, which entered into force in January 2023, does not apply to the UK and is potentially a major point of divergence between the EU and UK cybersecurity regimes, although changes have been proposed to the UK's NIS Regulations to introduce a greater degree of alignment. The most recent proposal to update the NIS Regulations (dated December 2022) includes proposals, amongst other things, to:

  • bring MSPs into scope of the regulations to keep digital supply chains secure; 
  • improve cyber incident reporting to regulators; and
  • enable the Information Commissioner to take a more risk-based approach to regulating digital services.

The UK's National Security and Investment Act 2021 (NSIA) establishes a regime under which the UK government can intervene in commercial transactions where it deems the UK's national security to be at risk. The NSIA requires a mandatory filing (originally made to the Department for Business, Energy and Industrial Strategy (BEIS), with the reviewing function since moved to the Cabinet Office) if the target entity in a transaction falls within one or more of the 17 “high risk” designated sectors under the Act. Artificial intelligence is one of these “high risk” sectors due to the potential for it to be used for harmful and/or military purposes.

Finally, in November 2023 the UK government hosted an AI Safety Summit at Bletchley Park, home of Alan Turing's wartime codebreaking work against the Enigma cipher. The summit was attended by 28 countries, together with the EU, as well as leading developers of AI, including Microsoft, xAI, Google DeepMind and Meta. It resulted in the so-called Bletchley Declaration, which focused on the potential existential risks of frontier AI models. The signatories agreed to co-operate and share information on an ongoing basis to ensure appropriate, co-ordinated risk management.

6. Trade, anti-trust and competition

The UK’s core competition/anti-trust regime is contained in the Competition Act 1998 (CA 1998) and is regulated by the Competition and Markets Authority (CMA). It is based upon two basic principles:

  • Chapter 1 CA 1998 prohibits “agreements between undertakings, decisions by associations of undertakings or concerted practices which may affect trade within the United Kingdom, and have as their object or effect the prevention, restriction or distortion of competition within the United Kingdom”. Agreements between competitors are most likely to infringe Chapter 1.
  • Chapter 2 CA 1998 prohibits “conduct on the part of one or more undertakings which amounts to the abuse of a dominant position in a market … if it may affect trade within the United Kingdom”. In effect, this imposes responsibilities upon dominant companies not to act in a way that distorts competition.

Competition law also interacts closely with UK consumer law, particularly following the enactment of the Digital Markets, Competition and Consumers Act 2024 (DMCCA), which gives the CMA enhanced powers to regulate digital markets under competition and consumer law, as well as bespoke new digital markets powers (explained in more detail below). 

6.1. AI related anti-competitive behaviour

Algorithmic collusion

There are two main forms of “algorithmic collusion” (see web-archive.oecd.org/2019-02-17/449397-Algorithms-and-colllusion-competition-policy-in-the-digital-age.pdf). First, AI-embodied algorithms increase price transparency and enable high-frequency trading, allowing competitors to react to each other quickly, which could lead to collusive strategies. Second, companies can use deep learning techniques to monitor prices, implement common policies, send market signals or optimise joint profits. These AI tools can facilitate tacit collusion between competitors, resulting in anti-competitive outcomes such as price co-ordination.
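
A toy illustration of the first dynamic (entirely invented, not a real pricing system): two sellers whose algorithms each track and slightly exceed the rival's last observed price will ratchet prices upward in lockstep, without any agreement ever being exchanged.

    def next_price(own_cost, rival_price, margin=0.02):
        # Naive rule: match the rival's last price plus a small margin,
        # floored at own cost.
        return max(own_cost, rival_price * (1 + margin))

    price_a = price_b = 10.0
    for day in range(5):
        price_a, price_b = next_price(8.0, price_b), next_price(8.0, price_a)
        print(f"day {day}: A {price_a:.2f} / B {price_b:.2f}")
    # Prices rise in step each day, with no communication between sellers.

Neither seller has agreed anything with the other, which is precisely why this kind of tacit co-ordination is difficult to capture under rules framed around “agreements”.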

Personalised pricing

Algorithmic systems enable companies to offer different prices to different customers depending on the information they hold about them, for example offering higher renewal prices to customers identified as being more likely to renew with the same company. This “personalised pricing”, if directed at consumers, may infringe the Consumer Protection from Unfair Trading Regulations 2008 (CPUT) (which will be replaced by a similar regime in the DMCCA once commencement regulations are passed) if it amounts to an unfair commercial practice (i.e., is contrary to professional diligence in a way which would be likely to materially distort the economic behaviour of a consumer). 

Personalised search rankings

The CMA also highlights that personalised search rankings (where algorithmic systems facilitate preferences for particular services, products or suppliers) may potentially lead to negative outcomes for consumers by manipulating their decision-making. Personalised rankings based on protected characteristics (e.g., age, disability, sex, or race) could amount to unlawful discrimination and breach equality legislation (see also Section 4, above).

Personalised rankings may also breach consumer protection law in relation to the protections afforded to vulnerable consumers under CPUT and other legislation, whether as a result of consumers' protected characteristics (such as age or disability) or of their being “situationally vulnerable” (for example, bereaved individuals targeted by funeral providers).

Dark patterns 

“Dark patterns” are practices designed to influence users into making commercial decisions (e.g., buying or signing up) to their detriment, such as, for example, using AI systems to target users with messaging designed to create a sense of urgency (e.g., stating that there is only limited availability). These messages are contrary to CPUT if they are untrue, misleading, or otherwise amount to undue influence, or unfair, aggressive or coercive commercial practices.

The CMA has repeatedly taken enforcement action in this area. Once the DMCCA commences, the CMA will have increased powers to enforce directly against traders in respect of conduct that breaches the unfair commercial practices regime.

Abuse of dominance

Companies in a dominant position have a greater potential to use AI in a manner that may distort competition and therefore breach the CA 1998. 

For instance, competition is likely to be harmed where a dominant platform’s algorithms favour its own products and services over those of rivals. This may have the effect of promoting the dominant company’s products or services such that it does not need to compete on its own merits. 

Dominant players may have an advantage if their size gives them access to larger data pools than their competitors. In digital markets, the user data that certain players can access, often combined from multiple channels, powers the AI systems which deliver targeted advertising. This data, to which competitors do not have access, is a key barrier to competition for challengers trying to compete with the larger firms. It is expected that future UK legislation will seek to tackle this problem.

Merger control

The CMA has suggested that commercial agreements that make one company dependent on another could trigger a UK merger review. This is particularly relevant in the AI sector, where many innovative AI companies seek investment from commercial giants like Amazon or Apple. For example, at the end of 2023, the CMA began an examination of the partnership between Microsoft and OpenAI.

More recently, in April 2024, the CMA began to consider whether Microsoft’s partnership with Mistral AI resulted in the creation of a relevant merger situation under the Enterprise Act 2002. Ultimately, the CMA decided that the partnership did not qualify for investigation under the relevant merger rules.

However, the fact that the CMA clearly views AI investment as falling within its merger control powers is a concern to some AI developers, who worry that the CMA’s approach could lead to a chilling effect on investments. This, of course, is a rapidly developing area of technology and more involvement from the CMA is expected going forward. 

6.2. Domestic regulation

There is currently no specific UK legislation that regulates AI from a competition/anti-trust perspective, but the use of AI will be caught by the competition rules where it results in a breach of the CA 1998. The CMA has published two papers analysing potential harms caused by algorithmic systems and has stated that it intends to work closely with other regulators to develop its work on anti-competitive uses of AI systems (see the CMA working paper “Pricing algorithms: Economic working paper on the use of algorithms to facilitate collusion and personalised pricing” (2018) and the CMA paper “Algorithms: How they reduce competition and harm consumers”, which focuses on the potential harm caused by the use of algorithms by market participants).

The CMA has also indicated that it could use its new powers under the DMCCA to safeguard consumers and ensure effective competition in relation to AI.

Whilst the DMCCA is not AI-specific legislation, it gives the CMA wide powers to designate firms as having “strategic market status” in relation to their digital activities (after having conducted an investigation to determine this). In practice, this means that the larger AI firms are likely to be in-scope for a potential designation, thereby becoming an “SMS firm”, which in turn places additional responsibilities upon them.

An SMS firm is likely to be subjected to “conduct requirements”. These are mandatory directions issued by the CMA that the SMS firm must comply with in order to protect competition and/or consumers. The CMA may also impose additional conduct requirements requiring an SMS firm to refrain from certain practices which harm (or could harm) competition and/or consumers.

CMA’s review of foundation models (FMs)

Foundation models underpin much of the currently available generative AI technology. In 2023, the CMA launched an initial review to build an understanding of the market and of the opportunities and risks for competition and consumer protection. The CMA identified three key interlinked risks to fair, open and effective competition. First, firms controlling critical inputs for FM development may restrict access to them in order to shield themselves from competition. Second, powerful incumbents could exploit their positions in consumer- or business-facing markets to distort choice in FM services and restrict competition. Third, partnerships involving key players could exacerbate existing positions of market power.

In April 2024, the CMA published an update paper (the Update Paper) to account for a range of developments across the FM ecosystem since its initial review in 2023. In the Update Paper, the CMA expressed concern that the FM sector is developing in ways that risk negative market outcomes. In particular, the CMA is concerned that a small number of incumbent technology firms with existing market power could profoundly shape FM-related markets to the detriment of fair, open and effective competition. The CMA urged tech firms to align with its AI Principles to ensure effective competition that benefits consumers, businesses and wider society.

The CMA is almost certain to take a keen interest in the regulation of FMs going forward, and has recognised that artificial intelligence is a rapidly developing area. The CMA can be expected to utilise its existing powers, as well as new powers under the DMCCA, to regulate the ongoing utilisation of AI foundation models.

7. Domestic legislative developments

7.1. Proposed and/or enacted AI legislation

At the date of writing, no AI-specific laws exist in the UK, which has so far taken a light-touch approach to AI regulation. However, the regulatory approach continues to evolve at pace.

7.2. Proposed and/or implemented Government strategy

National AI Strategy 

In March 2023, the UK government published its white paper, “A pro-innovation approach to AI regulation” (AI White Paper), followed by a consultation response published in February 2024. The AI White Paper and consultation response together reflect the current UK position that no general AI legislation will be introduced. Rather, the UK has thus far adopted a decentralised, sector-led approach, centred on five high-level principles designed to guide UK regulators:

  • safety, security and robustness;
  • appropriate transparency and explainability;
  • fairness;
  • accountability and governance; and
  • contestability and redress.

While UK regulators were asked to publish updates on their own implementation of the above principles, the principles are not underpinned by a statutory duty. The responses of the regulators, published in April 2024, varied widely in both approach and depth.

Prior to Parliament's dissolution in May 2024, the Science, Innovation and Technology Committee published the final report of its inquiry into the governance of AI. Its recommendations were stated to apply to whichever party formed the government after the July 2024 general election, and suggested that the next government should be prepared to introduce new AI-specific legislation if it encounters gaps in regulators' powers.

The incoming Labour government has indicated that it will introduce legislation to regulate the ‘most powerful’ AI models. The form and scope of this legislation remain unclear.

National Data Strategy

The government launched an ambitious “National Data Strategy” in 2020 together with a public consultation (see www.gov.uk/government/publications/uk-national-data-strategy). Whilst the Strategy is not specific to AI systems, the importance of data for the creation and operation of AI systems is acknowledged to be a key driver of data strategy, in particular for the government’s key mission of “Unlocking the value of data across the economy”. In its response, the Department for Digital, Culture, Media and Sport (DCMS) highlighted the scope to “capitalise on [the UK’s] independent status and repatriated powers” following the UK leaving the EU, but also the need to “maintain interoperability” with other regimes for businesses which operate across borders (see www.gov.uk/government/consultations/uk-national-data-strategy-nds-consultation/outcome/government-response-to-the-consultation-on-the-national-data-strategy).

Data: a new direction

In 2021, DCMS conducted a public consultation on proposals to reform aspects of UK data protection law in order to provide a more flexible regime and to shift away from the “one size fits all” approach to compliance inherited with the UK GDPR following the UK's withdrawal from the EU. However, the Data Protection and Digital Information Bill introduced following that consultation fell when Parliament was dissolved. While the next government may reintroduce the Bill, it is under no obligation to do so.

8. Frequently asked questions

8.1. Will the EU's Artificial Intelligence Act (AIA) apply to the United Kingdom?

At present there are no plans for the UK to adopt the AIA. Since the UK exited the EU in January 2020, it is not obliged to implement EU law, which means that the AIA will not directly apply to UK businesses. However, the AIA applies on an extraterritorial basis: it catches UK businesses that place on the market or put into service AI systems or general-purpose AI models in the EU, as well as UK-based providers and deployers of AI systems where the output produced by the system is used in the EU. As the EU is one of the UK's largest export markets, this extraterritorial reach is likely to apply to many UK-based businesses operating within the AI value chain.

8.2. Are there plans for the UK to legislate on the topic of Artificial Intelligence? 

See Section 7, above. Currently it seems unlikely that the UK will bring forward a comprehensive measure similar to the AIA, and the prior government's sector-based approach appears to remain in place. However, the UK's new Labour government has signalled that it will legislate to regulate the ‘most powerful’ AI models. The form and scope of this legislation remain unclear.

8.3. What should I be doing in the UK in relation to either the use or development of AI systems to prepare for regulation? 

At the time of writing, it seems likely that, even though future UK legislation will be limited to the ‘most powerful’ AI models, the overall approach adopted will be similar in theme to that of the EU, in part because of the broad extraterritorial reach of the EU Artificial Intelligence Act. There is very likely to be an emphasis on the use of ethical or “responsible” AI. In practical terms, this means that you should consider undertaking an impact assessment (in a similar manner to the data protection impact assessments required under the UK GDPR) to ensure that you have considered and mitigated (to the extent practicable) the risks which could arise in the development and use of your AI system. At the very least, this impact assessment should take into account specific AI-related ethical issues, such as: the potential for bias to arise in the system (and the extent to which it can be corrected); transparency (understanding how the AI system makes decisions and providing appropriate instructions and guidance for use); and accountability (making sure that there are appropriate human decision-makers “in the loop”).
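
Purely by way of illustration, such an assessment can be recorded in a structured form so that unaddressed areas are flagged before deployment. The Python sketch below is our own hypothetical structure, not a statutory or regulatory template; every field name is an assumption.

    from dataclasses import dataclass, field

    @dataclass
    class AIImpactAssessment:
        system_name: str
        bias_risks: list = field(default_factory=list)
        transparency_measures: list = field(default_factory=list)
        human_oversight: list = field(default_factory=list)

        def gaps(self):
            # Flag any of the three ethical areas left unaddressed.
            areas = [("bias", self.bias_risks),
                     ("transparency", self.transparency_measures),
                     ("accountability", self.human_oversight)]
            return [name for name, entries in areas if not entries]

    ia = AIImpactAssessment("cv-screening-tool",
                            bias_risks=["reviewed training data for skew"])
    print(ia.gaps())  # ['transparency', 'accountability']

Revisiting the assessment as the system and its uses evolve is as important as producing it in the first place.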
