Oct 2024

Australia

Law Over Borders Comparative Guide: Artificial Intelligence

Introduction

The Australian Government is committed to becoming a world leader in the development and adoption of trusted, secure and responsible artificial intelligence (AI). While Australia’s AI industry is in its nascent stages, a 2023 McKinsey & Company report, titled “Australia’s automation opportunity: Reigniting productivity and inclusive income growth”, estimated that AI technologies will contribute AUD 600 billion a year to the Australian economy by 2030. 

The Australian Government’s Digital Economy Strategy, which aims to position Australia as a global leader in AI technology by 2030, includes a targeted AUD 124.1 million AI Action Plan. The AI Action Plan, which was established in June 2021, includes the following initiatives to realise this vision:

  • the establishment of the National AI Centre within the Commonwealth Scientific and Industrial Research Organisation’s (CSIRO) Data61, to coordinate Australia’s AI expertise and capabilities and to address barriers faced by small and medium enterprises (SMEs) in adopting and developing AI;
  • the Next Generation AI Graduates program to attract and train AI specialists through a national scholarship program in collaboration with Australian universities and industry bodies; and
  • a number of grant programs to support regional development of AI and SME adoption of AI (e.g., the AI and Digital Capability Centres grant).

There is no dedicated legislative regime in Australia regulating AI, big data or automated decision-making. However, the Australian Government has published:

  • a Discussion Paper titled “Safe and Responsible AI in Australia” in June 2023, in which it examines potential avenues for AI reform and reflects on the approach adopted in other jurisdictions, and its interim response to the industry submissions received in relation to that Discussion Paper (published 17 January 2024); and
  • a Proposals Paper for introducing mandatory guardrails for AI in high-risk settings (published 5 September 2024), which sets out substantive obligations that are proposed to apply to developers and deployers of high-risk AI and general-purpose AI.

Currently, the use of AI technologies is governed by Australia’s existing, non-technology-specific laws. Those laws leave gaps in areas such as the deployment of facial recognition technologies involving face-matching AI tools and the ownership of AI-generated works or inventions, and it is expected that these gaps will be addressed in a future Australian AI law. 

Aside from existing laws, there are a number of government publications which will guide the future development of Australia’s AI regulations, including:

  • The Australian Human Rights Commission’s 2021 Human Rights and Technology report, which sets out a number of key responsible AI recommendations.
  • The CSIRO and Data61’s 2019 AI Technology Roadmap, which identifies potential areas of AI specialisation for Australia.
  • Standards Australia’s 2020 AI Standards Roadmap, which provides a framework for the development of future standards with respect to the use of AI in Australia.
  • The Australian Department of Industry, Science, Energy and Resources’ 2019 AI Ethics Framework, which sets out the following AI Ethics Principles: 
    • “human, societal and environmental well-being”: AI systems should benefit individuals, society and the environment, and the impacts of AI systems should be accounted for throughout their lifecycle;
    • “human-centred values”: AI systems should respect human rights, diversity and the autonomy of individuals;
    • “fairness”: AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups;
    • “privacy protection and security”: AI systems should respect and uphold privacy rights and ensure the security and protection of data;
    • “reliability and safety”: AI systems should operate reliably in accordance with their intended purpose;
    • “transparency and explainability”: there should be transparency and responsible disclosure so people can understand when they are being significantly impacted by AI, and can find out when an AI system is engaging with them;
    • “contestability”: when an AI system significantly impacts a person, community, group or environment, there should be a timely process allowing people to challenge the use or outcomes of the AI system; and
    • “accountability”: the people responsible for the different phases of the AI system lifecycle should be identifiable and accountable for the outcomes of the AI systems, and human oversight of AI systems should be enabled.

1. Constitutional law and fundamental human rights

1.1. Domestic constitutional provisions

The Australian Constitution does not contain any express provisions in respect of AI. The Commonwealth Parliament is empowered under section 51(v) of the Australian Constitution to make laws with respect to “postal, telegraphic, telephonic and other like services”, and it is possible that this head of power could be used to establish legislation relating to AI.

Separately, there are limited human rights provisions enshrined in the Australian Constitution which might operate to restrict the use of AI technologies in Australia or otherwise protect individuals’ rights from a constitutional perspective. 

1.2. Human rights decisions and conventions

The Australian Human Rights Commission (HRC) has been a leading voice in the development of AI regulation, reflecting the broader expectation that human rights considerations will be central to the adoption of AI in Australia. In the HRC’s 2021 Human Rights and Technology Final Report (HRC Report), the HRC proposed a number of recommendations including:

  • The creation of an independent regulator (AI Safety Commissioner) to promote safety and protect human rights in the development and use of AI. In particular, the HRC advocated that the AI Safety Commissioner should be empowered to assess the impact of the development and use of AI on vulnerable and marginalised people in Australia.
  • Mandating the completion of Human Rights Impact Assessments in respect of government use of AI, and requiring government agencies to provide notice regarding their use of AI. Furthermore, individuals subjected to government decisions made with the use of AI should be provided with a right to reasons explaining the basis of the decision, and recourse to an independent merits review tribunal in respect of such decisions.
  • Encouraging private sector organisations’ use of the AI Ethics Principles framework when developing and deploying AI technologies.
  • A general moratorium on the use of biometric technologies in the context of high-risk decision making, subject to further reform to ensure better human rights and privacy protections regarding the use of such technologies. 

2. Intellectual property

2.1. Patents

The main source of law governing patents in Australia is the Patents Act 1990 (Cth) (Patents Act). There are no express references to AI under the Patents Act. However, there has been contention as to whether an AI system can be named as the “inventor” on a patent application. In Thaler v. Commissioner of Patents [2021] FCA 879, the Federal Court at first instance held that the relevant AI system (i.e., DABUS) could be considered an “inventor” for the purposes of section 15(1) of the Patents Act, on the basis that an “inventor is an agent noun” and “an agent can be a person or a thing that invents”. 

On appeal, the Full Court of the Federal Court of Australia in Commissioner of Patents v. Thaler [2022] FCAFC 62 overturned the original decision, holding that an “inventor”, within the meaning of section 15(1) of the Patents Act, had to be a natural person, citing the historical role of the inventor in patent law, the plain reading of the section, and the structure and policy objectives of the Patents Act. Dr Thaler sought special leave to appeal to the High Court of Australia; however, the High Court refused the application after hearing oral argument on 11 November 2022. 

While the effect of these decisions is that an AI system cannot be named as an inventor, the litigation may be a prelude to an imminent policy debate in Australia about inventions involving AI. Interestingly, the Full Court considered:

  • whether an “inventor” should be redefined to expressly include AI, and if so, to whom such an AI-invented patent could be granted, and the standard of the inventive step that should be applied; and
  • that its decision did not necessarily preclude the granting of patents from AI-devised inventions in another case.

2.2. Copyright

The main source of law governing copyright in Australia is the Copyright Act 1968 (Cth) (Copyright Act). There are no express references to AI under the Copyright Act. However, to the extent that an AI algorithm is written (e.g., as represented in software as source code), the software will be considered a “literary work” and potentially subject to the protections under the Copyright Act, including a prohibition on unauthorised reproduction.

In respect of computer-generated works, the Copyright Act restricts copyright protection to works originating from an “author” — that is, a person who brings the work into existence in its material form. Specifically, an author must be a human, and works emerging solely from the operation of a computer system do not originate from an individual (see Telstra Corp Ltd v. Phone Directories Co Pty Ltd [2010] FCA 44 (Telstra)). In Telstra, the Court held that phone directories which had been largely organised and presented by a computer program were not subject to copyright protection, as the compilation did not originate from an individual (i.e., there was an absence of human authorship). 

Whether works generated by both a human author and a computer program together will be subject to copyright protection will depend on:

  • the authorial contribution of the person;
  • the control the person exerts over the final material form of the work; and 
  • the extent to which the relevant computer program is used as a “tool”.

2.3. Trade secrets and confidentiality

There are no specific trade secrets or confidentiality law requirements in respect of AI. 

2.4. Notable cases

See above, Sections 2.1 and 2.2, for summaries of Thaler and Telstra.

3. Data

3.1. Domestic data law treatment

The main source of law governing privacy in Australia is the Privacy Act 1988 (Cth) (Privacy Act). As at the date of writing, the Privacy Act does not contain any express provisions which directly regulate the use of AI. However, the Privacy Act may regulate data used in AI systems to the extent that such data is “personal information” — essentially, information about a reasonably identifiable individual. For example, an organisation which uses personal information as input to an AI system must comply with the requirements under the Privacy Act, notably the Australian Privacy Principles (APP) (i.e., the personal information must be used for the primary purpose for which it was collected, or a secondary purpose within the reasonable expectation of the identified individual). 

The Privacy Act is currently undergoing significant reform, which is expected to be implemented across 2024-2025. The first tranche of proposed changes to the Privacy Act was released on 12 September 2024 in the form of the Privacy and Other Legislation Amendment Bill 2024. In its response to the Privacy Act Review Report (Government Response), the Government agreed to introduce requirements relating to Automated Decision Making (ADM), which broadly refers to the application of automated systems in any part of a decision-making process. In particular, amendments to the Privacy Act will likely (as illustrated in the sketch following this list):

  • require organisations to uplift their privacy policies by setting out the types of personal information that will be used in substantially automated decisions which have a legal, or similarly significant, effect on an individual’s rights, and the types of decisions that will be made using ADM;
  • grant individuals a right to request meaningful information about how automated decisions with legal or similarly significant effect are made; and 
  • require APP entities to conduct a Privacy Impact Assessment prior to undertaking “high risk activities” (i.e., those activities likely to have a significant impact on the privacy of individuals, which include ADM).
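
The ADM reforms above are not yet law and prescribe no particular format. As a purely illustrative sketch (in Python, with hypothetical field and function names), an organisation might record substantially automated decisions along the following lines so that it could later respond to a “meaningful information” request:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AutomatedDecisionRecord:
    """Hypothetical record of a substantially automated decision with a
    legal or similarly significant effect; field names are assumptions,
    not anything prescribed by the proposed reforms."""
    subject_id: str                      # internal identifier, not raw personal information
    decision: str                        # e.g., "loan_declined"
    personal_info_categories: list[str]  # categories disclosed in the privacy policy
    main_factors: list[str]              # human-readable factors behind the outcome
    model_version: str                   # which automated system produced the decision
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

def explain(record: AutomatedDecisionRecord) -> str:
    """Produce the plain-language summary that a 'meaningful information'
    request from the affected individual might draw on."""
    return (
        f"Decision '{record.decision}' was made on {record.made_at:%d %B %Y} "
        f"by automated system {record.model_version}, based primarily on: "
        f"{', '.join(record.main_factors)}."
    )

record = AutomatedDecisionRecord(
    subject_id="cust-0412",
    decision="loan_declined",
    personal_info_categories=["credit history", "income"],
    main_factors=["serviceability ratio below threshold"],
    model_version="credit-model-v3.2",
)
print(explain(record))
```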

3.2. General Data Protection Regulation

The European Union’s General Data Protection Regulation (GDPR) will apply to Australian organisations that fall within its extraterritorial ambit. 

3.3. Open data and data sharing

There are “open data” legislative regimes in Australia that enable the sharing of data, which may support the development and adoption of AI technologies. 

The Data Availability and Transparency Act 2021 (Cth) (DAT Act) facilitates the sharing of “public sector data” (meaning data that is lawfully created, collected or held by or on behalf of a Commonwealth body) with government departments and universities to stimulate the use of public sector data for prescribed purposes, including research and development. The DAT Act sets out a comprehensive accreditation framework and establishes requirements in order for accredited users to access the relevant datasets (e.g., the use must align with the “data sharing purposes” and must generally be consistent with the data sharing principles). 

Separately, the Consumer Data Right (CDR) was enacted by the Treasury Laws Amendment (Consumer Data Right) Act 2019 (Cth), amending the Competition and Consumer Act 2010 (Cth). Essentially, the CDR grants consumers the right to access data held about them by businesses or “data holders” in prescribed regulated industries (e.g., energy, banking and telecommunications), and to have that data transferred to an accredited recipient. 

3.4. Biometric data: voice data and facial recognition data

The use of biometric data, including voice and facial recognition data, is currently regulated under the Privacy Act, which prescribes that biometric information used for the purpose of automated biometric verification or biometric identification, and biometric templates, are “sensitive information”. There are requirements regarding the collection, use and disclosure of sensitive information, for example (a sketch of these checks follows the list):

  • an organisation to which the Privacy Act applies must not collect sensitive information unless the relevant individual consents to the collection, and the information is reasonably necessary for one or more of the entity’s functions or activities; and
  • an organisation to which the Privacy Act applies may only use sensitive information if such use is within the reasonable expectation of the relevant individual, and directly related to the primary purpose for which the information was collected.
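
By way of illustration only (the Privacy Act prescribes no such mechanism, and all names below are assumptions), a deployer of a biometric system might encode the consent and necessity conditions above as a precondition check in Python before collecting biometric information:

```python
from dataclasses import dataclass

@dataclass
class CollectionRequest:
    """Hypothetical request to collect biometric (sensitive) information."""
    individual_consented: bool  # has the individual consented to the collection?
    purposes: set[str]          # purposes the collection would serve
    entity_functions: set[str]  # the entity's declared functions or activities

def may_collect_sensitive_info(req: CollectionRequest) -> bool:
    """Allow collection only if the individual consents AND the information
    is reasonably necessary for at least one of the entity's functions,
    mirroring the first requirement listed above."""
    reasonably_necessary = bool(req.purposes & req.entity_functions)
    return req.individual_consented and reasonably_necessary

request = CollectionRequest(
    individual_consented=True,
    purposes={"identity_verification"},
    entity_functions={"identity_verification", "fraud_prevention"},
)
print(may_collect_sensitive_info(request))  # True
```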

In 2021, the Office of the Australian Information Commissioner (OAIC) made a determination against Clearview AI, Inc., finding that it had breached the Privacy Act by scraping individuals’ biometric information from the web and disclosing it through a facial recognition tool. The OAIC held that the collection and use of such sensitive information was unreasonably intrusive and unfair, and carried a significant risk of harm to individuals, including vulnerable groups such as children and victims of crime. 

The HRC Report (noted above in Section 1.2) has recommended a moratorium on biometric technologies for use in circumstances where high-risk decision-making is involved, such as in relation to schools and policing. The HRC recommends that this moratorium should stay in place until there is further law reform which sets out express human rights protections regarding the use of biometric technology. 

The HRC has cited a number of key human rights concerns as the basis of its recommendation, including:

  • the high rate of error, especially in one-to-many facial recognition technology, which disproportionately affects vulnerable people by reference to characteristics such as skin colour, gender and disability;
  • the increasing rate of facial recognition trials in high-stakes government contexts in Australia, including policing, education and service delivery, and the corresponding increase in human rights risks that potential errors present; and
  • the increased risk of mass surveillance stemming from the cumulative impact of the proliferation of facial recognition technologies, which can affect human rights including freedom of expression and association.

The University of Technology Sydney is currently developing a report outlining a Model Law for Facial Recognition Systems in Australia, in collaboration with the former Commissioner of the HRC. It is likely that a dedicated facial recognition law will be developed in future. 

4. Bias and discrimination

The Australian Government and HRC regard bias and discrimination to be a central issue that must be addressed in shaping the development of AI regulation in Australia. The HRC published a 2020 report, “Using Artificial Intelligence to make decisions: Addressing the problem of algorithmic bias”, which is intended to provide guidance to governments and industry bodies on creating and using fairer decision-making processes driven by AI systems.

Separately, the Australian AI Ethics Framework was established to highlight the Australian Government’s commitment to ensuring the use of responsible and inclusive AI. The eight AI Ethics Principles are intended to encourage businesses and governments employing AI systems to practise the highest ethical standards when designing, developing and implementing AI (see above, Introduction).

4.1. Domestic anti-discrimination and equality legislation treatment

A key source of law governing bias and discriminatory practices in Australia is the Disability Discrimination Act 1992 (Cth) (Discrimination Act). The Discrimination Act does not contain any express references to AI. Instead, it generally prohibits an organisation from discriminating against a person on the basis of their disability when providing goods and services to that person. To avoid discriminatory conduct, the organisation must take steps to make the relevant goods and services accessible to persons with a disability by making reasonable adjustments to the manner in which the goods and services are provided. However, as an exception, an organisation is not required to make such reasonable adjustments, or otherwise take action to avoid discriminatory conduct, if doing so would impose an unjustifiable hardship on the organisation.

It is possible that an AI system would be considered a good or service for the purposes of the Discrimination Act. Therefore, the general requirements set out above would apply to the use of an AI system.

Operating in conjunction with the above, there are other anti-discrimination laws which may be relevant to the types of AI that may be developed and their algorithmic content, such as:

  • the Racial Discrimination Act 1975 (Cth), which prohibits discrimination on the basis of race, colour, descent, nationality, ethnicity or immigration status;
  • the Sex Discrimination Act 1984 (Cth), which prohibits discrimination on the basis of sex, marital or relationship status, or pregnancy; and
  • the Age Discrimination Act 2004 (Cth), which prohibits discrimination on the basis of age.

5. Cybersecurity and resilience

There are no Australian cybersecurity or resilience requirements directly or indirectly related to AI.

5.1. Domestic technology infrastructure requirements

N/A

6. Trade, anti-trust and competition

6.1. AI-related anti-competitive behaviour

AI-related anti-competitive behaviour is discussed in Section 6.2 below.

6.2. Domestic regulation

The main source of law governing trade, anti-trust and competition in Australia is the Competition and Consumer Act 2010 (Cth) (CCA), and compliance with these laws is regulated and enforced by the Australian Competition and Consumer Commission (ACCC).

While there are no specific provisions under the CCA which govern the use of AI technologies, the ACCC has previously considered the applicability of competition law to AI-related technologies in its 2017 publication, “The ACCC’s approach to colluding robots”. The ACCC raised the following issues:

  • The implications of big data in the context of authorising mergers and acquisitions, and the impact of big data in determining market share and barriers to entry — the 2019 Digital Platforms Inquiry report also addressed the anti-trust risks associated with the accumulation of market power with respect to data.
  • The use of price algorithms and potential for collusion or concerted practices. The ACCC raised the possibility of certain machine learning algorithms which compare and set prices in a given market for consumers to produce collusive outcomes and engage in practices such as price-fixing. 
  • The use of anti-competitive algorithms. For example, the proceedings in Google Inc v. ACCC [2013] HCA 1 concerned Google’s display of sponsored links in which searches for a business would return advertisements naming competitors that had entered into advertising arrangements with Google. The High Court allowed Google’s appeal, ultimately holding that such conduct was not misleading or deceptive on the basis that Google had not itself created the published links.

7. Domestic legislative developments

There are a number of additional regulatory initiatives carried out by government and industry bodies in Australia. In summary: 

  • The Digital Technology Taskforce, as part of the Department of the Prime Minister and Cabinet, undertook a public consultation process and published an issues paper titled “Positioning Australia as a Leader in Digital Economy Regulation: Automated Decision Making and AI Regulation” in 2022. The paper identifies possible reforms and legislative action to be taken in respect of AI in Australia. 
  • The National Transport Commission’s (NTC) Automated Vehicle Program Approach published in September 2020 outlines the NTC’s planned reforms regarding the use of automated vehicles. In particular, the report considers on-road enforcement actions, insurance arrangements, in-service safety requirements, government access to vehicle-generated data, and sets out guidelines for trials of automated vehicles.
  • Standards Australia published a final report titled “An Artificial Intelligence Standards Roadmap: Making Australia’s Voice Heard” in December 2020 which sets out eight recommendations with respect to AI regulation and best practice standards in Australia. These recommendations include calls to:
    • explore avenues for enhanced cooperation with international bodies, including the United States National Institute of Standards and Technology, with the aim of improving Australia’s knowledge and influence in international AI Standards development; and
    • grow Australia’s capacity to develop and share best practice in design, deployment and evaluation of AI systems with a Standards Hub and security-by-design initiative.

7.1. Proposed and/or enacted AI legislation

There is currently no draft bill or any other dedicated AI legislation in Australia.

Australia’s Online Safety Act 2021 (Cth) establishes mandatory Designated Industry Codes and Standards broadly governing organisations that engage in “online activity”. In November 2023, the draft Designated Internet Services Standard (DIS Standard) was published; it sets out specific obligations that apply to providers of “high impact generative AI” designated internet services. These are services which use machine learning models to enable end users to produce synthetic high-impact material, such as deepfake films, that is or would likely be classified R18+ or X18+. The DIS Standard imposes monitoring and notification obligations to identify and report the presence of prohibited material (e.g., child sexual exploitation material, pro-terror material, or extreme crime or violence material) on a provider’s service, and requires providers to develop system capabilities to escalate and remove prohibited material.

The DIS Standard specifically calls for regulated service providers to implement systems, processes and technologies that detect and identify prohibited materials, and expressly references the implementation of machine learning and artificial intelligence systems that scan for the presence of such materials as a means by which providers may meet this requirement.
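
The DIS Standard does not mandate any particular detection technique. One common building block, matching uploads against a list of digests of known prohibited material (often combined with machine learning classifiers for previously unseen material), can be sketched in Python as follows; the hash list and all names are hypothetical:

```python
import hashlib

# Hypothetical set of digests of known prohibited material, e.g. sourced
# from an industry hash-sharing scheme. Real deployments commonly use
# perceptual hashes (robust to re-encoding) rather than exact SHA-256.
KNOWN_PROHIBITED_HASHES: set[str] = set()  # populated from the scheme

def scan_upload(content: bytes) -> bool:
    """Return True if the upload matches known prohibited material and
    should be escalated and removed under the provider's processes."""
    return hashlib.sha256(content).hexdigest() in KNOWN_PROHIBITED_HASHES

if scan_upload(b"example upload bytes"):
    print("Escalate, remove and report per the provider's obligations.")
```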

The Online Safety Act highlights an interesting interplay: it establishes a targeted regulatory approach to govern the use of AI technologies in an online environment, while also demonstrating the Government’s willingness to openly encourage organisations to adopt AI as a tool to assist in meeting their compliance obligations. In any event, the introduction of these laws makes it imperative that they interoperate and work consistently with any overarching AI law that is introduced. This will be a common theme, as the Government has indicated the need to uplift Australia’s existing laws to effectively regulate the use and deployment of AI systems.

7.2. Proposed and/or implemented Government strategy

On 1 June 2023, the Department of Industry, Science and Resources in Australia published a Discussion Paper titled “Safe and Responsible AI in Australia”. The Discussion Paper provided an overview of the key opportunities and challenges presented by AI in the Australian market, and summarised the regulatory approach adopted by other countries around the world in regulating AI, including the European Union (EU) and the United States as a reference for potential reform avenues for AI in Australia.

Subsequently, on 5 September 2024, the Department of Industry, Science and Resources published a “proposals paper for introducing mandatory guardrails for AI in high-risk settings” (Proposals Paper). The Proposals Paper sets out ten substantive obligations (i.e., mandatory guardrails) that are proposed to apply to developers and deployers of “high-risk” AI and general-purpose AI.

These guardrails would require organisations to do the following (an illustrative compliance-tracking sketch follows the list):

  1. establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance; 
  2. establish and implement a risk management process to identify and mitigate risks; 
  3. protect AI systems, and implement data governance measures to manage data quality and provenance; 
  4. test AI models and systems to evaluate model performance and monitor the system once deployed; 
  5. enable human control or intervention in an AI system to achieve meaningful human oversight; 
  6. inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content; 
  7. establish processes for people impacted by AI systems to challenge use or outcomes;
  8. be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks; 
  9. keep and maintain records to allow third parties to assess compliance with guardrails; and 
  10. undertake conformity assessments to demonstrate and certify compliance with the guardrails.
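
These guardrails remain proposals, and no compliance schema has been prescribed. The sketch below (all names are assumptions) illustrates how a deployer might track internal evidence against each of the ten proposed guardrails, in the spirit of guardrail 9’s record-keeping obligation:

```python
from dataclasses import dataclass, field

# Shorthand labels for the ten proposed guardrails, paraphrased from the
# Proposals Paper; the numbering follows the list above.
GUARDRAILS = {
    1: "accountability process",
    2: "risk management process",
    3: "data governance and system protection",
    4: "testing and post-deployment monitoring",
    5: "meaningful human oversight",
    6: "end-user transparency",
    7: "contestability processes",
    8: "supply-chain transparency",
    9: "record keeping",
    10: "conformity assessment",
}

@dataclass
class GuardrailEvidence:
    """Evidence that a given proposed guardrail has been addressed."""
    guardrail: int
    artefacts: list[str] = field(default_factory=list)  # e.g., policies, test reports

def coverage_gaps(evidence: list[GuardrailEvidence]) -> list[str]:
    """Return the proposed guardrails for which no evidence is yet recorded."""
    covered = {e.guardrail for e in evidence if e.artefacts}
    return [name for number, name in GUARDRAILS.items() if number not in covered]

records = [
    GuardrailEvidence(2, ["risk register v1"]),
    GuardrailEvidence(9, ["decision log specification"]),
]
print(coverage_gaps(records))  # the eight guardrails still to be evidenced
```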

In terms of scope, the Government proposes to adopt the following definition of “general-purpose AI”: “an AI model that is capable of being used, or capable of being adapted for use, for a variety of purposes, both for direct use as well as for integration in other systems”.

This is the same definition adopted under Canada’s proposed Artificial Intelligence and Data Act, and demonstrates the Australian Government’s commitment to the global interoperability of the proposed AI laws and to ensuring those laws are aligned with leading international standards and regulations. 

The Proposals Paper does not commit to a definition for “high-risk AI”. Instead, the Proposals Paper puts forward two potential options for defining “high-risk AI” — a list-based approach, and a principles-based approach. The list-based approach would effectively mirror the definition of the equivalent term adopted under the EU Artificial Intelligence Act — that is, “high-risk AI” in this context would be prescribed and apply to specific industries and applications of AI. 

Alternatively, the principles-based approach attempts to reflect a more nuanced method of identifying regulated AI systems according to certain risk criteria, including with reference to the AI system’s intended and foreseeable uses. Organisations would be required to assess the relevant AI system according to the risk of adverse impacts to:

  • an individual’s rights recognised under Australian human rights law and Australia’s international human rights obligations, without justification;
  • an individual’s physical or mental health or safety;
  • groups of individuals or collective rights of cultural groups; and
  • the broader Australian economy, society, environment and rule of law.

The risk of adverse legal effects, defamation or similarly significant effects on an individual must also be considered.

The Proposals Paper also sets out three different regulatory models through which the mandatory guardrails could be adopted under law: 

  • A “domain specific approach” where the guardrails would be incorporated into existing regulatory frameworks as needed (e.g. by establishing requirements under Australia’s privacy, consumer, and copyright laws). 
  • A “whole of economy approach” involving the introduction of a new cross-economy AI-specific Act (e.g., establishing an Australian AI Act). 
  • A “framework approach” where new framework legislation would be established to adapt existing regulatory frameworks across the economy; this approach sits between the above two models.

The Government invited public submissions on the above regulatory features, with consultation closing on 4 October 2024. Alongside the Proposals Paper, the Government also released a Voluntary AI Safety Standard, which sets out voluntary standards that organisations are encouraged to comply with, and which is intended to provide detailed practical guidance on the safe and responsible use and procurement of AI. The Government has also established an expert advisory board, primarily comprising members of public bodies and research institutions, to support the development of options for further AI guardrails.

Separately, the New South Wales (NSW) Government has also published an “AI Assurance Framework” (Framework). From March 2022, NSW Government bodies were required to complete and submit an assessment under the Framework for review and approval by an AI Review Committee when procuring and using bespoke AI systems for a project or service initiative exceeding an estimated total cost of AUD 5 million (or where the project or service is funded by the Digital Restart Fund, a whole-of-government digital transformation initiative). The Framework incorporates principles from NSW’s AI Ethics Principles and requires the relevant bodies to complete a benefits-versus-risks assessment in respect of the AI system.

8. Frequently asked questions

8.1. What are the implications of the latest Proposals Paper setting out mandatory guardrails for high-risk AI? 

The Government has committed to establishing mandatory risk-based laws to govern the deployment and development of high-risk and general-purpose AI in Australia that will likely be introduced in 2025. Organisations should refer to the Voluntary AI Safety Standard which sets out voluntary guardrails, as well as the accompanying procurement guidance materials, to determine what may be required from a compliance perspective in future and how best to establish compliant internal AI governance frameworks. 

The voluntary guardrails introduced under the Voluntary AI Safety Standard are closely aligned with the proposed mandatory guardrail requirements which means that organisations achieving a level of compliance with the voluntary standards will already be in a position of substantial compliance with the mandatory guardrails (to the extent they apply).

In the case of deployers/users of AI systems, these organisations should also consider the contractual mechanisms that can already be adopted (e.g. in relation to their arrangements with suppliers and developers of AI) to achieve compliance with the guardrails — the procurement guidelines advise on the key uplifts that may be made to such contractual arrangements. 

8.2. Who is responsible for decisions made by AI systems, and how will liability in respect of such decisions be attributed?

There is no legislative guidance or jurisprudence which expressly deals with responsibility for the decisions made by AI systems, or any liability that flows from such decisions. For example, there is no standard position as to whether responsibility and liability should fall on the organisation deploying the AI system, the end user of the AI system, or any other parties who have contributed to the development of the AI system (i.e., hardware and software manufacturers, programmers and data suppliers). However, the Proposals Paper suggests that a greater onus for compliance may be placed on deployers of AI systems.

The attribution of responsibility and liability in respect of AI systems will likely depend upon the extent to which a decision can be traced back through the decision-making matrix to identify the “bad actor” or fault component, which in turn depends upon the extent to which the factors contributing to the relevant decision can be identified. Future AI laws in Australia will likely include robust and prescriptive requirements with respect to transparency and the explainability of decisions made by AI systems, both of which are integral to this evaluative process.
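
No statutory standard for such traceability yet exists; in practice it is an engineering discipline. The following minimal Python sketch (all names hypothetical) shows the kind of decision log that could support tracing an outcome back through the decision-making matrix described above:

```python
import json
from datetime import datetime, timezone
from typing import Optional

def log_decision(model_id: str, input_summary: dict, output: str,
                 human_reviewer: Optional[str] = None) -> str:
    """Build a JSON log entry recording who and what contributed to a
    decision, so responsibility can later be traced and attributed."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,              # which system made the decision
        "input_summary": input_summary,    # factors relied upon (no raw personal data)
        "output": output,                  # the decision itself
        "human_reviewer": human_reviewer,  # None if fully automated
    }
    return json.dumps(entry)

print(log_decision(
    "claims-triage-v7",
    {"claim_type": "motor", "estimated_value": "low"},
    "refer_to_assessor",
    human_reviewer="assessor-42",
))
```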

8.3. Who owns the output of AI-generated creations, and can AI systems own the outputs they produce? 

The issue of ownership with respect to AI is highlighted by the decisions of Thaler and Telstra in the context of AI patent inventions and AI copyrighted works respectively. The cases reveal a gap in the relevant legislative regimes (i.e., the Patents Act and Copyright Act) which are not currently suited to address the increasingly complex and prevalent issue of AI-generated inventions and works, and how the ownership of such IP rights should be designated. In light of these decisions, there is a risk that such inventions and works generated without human intervention or direction will not receive any intellectual property protections or rights.
