Post #4: The State of Global AI Regulation

In our previous blog post, we highlighted the primary stakeholders across the AI governance landscape, the unique roles they play in shaping the collective global response to AI risks, and the areas of misalignment (and material gaps) that exist in their architecture and approaches.   

In this post, we set out the legal framework shaped by governments as primary stakeholders. As we suggest, extant laws are inadequate: in nearly all instances, they were not designed to govern many of the new realities arising from AI's ubiquitous uptake. This gap between existing laws and decision-makers' need for clarity both motivates and necessitates our research.

As of this writing, it is generally agreed that the legal framework for AI regulation is sparse. Regulation impacting AI systems derives from two sources: (a) non-AI-specific laws and regulations (which apply to and shape AI); and (b) AI-specific laws, principally in the form of regulations by administrative bodies, legislative acts, and judicial decisions.

 

Non-AI-specific laws and regulations  

Privacy Laws 

In general, privacy laws constitute the majority of non-AI-specific laws with immediate implications for AI governance. Laws such as the General Data Protection Regulation (GDPR) of the EU and the California Consumer Privacy Act (CCPA) regulate AI and automated decision-making by (a) allowing users to opt out of profiling and automated decision-making, (b) requiring impact assessments on new modes of data processing, and (c) generally requiring those who process personal data to do so in accordance with certain standards, including but not limited to data minimization, transparency, and security.

Anti-Discrimination Laws 

Other sets of non-AI-specific laws that nevertheless apply to AI include laws on equal treatment and bias, such as Title VII of the Civil Rights Act of 1964 in the US and the Equality Act 2010 in the UK, which protect individuals from discrimination based on a range of protected characteristics. These laws apply to algorithmic bias, such as where a system's outputs disadvantage members of protected classes.

General Laws 

Other generally applicable and industry-specific laws, such as tort, intellectual property, and product liability law, as well as healthcare, insurance, and financial regulations, may also apply to AI applications. Suffice it to say that cognizable legal claims under a wide variety of general laws may arise from the development and use of AI systems.

Partially AI-Specific Regulations  

It is worth mentioning that in the US, a collection of state and local laws, including the Illinois AI Video Interview Act and New York City's Local Law 144 (addressing automated employment decision tools), occupies a nebulous middle ground: these laws apply specifically to AI and AI-related harms, but only in a highly limited context (i.e., employment).

Caremark Case 

The Caremark Standard is not a legislative or regulatory act but the product of judicial interpretation. The Standard establishes two claims that can be brought against corporate boards: (a) that directors utterly failed to implement any reporting or information system or controls; and (b) that directors, having implemented such a system or controls, failed to monitor or oversee its operations, thereby disabling themselves from being informed of risks or problems requiring their attention. In both instances, the failure exposes the corporation to liability.

Although not specifically formulated to impact AI systems, Caremark applies to AI risks by recognizing key oversight responsibilities at the top rung of corporations, where, incidentally, the material decisions about AI systems impacting society are (or should be) made.

Ultimately, these varying laws and regulations apply differently across the AI value chain, as summarized in the table below: 

| S/N | Type of law | Examples | Application to AI value chain |
|-----|-------------|----------|-------------------------------|
| 1 | Privacy | GDPR, CCPA | Model training and live use |
| 2 | Anti-discrimination | Title VII of the Civil Rights Act | Live use |
| 3 | General | Healthcare regulations | Use in specific scenarios |
| 4 | Partially AI-specific | Local Law 144 | Use in specific scenarios |
| 5 | Caremark Standard | In re Caremark Int'l | Entire value chain, board-focused |

 

AI-specific laws and regulations  

The EU recently recorded a significant AI governance triumph in the form of the EU AI Act. In December 2023, the European Parliament and Council reached a provisional agreement on the Act, following months of difficult deliberations. The Act introduces a common regulatory and legal framework for AI based on a graduated, risk-based approach, and it is the first comprehensive body of law in the world to address AI governance.

Despite the EU's achievement, the vast majority of jurisdictions still do not have laws providing for AI governance. In the US, there is no comprehensive federal law governing AI. The Biden Administration recently issued an Executive Order (EO) that attempts to corral certain actors in the AI space toward voluntary adoption of responsible AI standards. But without the full force of law behind it, expectations for the EO are at most modest.

Ultimately, the AI regulatory landscape, at least in the US, comprises a patchwork of legislative acts, regulations, and case law with limited utility for navigating emergent AI risks. This paradigm is reflected in many countries around the world and is generally representative of the global landscape of AI regulation. China, Canada, Japan, India, Singapore, Australia, and the UAE have teased specific AI legislation; yet, at this time, most of these countries have only managed to finalize national strategy documents and soft laws that direct AI development and/or socialize knowledge of the promises and perils of the technology. There is a sense that this status quo may not change much at this stage, as governments continue to prioritize understanding AI risks and maximizing possible benefits.

 

We have heard from so many readers of this Business AI Ethics blog – thank you! If you share our passion for this work, we welcome your input and collaboration. Until next week. 

 

The Business AI Ethics research team is part of the Edmond & Lily Safra Center’s ongoing effort to promote the application of ethics in practice. Their research assists business leaders in examining the promise and challenges of AI technologies through an ethical lens. Views expressed in these posts are those of the author(s) and do not imply endorsement by ELSCE.