Post #6: The Caremark Rule and Board-Level AI Risk Management

In a previous essay, we outlined the relatively sparse global legal and regulatory landscape for AI as driven by governments and policymakers as key stakeholders. There we noted that the Caremark standard governs board oversight of company risks, with direct implications for a company’s AI program. See In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996). We contended that the principles enunciated in Caremark have far-reaching implications for a US board.

In this essay, we further examine the Caremark rule and its progeny. We explore the import and scope of this standard and highlight its unique importance to our design of a practical ethical framework for AI governance, one that is aligned with the law, legitimate civil society interests, and business priorities.

The Caremark decision extends directors’ fiduciary duties

Under Delaware law (and, generally, under US corporation law), directors owe their corporations two primary fiduciary duties: (a) the duty of care and (b) the duty of loyalty. See Smith v. Van Gorkom, 488 A.2d 858 (Del. 1985). In Caremark, however, the Delaware Court of Chancery extended these fiduciary duties, recognizing that directors have an obligation to implement and monitor corporate compliance systems in good faith; the Delaware Supreme Court later grounded this oversight obligation in the duty of loyalty. This is what has come to be known as the Caremark Rule.

Applying Caremark - what are the core principles? 

Caremark has been interpreted by the courts over the years as creating two claims that can be brought against corporate boards. See Stone v. Ritter, 911 A.2d 362 (Del. 2006). The first is a claim that the directors utterly failed to implement any reporting or information system or controls, thereby exposing the corporation to legal liability. This claim is commonly referred to as the “Information System Control” prong.

The second is a claim that the board, having implemented such a system or controls, consciously failed to monitor or oversee its operations, thus disabling itself from being informed of risks or red flags requiring its attention. This claim is commonly referred to as the “Red Flags” prong. See In re Caremark International Inc. Derivative Litigation, 698 A.2d 959 (Del. Ch. 1996).

The two prongs of the Caremark rule are distinct and independent; satisfaction of one prong does not necessarily mean that the board has satisfied the other. See Teamsters Local 443 Health Servs. & Ins. Plan v. Chou, C.A. No. 2019-0816-SG (Del. Ch. 2020).

The “Mission-Critical” Standard 

In 2019, the Delaware Supreme Court in Marchand v. Barnhill established a heightened oversight responsibility for mission-critical operations, further expanding the Caremark Rule. See Marchand v. Barnhill, 212 A.3d 805 (Del. 2019). For these mission-critical operations, the board’s oversight function must be “more rigorously exercised.” See In re Clovis Oncology, Inc. Derivative Litigation, C.A. No. 2017-0222-JRS (Del. Ch. 2019). This is true under both prongs of the Caremark rule.

The Delaware courts have not articulated a clear yardstick for ascertaining what constitutes a mission-critical operational risk or failure. However, a close reading of the cases suggests that courts may consider the following mission-critical:

a. Risks arising from compliance with positive law that pertain to the core business of a company operating in a single segment, where a failure to comply may impair the company’s ability to do business. See Marchand v. Barnhill, 212 A.3d 805 (Del. 2019); see also In re Boeing Co. Derivative Litigation, C.A. No. 2019-0907-MTZ (Del. Ch. 2021).

b. Risks arising from compliance with positive law that pertain to one of the key operations of a company operating in multiple segments, where noncompliance is directly contrary to the central purpose of the company’s business and impairs its ability to do business. See Teamsters Local 443 Health Servs. & Ins. Plan v. Chou, C.A. No. 2019-0816-SG (Del. Ch. 2020); see also In re Clovis Oncology, Inc. Derivative Litigation, C.A. No. 2017-0222-JRS (Del. Ch. 2019).

c. Risks that do not arise from compliance with positive law but that pertain to a component of the company’s business so critical as to warrant heightened oversight, having regard to (a) the company’s extraordinary reliance on that component, (b) the ubiquity of the risk, and (c) the existence of non-binding soft law, industry regulations, and rules indicating a duty to act on such risks. See Constr. Indus. Laborers Pension Fund v. Bingle, C.A. No. 2021-0494-SG (Del. Ch. 2022).


The Caremark Standard applied to AI 

Caremark claims may be successfully argued where a fact pattern exists evidencing a sustained or systematic failure of board oversight of AI-related harms. Several contexts may yield such a fact pattern, including but not limited to (a) where AI is the very essence of the company’s business, (b) where AI is a central component of the company’s core operations, (c) where AI is part of or supports the company’s operations in a high-risk area, and (d) where routine AI use results in foreseeable harm.

Courts may also conclude that AI-related risks meet the mission-critical standard described above. For that to happen, to borrow the language of the court in Constr. Indus. Laborers Pension Fund, one may have to “envision an extreme hypothetical.” See Constr. Indus. Laborers Pension Fund v. Bingle, C.A. No. 2021-0494-SG (Del. Ch. 2022). Yet, given the potential for an outsized impact from AI (including the extreme nature of certain risks), it is certainly plausible that an AI application could constitute such an “extreme hypothetical” and meet the mission-critical standard, thereby requiring heightened board oversight.

Delaware courts have already considered Caremark claims brought against boards for alleged failures to oversee cybersecurity risks. In Constr. Indus. Laborers Pension Fund, the court acknowledged that the SolarWinds board had failed to prevent a large corporate trauma and that cybersecurity is mission-critical for online service providers, although the claim was ultimately dismissed.

Pertaining to the obligations of corporate boards, the Caremark standard, derived from principles of equity applied by the Delaware Court of Chancery, provides a baseline for the formulation of a practical ethical framework for governing AI risks. The same may be true for members of the C-suite, as the Caremark standard has been held to apply to corporate officers. See In re McDonald’s Corp. Stockholder Derivative Litigation, C.A. No. 2021-0324-JTL (Del. Ch. 2023).

We have heard from so many readers of this Business AI Ethics blog – thank you! If you share our passion for this work, we welcome your input and collaboration. Until next week.

The Business AI Ethics research team is part of the Edmond & Lily Safra Center’s ongoing effort to promote the application of ethics in practice. Their research assists business leaders in examining the promise and challenges of AI technologies through an ethical lens. Views expressed in these posts are those of the author(s) and do not imply endorsement by ELSCE.