Post #7: The External Forces Influencing Business AI Governance 

This essay outlines the external forces influencing the responsible and ethical design, development, and deployment of AI systems by businesses. Companies are moving fast to launch generative AI applications to remain competitive in their fields and produce innovations relevant to their customers and other key stakeholders. AI-fueled transformation is happening across all sectors at breakneck speed, with new applications outpacing associated governance protocols. As discussed in previous essays, AI is an incredibly powerful technology that poses risks that can be material for both the enterprise and society at large. We believe it’s imperative that business leaders understand the ethical risks posed by AI and take appropriate actions.

We observe a set of influencers from outside the enterprise that are shaping companies’ responsible and ethical AI programs. These influencers are already making an impact; it’s imperative that companies have a solid understanding of these external forces to effectively govern their AI programs and manage risks appropriately.

We categorize and further explain these external influencers here: 

Societal Guardians 

This category encompasses a diverse array of advocates and organizations working to promote ethical practices and protect human and societal interests in the proliferation of AI technologies. It includes regulatory and policymaking actors, NGOs, multilateral organizations, as well as academic and research institutions / think tanks. Together, these influencers play a pivotal role in shaping the ethical landscape of AI governance, and their impact is increasingly being felt by senior business leaders. 

Government regulators and policymakers are the primary architects of legal frameworks governing AI technologies. Notably, they establish enforceable regulations aimed at ensuring that AI systems operate within ethical boundaries and do not pose material risks to individuals or society at large. By setting clear legal mandates and enforcing compliance, regulators influence business decision-making and contribute to building societal trust and confidence in AI technologies while safeguarding against a multitude of potential harms.

Multilateral organizations (such as the United Nations and The World Bank Group) and a plethora of NGOs operate on the frontline of ethical advocacy, championing human rights (such as personal privacy) and transparency and explainability in AI design, development, and deployment. These organizations monitor industry practices, raise awareness about material risks, and advocate for policies that prioritize human flourishing. Through research, advocacy campaigns, and public engagement, they exert significant influence on corporate boardrooms and the C-suite by shaping societal perceptions and expectations surrounding AI ethics.

Academic and research institutions/think tanks play a crucial role in advancing our understanding of AI ethics and developing frameworks for responsible AI governance. Through their extensive research, they explore ethical dilemmas, assess the societal impact of AI technologies, and propose solutions for mitigating risks. By producing cutting-edge research and providing much needed expertise, these institutions contribute to the development of ethical norms and best practices in AI governance, thereby influencing senior business leaders. 

The Protectors 

The Protectors category comprises a diverse set of entities committed to safeguarding the interests and rights of certain non-equity stakeholders with a keen interest in AI governance. Such organizations include advocacy groups, representatives of various stakeholders such as customers, employees, and community members, and government agencies / watchdog organizations protecting consumer rights and interests. Together, these organizations serve as vigilant guardians, advocating for transparency, accountability, and ethical conduct in the development and deployment of AI technologies, and are influencing senior business leader decision-making. 

Representatives play a crucial role in amplifying the voices of non-equity stakeholders and advocating for their rights and interests in AI governance. They champion causes related to privacy, data protection, algorithmic fairness, and the ethical use of AI, ensuring that the concerns and perspectives of the stakeholders they represent are appropriately considered. 

Customers, as key stakeholders in the adoption and use of AI-powered business models, products, and services, rely on advocacy groups and representatives to protect their rights and interests. Typically acting through their representatives, these stakeholders advocate for transparency in AI systems, for example, by ensuring that customers are informed about how their data is collected, used, and processed. They also push for fair and accountable AI algorithms that uphold consumer privacy and prevent discriminatory outcomes.

Employees similarly rely on advocacy groups and representatives to safeguard their rights and well-being in the age of AI. These stakeholders (e.g., labor unions) advocate for fair labor practices, including ethical treatment, non-discrimination, and job security in the context of AI-driven automation and workforce transformation. They may also demand transparency and accountability in AI-powered workplace systems, ensuring that employees are not subject to biased or unfair algorithmic decisions. 

Community representatives, acting as local advocates and watchdogs, play a critical role in ensuring that AI technologies benefit communities and address their unique needs and concerns. These stakeholders advocate for responsible AI applications, and engage with policymakers, industry stakeholders, and community members to shape AI policies and initiatives that reflect the values and priorities of local communities. Businesses rely on the strength and support of the communities in which they operate. These community representatives are increasingly demanding responsible AI systems, thereby influencing business leadership.

Government agencies and watchdog organizations may also serve as guardians of consumer rights and interests, overseeing regulatory compliance and enforcement in AI governance. These entities play a crucial role in conducting investigations and enforcing regulations to ensure that AI systems adhere to ethical principles and legal requirements. They monitor industry practices, investigate complaints, and impose penalties on non-compliant entities, deterring unethical behavior and protecting consumers from harm. 

Investor Custodians 

Investor Custodians represent a vital category of influencers tasked with upholding shareholder interests in the governance of AI. This category encompasses institutional investors, proxy advisory firms, rating agencies, and, perhaps in the future, stock exchanges issuing AI-specific guidance. Together, these stakeholders shape corporate behavior and drive accountability in AI-related decision-making.

Institutional investors, as significant shareholders in publicly traded companies, wield substantial influence over corporate governance programs, including those materially impacted by AI. These investors, which include pension funds, mutual funds, and other large investment firms, often hold diversified portfolios comprising shares in numerous companies across various sectors. As stewards of shareholder capital, institutional investors have a vested interest in ensuring that companies adopt responsible AI practices that enhance long-term shareholder value while mitigating potential risks. We have observed early guidance on AI expectations from such investors, including statements issued by Legal and General and Norges Bank. We anticipate that other institutional investors will similarly publish AI governance expectations aimed at their investee companies. 

Proxy advisory firms and rating agencies serve as independent evaluators and influencers in corporate governance. Proxy advisory firms provide voting recommendations to institutional investors on matters such as executive compensation, board composition, and shareholder proposals, including those related to AI governance. Rating agencies assess companies' ESG (environmental, social, and governance) performance, including their approach to AI ethics and risk management, providing investors with valuable insights into companies' reputational and other risks. 

While stock exchanges currently lack specific guidance on AI governance, there is growing recognition of the need for AI-specific disclosure requirements to enhance transparency and accountability in capital markets. In the future, stock exchanges may play a more active role in promoting best practices in AI governance by introducing guidelines or listing standards that require companies to disclose information related to their AI strategies, risks, and ethical considerations.  

Technology Pioneers 

The Technology Pioneers category comprises influential entities at the forefront of innovation and technical AI development. This category includes technology providers and industry associations/standards bodies, both of which play pivotal roles in shaping the future of AI and its impact on society and industry – including individual company AI programs. 

Technology providers are at the heart of the AI ecosystem, driving innovation, developing cutting-edge solutions, and providing access to AI technologies across various sectors. These companies, which range from the established tech giants to emerging startups, develop AI algorithms, systems, and applications that power a wide array of products and services, from virtual assistants and autonomous vehicles to predictive analytics and legal technology systems for contract management.  

Industry associations and standards bodies serve as key stakeholders in the development and dissemination of best practices, guidelines, and technical standards governing AI adoption and deployment. These organizations address common challenges and promote technology interoperability. Through the establishment of standards, industry associations and standards bodies help ensure ethical and responsible AI development and usage across diverse sectors and geographies. 

Discourse Architects 

Discourse Architects encompasses influential entities that play a key role in shaping public perception and discourse surrounding AI. This category includes media and news outlets, journalists and reporters covering AI-related topics, and opinion writers and editorial boards. Together, these influencers shape narratives, disseminate information, and drive discussions that impact how AI is perceived and regulated by society, thereby influencing business AI decision-making and corporate strategies.

Media outlets serve as primary channels for the dissemination of information and news related to AI developments, breakthroughs, applications, and controversies. Through news articles, feature stories, investigative reports, and opinion pieces, media outlets provide insights into the latest advancements in AI technology, the societal implications of AI adoption, and the ethical considerations surrounding AI development and deployment.

Further, journalists and reporters covering AI-related topics play a critical role in investigating, analyzing, and reporting on AI-related developments. They conduct in-depth research, interviews with experts, and on-the-ground reporting to uncover stories and shed light on the ethical, social, and economic implications of AI.  

Opinion writers and editorial boards provide platforms for informed commentary, analysis, and debate on AI-related issues, offering diverse perspectives and insights into the ethical and societal dimensions of AI. Through opinion pieces, editorials, and op-eds, these influencers stimulate dialogue, challenge prevailing narratives, and advocate for policy changes or regulatory reforms to address emerging AI challenges and risks. 

*** 

The dynamic interplay between external influencers and business AI governance underscores the complex ethical landscape surrounding AI adoption. By understanding and engaging with these diverse stakeholders, companies can navigate the AI landscape with foresight and responsibility, ensuring that their AI systems prioritize ethical considerations and safeguard societal interests. 

We have heard from so many readers of this Business AI Ethics blog – thank you! If you share our passion for this work, we welcome your input and collaboration. Until next week. 

The Business AI Ethics research team is part of the Edmond & Lily Safra Center’s ongoing effort to promote the application of ethics in practice. Their research assists business leaders in examining the promise and challenges of AI technologies through an ethical lens. Views expressed in these posts are those of the author(s) and do not imply endorsement by ELSCE.