Post #3 – The AI Ethics Landscape: Government, Academia, and Business Approaches

In our previous blog posts, we explored the urgency of developing an applied (practical) AI ethics framework for business. We are motivated by our mission to design, develop, and deploy such a framework for the business use of AI. In designing it as a more pragmatic response to the unique ethical challenges AI presents, we first need to examine current practices and identify what works and, importantly, what is missing.

The ethical use of AI in the business community is a multifaceted challenge that requires input from many stakeholders, including academia, governments, NGOs and civil society, international development organizations, and businesses. Each of these groups has attempted to respond to the primary ethical concerns raised by AI systems. The main responses include:

1. Academic Research  

Research institutions and think tanks are actively studying the ethical implications of AI and providing policy recommendations. They can serve as an independent and unbiased check on private-sector activity and supply the theoretical underpinnings needed to understand the ethical implications of AI. Yet given the rapid pace of AI development, academic research struggles to keep up with new ethical challenges. Further, given the vast commercial incentives driving AI, academics are unlikely ever to be in a position to materially shape business AI development on their own. A sizable gap therefore remains between academic research and its practical application and influence in industry settings, limiting the scale and scope of its impact. To be clear, academic research can be an important enabler, but on its own it has proved insufficient to move the business needle.

2. External Regulations from Governing Bodies 

Many countries are developing national strategies and guidelines and enacting laws to regulate the use of AI. In the US, the proposed Algorithmic Accountability Act addresses bias in automated decision systems; in the European Union (EU), the enacted General Data Protection Regulation (GDPR) protects the privacy and security of personal data. The UK, China, Japan, Singapore, Australia, and India have each released AI ethics guidelines or frameworks, and some have implemented more concrete measures in specific areas such as privacy and data protection. An increasing number of countries have developed, or are developing, guidelines, strategies, and regulations for AI. However, legislative processes are slow, non-legislative instruments such as guidelines, standards, and executive actions have limited force, and technical and functional advancement in the AI space is comparatively fast. Government action alone is therefore likely to struggle to regulate AI effectively.

3. Current Responses from Corporations

Faced with mounting pressure from ethical AI challenges and their attendant risks, businesses must take the initiative to scrutinize and regulate their own practices. Most companies have defined and publicized ethical principles and guidelines, sometimes released in the form of mission statements, and many now expressly adopt responsible/ethical AI principles. However, as we argued in a previous blog post, merely adopting ethical principles isn't enough; businesses must go beyond principles and move to action if they hope to produce material, practical outcomes. Some companies have indeed gone further by establishing ethics boards staffed with internal and/or external experts with specialized skills and diverse backgrounds, including technologists, legal and compliance experts, anthropologists, ethicists, and business leaders. Beyond the ethics board, some companies are also raising ethical awareness among their employees by providing ethics training across the enterprise. Lastly, some companies have formed alliances to establish cooperative industry initiatives that promote ethical AI development. One example is the Partnership on AI, initiated by big tech companies (e.g., Apple, Amazon, Alphabet, Meta, IBM, and Microsoft) and now materially composed of leading NGOs.

While both AI developers and the businesses that deploy AI increasingly recognize their obligation to comply with external regulations and their responsibility to build trust with consumers and the broader public, it is hard to imagine much of the current effort extending beyond a shared understanding of AI safety and risks to a concrete impact on the ethical use of AI in business. We believe the business community needs an applied, practical AI ethics framework, with associated guidance, to supplement and complement existing approaches and provide the conduit from conception to action.

*** 

Our Business AI Ethics research initiative at ELSCE recognizes the vital importance of addressing AI ethics within the business context. As AI continues to evolve and shape the business landscape, our mission is to equip business leaders with practical, innovative decision-making frameworks and guidance that empower them to navigate the ethical complexities of AI. Our goal is clear – to ensure that responsible AI practices become an integral part of business operations. 

We have heard from so many readers of this Business AI Ethics blog – thank you! If you share our passion for this work, we welcome your input and collaboration. Until next week. 

The Business AI Ethics research team is part of the Edmond & Lily Safra Center’s ongoing effort to promote the application of ethics in practice. Their research assists business leaders in examining the promise and challenges of AI technologies through an ethical lens. Views expressed in these posts are those of the author(s) and do not imply endorsement by ELSCE.