AI Governance for Boards: Insights From the World Economic Forum

Article Originally Written for Directors&Boards 

Agility and a Multistakeholder Approach Are Essential for Success.

Artificial intelligence (AI) is advancing faster than our ability to govern it. Because AI presents unique risks, “normal” governance no longer applies, and boards will need to adopt a different governance framework for AI. Findings from the work of the Global AI Council at the World Economic Forum (WEF) can assist board members in developing their own approach to the governance of AI.

Government agencies and academics have invested considerable time in various governance frameworks for AI, but few boards have developed new frameworks to address the unique risks associated with the technology. The accelerated implementation of AI has outpaced traditional business rules and regulations. Boards need ways to ensure that their companies are overseeing their AI-based systems so that those systems help their constituents without stifling innovation.

Board directors can use the WEF’s work to educate themselves on critical issues to consider when developing an AI governance framework. To create a governance framework, the WEF used an agile, multistakeholder approach, which businesses would do well to emulate. 

Being agile and accommodating multiple stakeholders may appear to be mutually exclusive. “Agile” suggests creating a framework as quickly as possible. “Multistakeholder” connotes a fully comprehensive, and therefore much slower, way to do the same.

An agile approach can keep pace with exponential technological changes. Such an approach includes rapid experimentation and decision making to establish new ways of governing. Boards should not only move quickly, but also be prepared to change the very framework they’ve created as they learn.

A multistakeholder approach to AI is purposely comprehensive. It includes input from customers, users, technology developers, AI solution owners, vendors, employees, communities and government entities worldwide.

Developing a framework quickly and comprehensively can be done by leveraging the work of others. The WEF’s work is a great place to start. Rather than look at the entire body of work completed by the WEF, let’s examine the WEF’s work on the healthcare sector as an example. Specifically, let’s focus on one application: chatbots in healthcare.

AI Chatbots in Healthcare

All companies want to balance the competitive advantages of AI against the negative consequences of betraying people’s trust in its application. This balance is especially essential in healthcare.

To quote Shobana Kamineni, executive vice-chairperson of Apollo Hospitals, “Healthcare chatbot systems can improve and augment accessibility (reaching to the last mile), enhance effective interactions, deliver care faster and with higher accuracy. However, it has to be safe, maintain users’ privacy and integrity, and be delivered fairly and inclusively.”

The WEF working group included input from technology developers, healthcare providers, the medical community, academia, and government regulators. The working group created a governance framework that includes 10 principles and 75 recommended actions. Highlights include a focus on the following issues:

What Standards Do You Have to Measure AI Fairness, Transparency, and Bias? 

Bias can be a serious issue in AI-based systems. Patients can be treated differently and unfairly because of their race or socioeconomic status. Transparency can show why certain decisions were made for particular patients, helping to govern potential bias. 

What Is Our Definition of Responsible AI? 

Each company needs to determine its own definition of responsible AI. This will vary not only by industry, but also by application, by the type of data being used, by the constituents impacted and by the regulations in each geography.

How Do We Ensure the Company’s Ethics Are Adequately Implemented in Our AI Systems? 

Ethics can touch on the source of data, the type of training, the developers who are coding decision making into the system, and the third-party vendors that may be part of that solution. How do those ethical decisions make their way into the AI-based system?

How Are We Increasing Executive Awareness, Understanding, and Prioritization?

Because of the speed of innovation in AI, it’s challenging for board members and executives to keep up with the most critical risks and issues. Both the board and senior leaders should be educated and aligned on the most important issues.

Are We Prepared to Address the Ethical Issues?

AI creates unique competitive advantages, but it also presents new challenges for boards. Boards must be aware of global regulations affecting issues like facial recognition and surveillance. It is also essential for boards to be prepared for the impact disinformation and deepfakes can have on the company. 

What Is AI’s Impact on Jobs? 

Is there potential for a wider inequality gap? In most cases, it is tasks, not jobs, that AI will eliminate. Most workers will find this a welcome change. However, some jobs will be at risk, such as those of truck drivers, food service workers, receptionists, and customer service agents.

What Is AI’s Impact on Climate Change? 

AI can be used to reduce carbon impact, but it may also have a negative impact because of the massive computing requirements of many of today’s AI systems. 

What Is AI’s Impact on Consumer Protection? 

Given that AI requires massive amounts of data, some components of consumer protection are already covered by data-protection regulations. As AI’s impact grows, more regulations are being developed. It is the board’s responsibility to monitor these regulations and stay ahead of them.

The first step in developing a governance framework is gaining a full understanding of the company’s work in AI, usually through a request to the CIO. Many boards will not be aware of the breadth of AI work being done within their company. It will take some time for the CIO to discover the work being done, explore planned systems and help assess risk factors.

Even before the board has a full picture of the AI work being performed within the company, it can begin building out the governance framework using a multistakeholder effort and an agile governance methodology.

Contact Glenn Gow “The AI Guy” For Boards Today!

Glenn Gow is “The AI Guy.” He is a former CEO and has been a board member of four companies. He is currently a board member and CEO coach. Follow him on LinkedIn or email him at glenn@glenngow.com.