
How should boards lead the AI superpowers?


Until recently, many organisations have treated technology as the primary responsibility of the IT department, with in-house or external IT services providing nominal support. The advent of AI has changed this traditional stance forever. Now it is the board that must decide how these newfound AI superpowers are best managed and implemented.

AI is already becoming embedded within some boardrooms. Directors are recognising the benefits of its leverage to track the capital allocation patterns of competitors, identify opportunities for increased R&D, uncover alternative business strategies, and enter new markets with key products. 

All of this helps keep the company several steps ahead of its rivals, offering market share growth at a time when society faces multiple challenges that create uncertainty across domestic, global and geopolitical spheres.

These threats range from climate change and growing cybersecurity risks, through to increasing demands for social, economic and political justice. Boards have a chance to put AI to effective use in developing new strategies that anticipate and overcome many of these elements.

However, to realise such advantage, directors have to take responsibility for addressing how AI is used. The core role of directors is to make decisions but, as decision-making is inevitably a collective exercise, this process can become overly complex. So where to begin?

Liability and corporate responsibility

Boards should be aware of one of the most critical areas impacting their future – the shift of liability through AI adoption.

The central question is: ‘can a company unwittingly assume greater liability by using AI to enhance the usefulness of a product or service?’

The extent and frequency of this change in financial services remains uncertain. However, in the case of motor insurance, it is increasingly agreed that the liability for autonomous vehicles will rest with the manufacturer, rather than the driver. 

This is a crucial point of difference for boards to consider while navigating the evolving nature of AI. While ethical decision-making has always been a part of business, AI introduces a new layer of complexity. The fact a machine is capable of performing a task doesn’t necessarily mean it should.

To complicate matters further, AI’s advent adds to an already overcrowded boardroom agenda. Leaders will now have to confront ethical, accountability, transparency and liability issues, all brought to the surface by a new and often poorly understood technology.

These challenges are forcing organisations to undergo significant changes. 

Additionally, there is a concern that machines may learn inappropriate behaviour from past human decisions. It can be challenging to determine what is right, wrong, or just plain creepy in the era of models and algorithms. 

Overseeing the ethics of AI

Overseeing AI in action creates new responsibilities and roles, making accountability anything but straightforward. The difference between right and wrong is becoming more nuanced, particularly because there is no societal agreement on what constitutes ethical AI usage.

Companies naturally strive to remain secretive and maintain a competitive edge. Nonetheless, to be at the forefront of the market, organisations must be transparent when using AI. Customers need and want to know when and how machines are involved in decisions that affect them, or are being made on their behalf.

It is crucial to communicate clearly and explicitly which aspects of customers’ personal data are being used in AI systems, and consent is a non-negotiable requirement.

Boards may need to be more deeply involved in determining the approach and level of detail required for transparency, which in turn will reflect the values of their organisations.

The responsibility for determining what constitutes a ‘sufficient’ explanation ultimately lies with the board, who must take a firm stance on what this means for themselves and other stakeholders.

Although many directors would prefer to avoid taking the risk of disagreeing with ultra-intelligent AI machines, it is crucial for board members to question the validity of black box arguments and have the confidence to demand an explanation of how specific AI algorithms work.


Andrew Kakabadse
Professor of Governance and Leadership at Henley Business School.

Andrew has undertaken global studies spanning over 20,000 organisations (in the private, public and third sector) and 41 countries. His research focuses on the areas of board performance, governance, leadership and policy. He has published 45 books and over 250 scholarly articles, including the bestselling books The Politics of Management, Working in Organisations, The Success Formula and Leadership Intelligence: The 5Qs. Andrew has consulted for, among others, the British, Irish, Australian and Saudi Arabian governments, as well as Bank of America, BMW, Lufthansa, Swedish Post and numerous other organisations. He has acted as an advisor to several UN agencies, the World Bank, charities, and health and police organisations.


Jennika Rantanen
