
AI on the board of directors: growing opportunities, but also new forms of director responsibility

AI is increasingly permeating the board of directors, from providing administrative support to making autonomous decisions. While the benefits are numerous, the use of AI also raises new questions regarding directors’ liability, as they remain responsible for errors, even when these stem from over-reliance on AI systems.

 

Digital decision-making has been a reality on the board of directors for several decades, but the recent breakthrough in artificial intelligence (AI) is taking this evolution to the next level. Today, the technology is being used as a strategic advisor for investment decisions (such as merger and acquisition analyses), as a system for auditing annual accounts, as a tool for monitoring corporate performance, and even in the form of “robo-directors,” as exemplified by VITAL, the well-known algorithm appointed to the board of the venture fund Deep Knowledge Ventures. The question is no longer whether AI will find its place in corporate governance, but how far its integration will go.

Three levels of AI usage

AI can play several roles within a board of directors:

Intelligent assistance: the systems primarily support practical or administrative tasks. This includes intelligent document management, accounting and reporting programs, as well as speech recognition and text prediction.

Augmented intelligence: AI enhances directors’ assessments through its analytical capabilities. This covers tasks such as classification, clustering, simulation, chatbots, robo-advisors, and the like.

Autonomous intelligence: the most advanced form, in which the AI system receives genuine decision-making power and operates largely autonomously. While still exceptional today, the technology is approaching an operational level where human intervention becomes limited.

The advantages: speed, efficiency and independence

The use of AI can significantly streamline decision-making. It enables faster information processing, reduces the risk of groupthink, and limits the influence of internal relationships or informal pressures. The independence of the governing body can also be strengthened: an AI system has no conflict of interest. It is therefore logical that a director would be inclined to use it.

Directors’ responsibility in the age of AI

However, a director must consider the implications of using AI on their duties and liability. In accordance with Articles 2:56 and 2:51 of the Belgian Companies and Associations Code (CSA), directors – whether members of the board of directors, managing directors, or any person with actual, even de facto, management power – are liable for any misconduct committed in the performance of their duties.

The liability provided for in these provisions covers several types of misconduct. A director can be held liable for any breach of a provision of the CSA. They can also be held liable for conduct that is not consistent with that of a reasonably prudent and diligent director placed in the same circumstances.

Concrete examples illustrate where things can go wrong. A director who relies solely on an AI tool to verify that a subcontractor holds the required authorizations, and then enters into a contract without further human verification, can be held liable if the information turns out to be incorrect. The same applies if sensitive company data is fed unprotected into a generic AI tool. In both cases, a prudent director would have taken precautionary measures, and this remains the standard.

Each director must therefore be careful not to place blind trust in AI, at the risk of being held liable. Current case law is strict regarding directors who delegate their tasks to third parties; the logic will be the same with AI. We can expect numerous court decisions to address this issue.

A new responsibility: training and awareness

The rise of AI creates not only technological obligations, but also organizational ones. A central question arises: what training is needed for employees to use AI tools?

With the entry into force of the European AI Regulation, organizations are obligated to ensure a sufficient level of “AI literacy” among their staff. In practical terms, directors must ensure that their employees receive adequate training.

This also falls under the corporate duty of care: when AI systems are introduced, staff must be adequately trained to use them safely, securely, and in compliance with the law. This includes training in data security, proper encryption methods, the risks of hallucinations, and the need for human oversight in critical decisions. A company that fails to provide such training exposes itself to a significant risk of non-compliance, with potential liability for both the organization and its directors.

Conclusion

AI can enrich and strengthen the board of directors, but it doesn’t change the fact that ultimate responsibility remains with human directors. Technology can assist, but will never replace the diligence and prudence of good governance.

Denis Philippe

January 22, 2026