
Understanding AI's Implications for Directors Before It's Irreversible

AI's emergence demands a novel type of board management, characterized by a combination of strategic acumen and a hands-on grasp of technology.


In the rapidly evolving world of artificial intelligence (AI), companies are grappling with a complex and ever-changing regulatory landscape. This article explores key international approaches to AI regulation, accountability for companies, and the implications for businesses.

Recent developments include a federal judge granting preliminary class-action status to a lawsuit against Workday, which alleges that the company's AI-based applicant recommendation system discriminated against workers age 40 and older, potentially violating federal age discrimination law. Meanwhile, OpenAI, the maker of ChatGPT, faces legal challenges on multiple fronts. In December 2024, Italy's data protection authority fined OpenAI 15 million euros ($15.58 million) over the use of personal data by its generative AI application. In November 2024, OpenAI was also sued by GEMA, the German collection society and licensing body, over the alleged unlicensed reproduction of song lyrics.

The United States government released America's AI Action Plan in July 2025, emphasizing deregulation to accelerate AI innovation, investment in infrastructure and workforce development, and tighter export controls alongside AI exports to allied countries. Despite the emphasis on deregulation, companies must watch evolving policies closely and adapt, as future federal AI regulations and oversight authorities are anticipated.

In contrast, China announced a 13-point Action Plan for Global AI Governance in July 2025, aiming to enhance infrastructure, data security, open ecosystems, and international cooperation to bridge global digital divides. China proposed a global AI cooperation organization to coordinate AI governance internationally, reflecting a governance approach focused on practical risk management and cross-border collaboration.

Other regions, such as Turkey and the UAE, have published sector-specific AI regulatory guidelines and are advancing legislative frameworks. The UK favors flexible, sector-specific AI regulation rather than comprehensive legislation. The UN encourages national regulatory frameworks to support trustworthy AI systems, promoting a global consensus on safe, secure AI.

Businesses are expected to implement explainable AI systems, allowing users, auditors, and regulators to trace AI decision-making processes. This facilitates transparency and helps meet regulatory requirements for responsible AI deployment. With the U.S. focus on deregulation but increased scrutiny on export controls and AI safety, companies face a dynamic compliance environment requiring active monitoring of multi-jurisdictional laws and adapting compliance approaches accordingly.
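What "traceable AI decision-making" can look like in practice is a system that records, for every automated decision, which inputs were used and how much each contributed to the outcome. The sketch below is a minimal, illustrative Python example of such a decision trace for a transparent linear scoring model; the factor names and weights are hypothetical, not drawn from any real system mentioned in this article.

```python
from dataclasses import dataclass, field

@dataclass
class DecisionTrace:
    """Audit record: the raw inputs, each factor's contribution, and the final score."""
    inputs: dict
    contributions: dict = field(default_factory=dict)
    score: float = 0.0

# Hypothetical, fully transparent weights for an applicant-screening score.
WEIGHTS = {"income": 0.5, "credit_history_years": 0.3, "debt_ratio": -0.4}

def score_applicant(inputs: dict) -> DecisionTrace:
    """Score an applicant while logging every factor's contribution for auditors."""
    trace = DecisionTrace(inputs=inputs)
    for factor, weight in WEIGHTS.items():
        contribution = weight * inputs[factor]
        trace.contributions[factor] = contribution  # retained for audit and appeal
        trace.score += contribution
    return trace

# Example run with normalized (0-1) inputs; the trace, not just the score,
# is what a regulator or auditor would review.
trace = score_applicant({"income": 1.0, "credit_history_years": 0.5, "debt_ratio": 0.2})
```

The design point is that the per-factor contributions are persisted with the decision, so an adversely affected applicant or an auditor can reconstruct exactly why the score came out as it did, rather than being shown only a final number.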

International cooperation efforts, such as those proposed by China and supported by the UN, signal growing global expectations for companies to engage in responsible AI development and data security best practices.

The regulatory landscape is active and rapidly developing, requiring businesses to stay vigilant and adaptable. Analysis by McKinsey published in 2024 found that companies with leading digital and AI capabilities outperform laggards by two to six times on total shareholder returns (TSR) across every sector analyzed. As of 2024, 78% of organizations reported using AI, up from 55% the year before, according to the Stanford Institute for Human-Centered AI's 2025 "Artificial Intelligence Index Report."

Directors should ask questions such as: Are we generating measurable value from AI? What is our strategy for scaling AI across the enterprise? Do we have the right data, systems, and talent to support AI at scale?

In March 2025, a U.S. federal judge denied in part Cigna's bid to dismiss claims by six named plaintiffs who seek to represent other beneficiaries of Cigna-administered health plans who were denied coverage based on the insurer's use of its PxDx algorithm. This case underscores the need for companies to ensure their AI systems are fair, transparent, and compliant with relevant regulations.

In conclusion, navigating the global AI regulatory landscape requires a proactive, adaptable approach. Companies must stay informed, implement explainable AI, and comply with emerging laws while engaging in responsible AI development and data security best practices.

  1. The judge's March 2025 decision in the Cigna litigation highlights the importance of ensuring that AI systems are fair, transparent, and compliant with relevant regulations, especially for companies that use AI in health plan administration.
  2. The fine imposed by Italy's data protection authority on OpenAI, the maker of ChatGPT, over its use of personal data serves as a reminder for companies to prioritize data security and best practices in AI development to avoid legal challenges and fines across international jurisdictions.
