EU's Response to ChatGPT Indicates Governance Driven by Public Controversy

The EU is considering categorising AI tools like ChatGPT as "high risk" in an upcoming bill, imposing strict compliance requirements. Some argue this hasty move unduly restricts creativity and suggests the EU may be overreacting.

EU Regulation Through Public Outcry Spotted in ChatGPT Amendment

In the digital landscape, generative AI tools like ChatGPT, GitHub Copilot, Speechmate, Prose Media, and Bloomberg's Brief Analyzer have become indispensable, powering a wide variety of applications. However, the European Union's upcoming AI bill, the AI Act, proposes to categorise these tools as 'high risk,' sparking a debate among stakeholders.

The AI Act adopts a risk-based regulatory framework, dividing AI systems into four categories: unacceptable, high, limited, and minimal/no risk. While there is consensus that certain AI applications—such as those used in critical infrastructure, employment, healthcare, or law enforcement—should be classified as high risk, there is debate about whether large-scale generative AI tools merit this same categorisation.

Critics argue that applying the 'high-risk' label to general-purpose generative AI could place an excessive compliance burden on developers and slow innovation. High-risk systems are subject to strict requirements, including risk management, data quality, documentation, transparency, human oversight, accuracy, and cybersecurity. For multi-use platforms, these obligations could be disproportionate to the actual risk posed by any specific application.

Moreover, generative AI tools are not inherently designed for high-risk use cases. Classifying the tools themselves as high risk would create a blanket regulatory approach that may not reflect real-world deployment scenarios. Most generative AI applications, such as customer support chatbots, content creation tools, or educational aids, do not pose the kind of significant, systemic risks to health, safety, or rights that the Act associates with high-risk systems.

Industry stakeholders note that, as currently defined, generative AI does not fall under the high-risk category unless deployed in a high-risk context. The Act itself appears to recognise this distinction, as noted in public guidance and preliminary compliance tools.

Critics also advocate for a more nuanced, application-based approach that reserves the high-risk label for AI systems whose purpose and context clearly justify it, rather than the underlying technology itself. This would ensure regulatory flexibility, making it easier to adapt to new developments in AI.
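To make the application-based approach concrete, here is a minimal sketch in Python. It is purely illustrative: the RiskTier enum, the classify_deployment function, and the example contexts are hypothetical simplifications, not the Act's actual legal tests.

```python
# Hypothetical sketch of an application-based classification:
# the risk tier depends on the deployment context, not on the
# underlying model. Not a legal or compliance tool.
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal/no risk"


# Simplified stand-ins for the high-risk contexts the Act names,
# such as critical infrastructure, employment, and law enforcement.
HIGH_RISK_CONTEXTS = {
    "critical infrastructure",
    "employment",
    "healthcare",
    "law enforcement",
}


def classify_deployment(context: str) -> RiskTier:
    """Assign a risk tier from the deployment context alone.

    The same general-purpose model lands in different tiers
    depending on where it is used, which is the distinction
    critics want the Act to preserve.
    """
    if context in HIGH_RISK_CONTEXTS:
        return RiskTier.HIGH
    # Chatbots, content creation tools, and similar applications
    # would mainly carry transparency obligations.
    return RiskTier.LIMITED


# One underlying model, two very different regulatory outcomes:
print(classify_deployment("customer support chatbot"))  # RiskTier.LIMITED
print(classify_deployment("employment"))                # RiskTier.HIGH
```

The point of the sketch is that the regulatory trigger is the deployment context, not the technology itself, which is what an application-based approach amounts to in practice.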

The proposed risk categories under the EU AI Act, and the obligations attached to each, can be summarised as follows:

  - Unacceptable risk (e.g., social scoring systems): prohibited outright.
  - High risk (e.g., AI used in critical infrastructure, employment, healthcare, or law enforcement): strict requirements, including risk management, data quality, documentation, transparency, human oversight, accuracy, and cybersecurity.
  - Limited risk (most generative AI tools, such as chatbots): mainly transparency obligations.
  - Minimal/no risk (e.g., spam filters or AI in video games): no mandatory obligations.

The main arguments against classifying generative AI tools as 'high risk' under the EU AI Act centre on concerns about regulatory overreach, the stifling of innovation, and the lack of clear evidence that such tools inherently pose significant risks comparable to systems used in critical infrastructure or law enforcement.

The debate surrounding the EU AI Act's proposed classification of generative AI tools is far from over. If implemented, the new regulations could have significant implications for the development and use of AI-powered chatbots and other generative AI tools, potentially curbing productivity and creativity while addressing concerns about misinformation and harmful content.

  1. The EU AI Act proposes to categorise large-scale generative AI tools like ChatGPT as 'high risk,' a decision that has sparked a debate about the possible excessive compliance burden and potential slowing of AI innovation.
  2. Critics claim that classifying general-purpose generative AI as high risk could result in disproportionate obligations, creating a blanket regulatory approach that may not accurately reflect real-world use cases.
  3. According to industry stakeholders, the AI Act's classification of generative AI tools as high risk may not be well-justified, as these tools are not inherently designed for high-risk use cases and are mainly used in applications posing minimal risks to health, safety, or rights.
  4. Opponents of classifying generative AI tools as 'high risk' under the EU AI Act argue for a more nuanced approach, reserving the high-risk label for AI systems whose purpose and context clearly warrant it, rather than the underlying technology itself.
