FPF and OneTrust Publish Collaborative Guide and Infographic on Conformity Assessments under the Proposed EU AI Act: A Detailed Walkthrough

The European Union (EU) is set to introduce stringent regulations for high-risk Artificial Intelligence (AI) systems with the proposed Artificial Intelligence Act (AIA). A new guide and infographic, developed by FPF and OneTrust, provide a comprehensive step-by-step explanation of the Conformity Assessment process required for these high-risk AI systems under the AIA.

The guide is intended for individuals and organisations operating in the EU, where the AIA will regulate AI systems that pose significant risks to safety, fundamental rights, and public order. The final text of the AIA is expected to be adopted by the end of 2023, with the regulation becoming applicable in late 2025.

The Conformity Assessment process is a crucial accountability tool within the proposed AIA. It is the process of verifying and demonstrating that a high-risk AI system complies with the requirements enumerated under Title III, Chapter 2 of the AIA.

Here's a breakdown of the step-by-step guide for Conformity Assessments under the EU AI Act:

1. **Risk Assessment and Classification** Identify whether the AI system qualifies as high-risk, notably because its intended use falls within an area listed in Annex III of the AI Act. High-risk AI systems are subject to mandatory conformity assessments, while lower-risk systems face only lighter transparency obligations. (A first-pass triage sketch follows this list.)

2. **Implement a Risk Management System** Develop and maintain a documented risk management process throughout the AI system's lifecycle. This includes continuous evaluation and mitigation of risks related to safety, fundamental rights, and potential discriminatory outcomes. (A toy risk-register sketch follows the list.)

3. **Data Governance and Quality** Use high-quality, representative, and carefully vetted datasets for training, testing, and validation to minimise biases and errors. Proper data governance is a critical compliance requirement. (A minimal dataset-audit sketch follows the list.)

4. **Technical Documentation and Transparency** Prepare comprehensive technical documentation and logs that allow the AI system's decisions to be traced. Instructions for use and disclosures about the AI's purpose and limitations are mandatory to ensure transparency. (A decision-logging sketch follows the list.)

5. **Human Oversight** Design the AI system to enable effective human oversight, which may include "human-in-the-loop" mechanisms or review procedures that prevent unchecked automated decisions. (A review-gate sketch follows the list.)

6. **Testing for Robustness, Accuracy, and Cybersecurity** Conduct rigorous testing to ensure the AI system is reliable, accurate, robust against manipulation, and secure against cyber threats. Ongoing monitoring and performance verification after deployment are also required. (A perturbation-test sketch follows the list.)

7. **Undergo the Applicable Conformity Assessment Procedure** Under the proposal, most high-risk AI systems listed in Annex III are assessed through internal control (self-assessment by the provider); for certain systems, such as remote biometric identification, a notified body designated by the notifying authority in the Member State evaluates the system. In either case, compliance with the AIA's requirements must be verified before the system is placed on the market or put into service.

8. **CE Marking and Registration** After successful conformity assessment, the system must be CE marked as compliant and registered in an EU database established for high-risk AI systems to facilitate market surveillance and transparency.

9. **Market Surveillance and Enforcement** National competent authorities, including market surveillance and notifying authorities, monitor compliance post-market and can take enforcement actions if non-compliance is identified.
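
To make step 1 more concrete, here is a minimal triage helper. The category names paraphrase the Annex III headings from the draft text and the function name is our own; an actual high-risk determination requires legal analysis, not a set lookup.

```python
# First-pass screen for step 1: does the intended use fall within an
# area listed in Annex III of the draft AIA? (Illustrative only.)
ANNEX_III_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "education_and_vocational_training",
    "employment_and_worker_management",
    "essential_private_and_public_services",
    "law_enforcement",
    "migration_asylum_border_control",
    "administration_of_justice",
}

def is_potentially_high_risk(intended_use_area: str) -> bool:
    """A 'True' here only means the system needs closer legal review."""
    return intended_use_area in ANNEX_III_AREAS

print(is_potentially_high_risk("employment_and_worker_management"))  # True
```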
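For step 2, a documented risk management process implies keeping structured, reviewable records. Below is a toy risk-register entry; the fields are assumptions of ours, since the draft AIA requires an iterative risk management system but does not mandate any particular data model.

```python
# A toy risk-register structure for step 2 (illustrative fields).
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Severity(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RiskEntry:
    risk_id: str
    description: str            # e.g. a known failure mode of the system
    affected_rights: list[str]  # fundamental rights potentially impacted
    severity: Severity
    mitigation: str             # planned or implemented control
    next_review: date           # lifecycle requirement: risks are re-evaluated

register = [
    RiskEntry(
        risk_id="R-001",
        description="Training data under-represents applicants over 60",
        affected_rights=["non-discrimination"],
        severity=Severity.HIGH,
        mitigation="Re-sample data; add age-disaggregated evaluation",
        next_review=date(2024, 1, 15),
    )
]
```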
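The dataset-audit sketch below illustrates step 3. The column names ("gender", "label") and the specific indicators are hypothetical; a real audit would follow the data governance criteria the draft sets for training, validation, and testing data.

```python
# A minimal dataset-audit sketch for step 3 (indicators are examples).
import pandas as pd

def audit_dataset(df: pd.DataFrame, protected_attr: str, label: str) -> dict:
    """Collect basic quality and representativeness indicators."""
    return {
        # Share of missing values per column, to flag incomplete records.
        "missing_ratio": df.isna().mean().to_dict(),
        # Label rate within each protected group, to surface obvious
        # sampling or outcome imbalances before training.
        "label_rate_by_group": df.groupby(protected_attr)[label].mean().to_dict(),
        # Group sizes, to check that each subgroup is represented at all.
        "group_counts": df[protected_attr].value_counts().to_dict(),
    }

df = pd.DataFrame({
    "gender": ["f", "m", "f", "m", "f"],
    "label": [1, 0, 1, 1, 0],
})
print(audit_dataset(df, protected_attr="gender", label="label"))
```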
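For step 4, one common way to make decisions traceable is to log every prediction automatically. The sketch below wraps a stand-in `predict()` function; the log schema and the version tag are our assumptions, as the AIA does not prescribe a specific format.

```python
# A decision-logging sketch for step 4: every call leaves an audit record.
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_system.audit")

def log_decision(predict_fn):
    """Wrap a prediction function so each call is recorded for later review."""
    def wrapper(features: dict):
        output = predict_fn(features)
        logger.info(json.dumps({
            "record_id": str(uuid.uuid4()),  # unique reference for reviewers
            "timestamp": time.time(),        # when the decision was made
            "inputs": features,              # what the system saw
            "output": output,                # what it decided
            "model_version": "v1.2.0",       # hypothetical version tag
        }))
        return output
    return wrapper

@log_decision
def predict(features: dict) -> float:
    # Stand-in for a real model; returns a toy score.
    return 0.5

predict({"age": 34, "income": 52000})
```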
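A simple way to operationalise the human oversight described in step 5 is a confidence-based review gate: confident predictions proceed automatically, uncertain ones are escalated to a person. The 0.85 threshold and the queue below are placeholders; appropriate oversight measures are system-specific.

```python
# A minimal human-in-the-loop gate for step 5 (threshold is illustrative).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ReviewQueue:
    pending: List[dict] = field(default_factory=list)

    def submit(self, case: dict) -> None:
        # In practice this would notify a trained human reviewer.
        self.pending.append(case)

def decide(score: float, case: dict, queue: ReviewQueue,
           threshold: float = 0.85) -> Tuple[str, bool]:
    """Auto-decide only when the model is confident; otherwise escalate."""
    if score >= threshold or score <= 1 - threshold:
        return ("approve" if score >= threshold else "reject", False)
    queue.submit({**case, "score": score})
    return ("pending_human_review", True)

queue = ReviewQueue()
print(decide(0.92, {"id": 1}, queue))  # confident -> automated decision
print(decide(0.60, {"id": 2}, queue))  # uncertain -> routed to a human
```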
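Finally, the robustness testing in step 6 can take many forms; one elementary check is whether small input perturbations flip the model's decision. The toy classifier and noise scale below are stand-ins for a real model and a real threat model.

```python
# A perturbation-test sketch for step 6: does the decision stay stable
# under small input noise? (Model and noise scale are illustrative.)
import numpy as np

def predict(x: np.ndarray) -> int:
    # Toy linear classifier standing in for the real model.
    return int(x.sum() > 0)

def perturbation_test(x: np.ndarray, n_trials: int = 100,
                      noise_scale: float = 0.01) -> float:
    """Return the fraction of noisy copies that keep the same label."""
    rng = np.random.default_rng(0)
    baseline = predict(x)
    stable = sum(
        predict(x + rng.normal(0.0, noise_scale, size=x.shape)) == baseline
        for _ in range(n_trials)
    )
    return stable / n_trials

x = np.array([0.4, -0.1, 0.3])
print(f"stability under noise: {perturbation_test(x):.0%}")
```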

The Conformity Assessment process covers key aspects such as risk management, data quality, transparency, human oversight, technical robustness, third-party assessment, CE marking and registration, and market surveillance.

This framework is designed to ensure that high-risk AI systems meet strict safety, fairness, and fundamental-rights standards before entering the EU market, with continuous oversight after deployment. It also encourages integrating compliance early in the AI development lifecycle to avoid costly retrofitting later.

The draft standardisation request issued by the European Commission in December 2022 may be amended once the AIA is finally adopted. For organisations preparing to comply with the AIA's final text, the guide and infographic should prove an essential resource.

  1. The Conformity Assessment process mandated by the European Union's Artificial Intelligence Act aims to ensure that high-risk AI systems comply with requirements concerning safety, fundamental rights, and public order.
  2. The guide emphasises the importance of Risk Assessment and Classification, since classification determines whether an AI system must undergo a mandatory conformity assessment or is subject only to lighter transparency obligations.
  3. Prioritising Data Governance and Quality is crucial in the Conformity Assessment process, as it minimises biases and errors and contributes to overall compliance.
  4. To demonstrate accountability and foster trust, providers of high-risk AI systems under the AIA must supply clear technical documentation and transparency measures, including instructions for use and disclosures about the system's purpose and limitations.
  5. Human oversight is essential in the Conformity Assessment process: "human-in-the-loop" mechanisms or review procedures prevent unchecked automated decisions.
