The Artificial Intelligence Transparency Requirement: A Closer Look at AI Watermarking in the EU
Mandatory AI Watermarking Provision in AI Act Undermines Transparency Aim
Adopted in 2024, the EU's AI Act requires providers of AI systems to clearly identify their outputs as AI-generated. This labelling mandate aims to help users distinguish AI-generated content from human-created works, addressing concerns about deepfakes and misinformation. However, a practical approach to achieving this - watermarking - may present difficulties for certain media types. With the EU's AI Office enforcing the AI Act, it is essential to assess the practicalities of AI watermarking to avoid placing undue burdens on AI providers.
AI watermarking is the process of embedding a distinctive signature, or watermark, into AI-generated content such as text, audio, or images. This label helps users pick out AI-generated content in the sea of digital data. Some watermarks are invisible to human observers, such as imperceptible changes to an image's pixels, yet still detectable by software. Others are overt, like visual symbols superimposed on an image. Ideally, watermarks should resist tampering, remaining intact even if the content is modified.
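To make the idea concrete, below is a minimal sketch of one classic invisible-watermarking technique: hiding a payload in the least significant bits (LSBs) of an image's pixels. This is a simplified illustration, not the scheme any particular provider or standard uses, and all function names are invented for the example.

```python
# Minimal sketch of invisible watermarking via least-significant-bit (LSB)
# embedding. Purely illustrative; real deployed schemes are more complex.
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Overwrite the LSB of the first len(bits) pixels with the payload."""
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits  # clear LSB, write bit
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the payload back out of the least significant bits."""
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in for an AI image
payload = rng.integers(0, 2, size=128, dtype=np.uint8)       # e.g. an "AI-generated" flag

marked = embed_lsb(image, payload)
assert np.array_equal(extract_lsb(marked, payload.size), payload)
# Each pixel changes by at most 1 out of 255: invisible to the naked eye.
print(int(np.abs(marked.astype(int) - image.astype(int)).max()))  # -> 1
```

The payload survives perfectly here because nothing has touched the image; as discussed below, that guarantee evaporates under even mild editing.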
The AI Act stipulates that providers of general-purpose AI systems must label their output in a machine-readable format and make it detectable as AI-generated or manipulated. Moreover, they must ensure their technical solutions are effective, interoperable, robust, and reliable, insofar as technically feasible. Harmonizing these objectives with AI watermarking is challenging, however, because the properties trade off against one another. For instance, making a watermark more robust usually requires more prominent changes to the content, which may compromise content quality. Interoperability and reliability can conflict as well: a scheme simple and public enough for any detector to read is typically also easier for bad actors to forge or strip.
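The robustness-versus-quality tension shows up even in the toy LSB scheme sketched above: writing the payload into a higher bit plane lets it survive more noise, but distorts the image more. The following sketch, again purely illustrative and not any mandated technique, makes that trade-off measurable.

```python
# Toy illustration of the robustness/quality trade-off: embedding at a higher
# bit plane (k) survives more noise but changes pixels more. Numbers are
# illustrative only, not benchmarks of any real watermarking product.
import numpy as np

def embed_plane(image: np.ndarray, bits: np.ndarray, k: int) -> np.ndarray:
    """Write the payload into bit plane k (k=0 is the least significant bit)."""
    flat = image.astype(np.int32).flatten()
    flat[: bits.size] = (flat[: bits.size] & ~(1 << k)) | (bits.astype(np.int32) << k)
    return flat.astype(np.uint8).reshape(image.shape)

def extract_plane(image: np.ndarray, n: int, k: int) -> np.ndarray:
    """Read the payload back out of bit plane k."""
    return (image.flatten()[:n].astype(np.int32) >> k) & 1

rng = np.random.default_rng(1)
image = rng.integers(0, 256, (64, 64), dtype=np.uint8)
payload = rng.integers(0, 2, 256, dtype=np.uint8)

for k in (0, 4):  # LSB vs. a mid-level bit plane
    marked = embed_plane(image, payload, k)
    distortion = int(np.abs(marked.astype(int) - image.astype(int)).max())
    # Simulate mild tampering: small random noise, as re-encoding might add.
    noisy = np.clip(marked.astype(int) + rng.integers(-4, 5, marked.shape),
                    0, 255).astype(np.uint8)
    survived = np.mean(extract_plane(noisy, payload.size, k) == payload)
    print(f"bit plane {k}: max pixel change {distortion}, "
          f"payload surviving noise {survived:.0%}")
```

At the LSB the image is visually untouched but the payload degrades to roughly chance level under noise; at a higher plane more of the payload survives, at the cost of visible pixel changes, which is exactly the tension the Act's "robust yet reliable" wording papers over.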
The lack of standardization in AI watermarking technologies further complicates the situation. A watermark created by one system may be unreadable by another's detector, as developers continue to experiment with competing watermarking techniques in search of a reliable solution.
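A toy demonstration of this interoperability gap: imagine two hypothetical providers marking text with different techniques, one with invisible zero-width characters, the other with Unicode homoglyph substitution. Each provider's detector is blind to the other's watermark. Both techniques exist in the wild, but the provider names, functions, and scheme pairing here are invented for illustration.

```python
# Two hypothetical, mutually incompatible text-watermarking schemes.
# Each detector only knows its own scheme, so cross-detection fails.
ZWNJ = "\u200c"  # zero-width non-joiner: invisible in most renderers

def mark_scheme_a(text: str) -> str:
    """Provider A: insert an invisible zero-width character after each space."""
    return text.replace(" ", " " + ZWNJ)

def detect_scheme_a(text: str) -> bool:
    return ZWNJ in text

def mark_scheme_b(text: str) -> str:
    """Provider B: swap a Latin letter for a visually identical homoglyph."""
    return text.replace("a", "\u0430")  # Cyrillic 'а' looks like Latin 'a'

def detect_scheme_b(text: str) -> bool:
    return "\u0430" in text

sample = "generated by a large language model"
print(detect_scheme_a(mark_scheme_b(sample)))  # False: A's detector misses B's mark
print(detect_scheme_b(mark_scheme_a(sample)))  # False: B's detector misses A's mark
```

Without a shared standard, every detector in this position reports "no watermark" for content that is, in fact, marked, just by somebody else's scheme.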
Some policymakers hail AI watermarking as a universal solution for labelling content across various media types. For example, EU Commissioner for Internal Market Thierry Breton declared, "[The European Parliament, the Council, and the Commission share] a common understanding on the need for transparency for generative artificial intelligence. To be clear, this involves identifying what is created by generative intelligence (images, videos, texts) by adding digital watermarking." However, these optimistic views may overlook the limitations and challenges of AI watermarking technology.
A study, for instance, suggests that tampering with or removing watermarks in images is relatively straightforward, while effectively watermarking text may not be achievable. Other analyses, such as those by the European Parliamentary Research Service, point out that state-of-the-art AI watermarking techniques exhibit significant technical limitations and drawbacks.
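The fragility these analyses describe is easy to reproduce with the toy LSB scheme from earlier (redefined here so the snippet stands alone): a light blur, the kind of edit image-sharing pipelines apply routinely, erases the payload while leaving the picture visually intact. Published watermarking schemes are considerably more sophisticated than this sketch, but the attack surface is similar in kind.

```python
# Sketch of a removal "attack" on the naive LSB watermark: a simple 2x2 box
# blur destroys the payload. Illustrative only; real schemes resist more,
# but the studies cited above report that removal remains feasible.
import numpy as np

def embed_lsb(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    flat = image.flatten()
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(image.shape)

def extract_lsb(image: np.ndarray, n: int) -> np.ndarray:
    return image.flatten()[:n] & 1

rng = np.random.default_rng(2)
image = rng.integers(0, 256, (64, 64), dtype=np.uint8)
payload = rng.integers(0, 2, 256, dtype=np.uint8)
marked = embed_lsb(image, payload)

# The "attack": average each 2x2 block, then upsample back to full size.
blurred = (marked[0::2, 0::2].astype(int) + marked[1::2, 0::2]
           + marked[0::2, 1::2] + marked[1::2, 1::2]) // 4
blurred = np.repeat(np.repeat(blurred, 2, axis=0), 2, axis=1).astype(np.uint8)

match = np.mean(extract_lsb(blurred, payload.size) == payload)
print(f"payload bits recovered after blur: {match:.0%}")  # ~50%, i.e. chance level
```

The watermark does not merely degrade; recovery falls to roughly coin-flip accuracy, so the "AI-generated" signal is gone even though the image looks the same.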
In their haste to pass the AI Act, EU policymakers may not have fully considered the technical complexities and limitations of AI watermarking. An unnamed European Commission official admitted to a reporter that the watermarking obligations were passed on the expectation that "over time, the technology will mature." The future of watermarking remains uncertain, however, and the EU's AI Office must now make informed decisions about implementing the law given the current state of the technology.
To avoid further stumbles, the AI Office should let technological progress guide policy rather than the other way around. Pressing ahead with ineffective watermarks risks confusing consumers and undermining other efforts to combat misinformation and trace the origins of content.
Key Insights
- The EU AI Act demands transparency for AI-generated content, including watermarking or otherwise marking AI outputs so that they can be distinguished from human-created works.
- Implementing AI watermarking effectively and reliably for various media types poses technical and legal challenges.
- Ensuring interoperability, robustness, and reliability in AI watermarking is challenging due to the trade-offs between these properties.
- There's significant legal uncertainty regarding the practical methods to fulfill the AI Act's watermarking obligations.
- Different types of AI-generated content may require distinct watermarking techniques, making uniformity and effectiveness across various formats and applications a significant limitation.
- There's a risk that sophisticated users could remove or alter watermarks, potentially undermining transparency and accountability in AI-generated content.
- Policymakers should exercise caution when implementing the AI Act's watermarking obligations, focusing on those types of media where the technology is demonstrably secure and robust.
- Moving forward with ineffective watermarks could confuse consumers and detract from efforts to combat misinformation and trace content's origins.