I promised you postcards from Berlin, and this one comes straight from the Monday keynote at PDF Days Europe. For those who haven’t been, PDF Days is the annual gathering of PDF standards makers, implementers, and users—a heady mix of developers, vendors, and the rest of us who come to see where document technology is headed. Three tracks run in parallel, from deep technical dives to use cases and policy. It’s very much a standards crowd, but it’s also where you can take the pulse of how PDFs and adjacent technologies are adapting to new challenges.
The keynote that opened the policy track came from Professor Dr. Philipp Hacker—a legal scholar with one foot in European policymaking and the other in the AI research community. Hacker has been deep in the trenches of the EU AI Act and its implications for those of us who work with digital evidence, documents, and long-term information stewardship.
I came in hoping to hear something beyond the hype, and that’s exactly what he offered.
From Cats and Dogs to Contracts and Compliance
Hacker opened with the basics—AI as systems that learn, not just rules we hard-code. He showed how generative models thrive on probability and randomness, which makes them powerful but unreliable. And then he pivoted to what we care about: document workflows.
Promise: AI can wrangle unstructured data, cluster contracts, spot compliance issues, detect fraud.
Reality: In law firms, associates are testing large language models, but once the hype fades they often end up redoing the work by hand because they can’t fully trust the output. In coding, one controlled study found that tasks actually took longer when AI was used, because the review overhead wiped out any efficiency gains.
The warning for records managers is clear: pilot carefully, and measure whether AI really saves time or just shifts effort into review.
The Human Factor
What stuck with me was Hacker’s caution that AI doesn’t just change workflows—it can also sap job satisfaction. If experts feel reduced to “checkers of AI output” instead of creators and analysts, organizations may see morale and quality drop even if headcounts stay the same. That’s a risk every archives program or information office should consider before throwing AI into the mix.
The EU AI Act Comes for Everyone
Hacker then put on his policy hat and walked us through the AI Act. The key message:
Four risk levels (prohibited, high-risk, limited, minimal) are now joined by a fifth category for general-purpose AI like ChatGPT.
Providers vs. Deployers: if you brand an AI tool under your name, you’re on the hook as a provider—with full legal exposure. If you just use it, you’re a deployer.
Fine-tuning rule: use less than one-third of the original model’s training compute and you stay a deployer. Go beyond that and you may become a provider without realizing it (see the sketch after this list).
For archivists, the punchline is that EU rules apply globally: if your systems or vendors touch the European market, compliance matters.
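To make that fine-tuning threshold concrete, here is a back-of-the-envelope sketch in Python. The function name and the FLOP figures are my own inventions for illustration, and nothing here is legal advice on how the Act’s tests actually apply:

```python
# Hypothetical helper: a back-of-the-envelope reading of the one-third
# compute rule as Hacker described it -- not legal advice.
def becomes_provider(original_train_flops: float, fine_tune_flops: float) -> bool:
    """True if fine-tuning used more than one-third of the compute
    spent training the original model."""
    return fine_tune_flops > original_train_flops / 3

# Illustrative numbers only: a base model trained with 1e25 FLOPs,
# fine-tuned with 5e24 FLOPs (half the original compute).
print(becomes_provider(1e25, 5e24))  # True -> provider obligations may apply
```

The point of the toy check is simply that the boundary is quantitative, not a matter of branding: an organization that fine-tunes aggressively can cross it without ever intending to ship a “new” model.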
Deepfakes and Provenance
Finally, he reminded us that AI-manipulated content now needs labeling. Watermarking is moving beyond big red stamps into cryptographic signals embedded in pixels. For those of us worried about authenticity, provenance, and evidentiary trust, this is where policy and archival principle align: every document may soon carry invisible signals of its origin and integrity.
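To give a feel for what “signals embedded in pixels” means, here is a deliberately naive Python sketch that hides a bit string in the least-significant bits of an image. Real provenance watermarks (the kind the regulation envisions) are cryptographic and robust to re-encoding; this toy version is neither, and every name in it is mine, not any standard’s:

```python
import numpy as np

def embed_bits(pixels: np.ndarray, bits: str) -> np.ndarray:
    """Hide a bit string in the least-significant bits of pixel values.
    Toy illustration: this mark vanishes the moment the image is
    re-compressed, unlike the robust schemes regulators have in mind."""
    flat = pixels.flatten()  # flatten() returns a copy
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)  # overwrite the lowest bit
    return flat.reshape(pixels.shape)

def extract_bits(pixels: np.ndarray, n: int) -> str:
    """Read back the first n hidden bits."""
    return "".join(str(p & 1) for p in pixels.flatten()[:n])

# A 4x4 grayscale "image" carrying an 8-bit provenance tag.
img = np.random.randint(0, 256, (4, 4), dtype=np.uint8)
tagged = embed_bits(img, "10110010")
assert extract_bits(tagged, 8) == "10110010"
```

Invisible to the eye, recoverable by software: that is the basic bargain, and the production-grade versions simply make the hidden signal tamper-evident and survivable.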
Why It Matters for Our Community
Authenticity: AI labeling is about to become a regulatory requirement, not just a preservation best practice.
Compliance: Records teams can’t leave “provider vs. deployer” decisions to IT or Legal—our work is on the front line of risk.
Job Design: As AI enters our workflows, preserving the quality of archival work may be as important as measuring efficiency.
As Hacker closed, he called the future “exciting, one way or another.” I left thinking that for archives and records, “one way or another” will depend on how well we balance the lure of efficiency with the discipline of provenance and compliance.