Stakeholders, Explainability, and the Urgency to Engage with AI
Why sitting out AI governance is no longer an option — a MetaArchivist perspective, with a preview of my RIMPA presentation in Melbourne
AI is not a spectator sport. Stakeholders across records management, archives, compliance, IT, legal, and the executive suite must actively shape how AI is selected, deployed, explained, and audited. ISO/IEC JTC 1/SC 42’s work — including ISO/IEC TS 6254:2025 on explainability and interpretability — offers a concrete backbone for this engagement. The choice is simple: design for accountability now or retrofit under regulatory pressure later.
The Stakes: Records, Risk, and Real People
AI systems are already mediating who gets benefits, what records are created or discarded, and how decisions are justified (or not). In records and information governance, we’ve lived through waves of “black box” automation before — from opaque retention rules to legacy imaging systems with mysterious metadata behaviors. AI raises the stakes: model behavior can drift, inputs may be skewed, and outputs can embed or amplify bias at scale. These risks translate into accountability failures, audit blow‑ups, and reputational harm — and, most importantly, real impacts on people.
From Passive Awareness to Active Stewardship
The operational shift we need is from awareness to capability and stewardship. If your role touches information — archives, records, privacy, security, compliance, legal — you are already a stakeholder in AI. That means:
Anticipating uses (and misuses) of AI in business processes and recordkeeping.
Embedding controls that create durable evidence: provenance, data lineage, model versioning, decision rationale, and preservation.
Insisting on explanations that are fit for purpose: developer‑facing to debug; user‑facing to calibrate trust; regulator‑facing to demonstrate compliance; and citizen‑facing to uphold legitimacy.
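The "durable evidence" controls above can be made concrete. Below is a minimal sketch, in Python with hypothetical field names (nothing here is mandated by any standard), of a decision record that captures provenance, model version, inputs, and rationale at the moment an AI-assisted decision is made, plus a fixity digest so a later audit can detect tampering.

```python
import hashlib
import json
from datetime import datetime, timezone

def make_decision_record(model_id: str, model_version: str,
                         input_data: dict, output: dict,
                         rationale: str, actor: str) -> dict:
    """Build an auditable record of one AI-assisted decision.

    Field names are illustrative, not drawn from any standard.
    """
    payload = {
        "model_id": model_id,
        "model_version": model_version,   # pin the exact model used
        "input_data": input_data,         # or a data-lineage pointer
        "output": output,
        "rationale": rationale,           # human- or system-supplied reason
        "actor": actor,                   # who (or what) made the call
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    # Fixity digest over a canonical serialization of the record.
    canonical = json.dumps(payload, sort_keys=True).encode("utf-8")
    payload["sha256"] = hashlib.sha256(canonical).hexdigest()
    return payload

record = make_decision_record(
    "benefits-triage", "2.3.1",
    {"applicant_id": "A-1001"}, {"decision": "refer_to_human"},
    "Confidence below threshold", "triage-service",
)
```

In practice such records would be written to managed recordkeeping storage; the point of the sketch is simply that provenance and rationale are captured at decision time, not reconstructed later.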
Why SC 42 Matters Right Now
ISO/IEC JTC 1/SC 42 coordinates the international standards portfolio for AI. For records and governance professionals, three anchors are especially relevant:
ISO/IEC TS 6254:2025 — Objectives and approaches for explainability and interpretability of ML and AI systems. A practical map of why explainability is needed, who needs what kind of explanation, and how to choose approaches across the system life cycle.
ISO/IEC 22989 — AI concepts and terminology. Common language reduces policy noise and speeds risk conversations.
ISO/IEC 42001 — AI management system requirements. A governance shell that forces organizations to operationalize policy into processes, roles, and continual improvement.

Together, these standards move us from high‑level "trustworthy AI" talk to repeatable practice.
Explainability Is Not One Thing (and That’s the Point)
Too many discussions pit “explainability” against “interpretability,” or conflate both with “transparency.” ISO/IEC TS 6254 cuts through the noise by making explainability contextual and objective‑driven:
Objective: What are we trying to achieve (debugging, assurance, user trust, compliance)?
Audience: Who needs to understand (developers, end users, auditors, policymakers, impacted individuals)?
Lifecycle stage: Where are we (design, training, validation, deployment, monitoring, retirement)?
Approach: What methods fit (intrinsic model constraints, post‑hoc explanations, counterfactuals, feature attributions, example‑based explanations, surrogate models, documentation, or process‑based evidence)?

This framing lets governance teams select the right explanation for the right purpose, instead of chasing a mythical "one perfect explanation."
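The objective/audience pairing above can be sketched as a simple decision aid. The mappings below are illustrative choices a governance team might adapt, not taken from ISO/IEC TS 6254 itself:

```python
# Hypothetical decision aid: map an explanation objective and audience
# to candidate approach families. Illustrative only.
EXPLANATION_APPROACHES = {
    ("debugging", "developer"): ["feature attribution", "example-based"],
    ("assurance", "auditor"): ["process-based evidence", "documentation"],
    ("user_trust", "end_user"): ["counterfactual", "plain-language summary"],
    ("compliance", "regulator"): ["documentation", "surrogate model"],
}

def suggest_approaches(objective: str, audience: str) -> list[str]:
    """Return candidate explanation approaches, or a safe default."""
    return EXPLANATION_APPROACHES.get((objective, audience), ["documentation"])

print(suggest_approaches("user_trust", "end_user"))
# → ['counterfactual', 'plain-language summary']
```

A real version would also key on lifecycle stage (design, validation, deployment, monitoring), but even this toy table makes the point: the "right" explanation is a lookup from purpose and audience, not a single universal artifact.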
What I’ll Cover at RIMPA Live 2025 — Melbourne, 29 October
Session: Developments in Information and Documentation Standards Around AI
Stage: The Palladium | Time: 12:45 – 1:10 pm AEDT
My RIMPA Live session explores how AI standards are converging across disciplines — from the governance frameworks of ISO/IEC JTC 1/SC 42, to the records management and authenticity requirements of ISO TC 46/SC 11, and the document integrity work in ISO TC 171/SC 2.
Key takeaways from the deck:
AI is reshaping information governance — automating processes, enhancing classification, and influencing how records are created, used, and preserved.
With opportunity comes risk — provenance, authenticity, and accountability can all be compromised without standards‑based oversight.
SC 42’s standards portfolio — including ISO/IEC 42001 (AI Management Systems), ISO/IEC 23894 (Risk Management), ISO/IEC 42105 (Oversight), and ISO/IEC TS 6254 (Explainability) — establishes the foundation for trustworthy AI.
Records managers play a pivotal role in integrating these controls: ensuring AI outputs, logs, and decision data are treated as records, maintained in standardized formats, and preserved for transparency and trust.
Emerging work in ISO/NP TS 25280, Records management and AI: Principles and considerations, extends these principles with real‑world use cases from China, the Netherlands, the U.S., New Zealand, and Australia.
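One way to treat AI logs as records "preserved for transparency and trust" is a tamper-evident, hash-chained log. The sketch below (hypothetical structure, assuming Python; no standard prescribes this format) chains each entry to the previous one so any later alteration is detectable:

```python
import hashlib
import json

def _digest(body: dict) -> str:
    """SHA-256 over a canonical JSON serialization."""
    return hashlib.sha256(
        json.dumps(body, sort_keys=True).encode("utf-8")
    ).hexdigest()

def append_chained(log: list, entry: dict) -> list:
    """Append an entry that carries the hash of the previous entry."""
    prev_hash = log[-1]["entry_hash"] if log else "0" * 64
    body = {"entry": entry, "prev_hash": prev_hash}
    log.append({**body, "entry_hash": _digest(body)})
    return log

def verify_chain(log: list) -> bool:
    """Recompute every hash and link; True iff the log is intact."""
    prev = "0" * 64
    for row in log:
        body = {"entry": row["entry"], "prev_hash": row["prev_hash"]}
        if row["prev_hash"] != prev or row["entry_hash"] != _digest(body):
            return False
        prev = row["entry_hash"]
    return True
```

This is a sketch of the integrity idea only; genuine preservation would add standardized record formats, retention rules, and controlled storage, which is exactly where records managers come in.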
Ultimately, this session underscores a central truth: AI governance cannot be left solely to technologists. It requires the same disciplined evidence‑based mindset that archivists, records managers, and information professionals have always championed.
Notes & References
ISO/IEC JTC 1/SC 42, Information technology — Artificial intelligence — Objectives and approaches for explainability and interpretability of machine learning (ML) models and artificial intelligence (AI) systems, ISO/IEC TS 6254:2025, Edition 1 (2025).
ISO/IEC JTC 1/SC 42, Information technology — Artificial intelligence — Concepts and terminology, ISO/IEC 22989 (latest edition).
ISO/IEC JTC 1, Artificial intelligence management system — Requirements, ISO/IEC 42001 (latest edition).
ISO TC 46/SC 11, Records management and AI — Principles and considerations, ISO/NP TS 25280 (under development).


