Teamwork with the Machine
Human–Machine Teaming Moves Forward in ISO/IEC JTC 1/SC 42
(Catching up after the JTC 1/SC 42 Plenary, Sydney, October 2025)
I wasn’t in Sydney for the October meetings of ISO/IEC JTC 1/SC 42 on Artificial Intelligence, but the outcomes have been impossible to miss. Conversations that began as abstract discussions about collaboration between people and intelligent systems have now taken formal shape in ISO/IEC 25589, Information technology — Artificial intelligence — Framework for Human–Machine Teaming.
The project feels like a turning point. It shows how SC 42 is beginning to treat human involvement as an essential part of AI’s technical architecture rather than an afterthought.
The project finds its footing
Within Working Group 4 (Use Cases and Applications), ISO/IEC 25589 is progressing toward the Committee Draft stage in 2026. Convenor Nobu Hosokawa and editor Yuchang Cheng are guiding an interconnected group of projects that together define what human–machine cooperation looks like in practice.
Alongside 25589 are several closely linked efforts:
TR 42109 exploring use cases of human–machine teaming
TR 24030 (third edition) expanding the catalog of AI use cases
PWI 25880 providing organizational guidance for adopting human–machine teaming
A new NP (new work item proposal) on socio-technical system modelling and analysis for AI applications
Each piece contributes to a larger whole. The use cases provide real-world grounding, the socio-technical model captures the system-level interactions, and the implementation guide connects the framework to day-to-day operations.
Socio-technical thinking comes into focus
The socio-technical project in particular represents a clear evolution in how SC 42 views AI. It recognizes that intelligent systems operate inside organizations, workflows, and social contexts. Where 25589 defines the overall framework for cooperation, the socio-technical project provides a structured way to analyze how humans and AI systems influence each other over time.
That pairing reflects a broader shift within the committee toward understanding AI as a living system that must align with human judgment, institutional responsibility, and collective values.
From systems to relationships
I have followed SC 42 for several years, and this feels like a moment when the conversation is changing. The work is moving beyond technical performance toward understanding relationships of agency and accountability. Human–machine teaming captures that shift.
It is about clarity of roles, trust in process, and shared responsibility for results. These are not abstract ideas anymore. They are becoming measurable characteristics within a formal standard, shaping how teams that include AI systems will be designed, managed, and governed.
Recognition and coordination
Human–machine teaming has also been identified as one of the committee’s cross-cutting themes, alongside agentic AI and generative AI. This ensures coordination among working groups and keeps the topic visible at the strategic level.
WG 4 will continue to meet regularly through 2026, with publication of ISO/IEC 25589 expected later in the decade. The framework will be joined by its companion standards on implementation and socio-technical modelling, forming a consistent structure that connects concept, method, and practice.
Looking ahead
To me, the significance of this work lies in how it reframes what “trustworthy AI” means. SC 42 is beginning to define not just technical safeguards or risk controls but the actual dynamics of collaboration between humans and intelligent systems.
ISO/IEC 25589 is where those ideas start to become operational. It is the point where governance meets teamwork, where oversight and autonomy must coexist. The framework may be a technical document, but its purpose is fundamentally human.