AS ISO IEC 23894-2023 PDF
Name in English:
Standard AS ISO IEC 23894-2023
Original standard AS ISO IEC 23894-2023, full version in PDF. Additional information and a preview are available on request.
Full title and description
ISO/IEC 23894:2023 — Information technology — Artificial intelligence — Guidance on risk management. This international standard provides guidance on how organisations that develop, produce, deploy or use products, systems and services utilising artificial intelligence (AI) can identify, assess, treat and monitor AI-related risks, and on how to integrate risk management into their AI activities and processes.
Abstract
This document offers practical guidance to help organisations manage risks specifically related to AI across the AI lifecycle. It describes processes for identifying AI risks, assessing and treating them, assigning roles and responsibilities, and implementing monitoring and continuous improvement so that risk management becomes part of AI design, development and deployment. The guidance is intended to be tailored to an organisation’s context and scale.
General information
- Status: Published.
- Publication date: February 2023 (Edition 1, 2023-02).
- Publisher: Joint ISO/IEC publication (ISO/IEC JTC 1/SC 42).
- ICS / categories: 35.020 — Information technology (IT) in general; AI standards family (ISO/IEC JTC 1/SC 42).
- Edition / version: Edition 1 (2023).
- Number of pages: 26 (official ISO bibliographic record).
Scope
Guidance is provided for organisations of any size and sector that develop, produce, deploy or use AI-enabled products, systems or services. The standard focuses on AI-specific aspects of risk management and on how to integrate those activities with existing organisational risk processes; it does not prescribe a single mandatory process but offers adaptable guidance that can be applied in context.
Key topics and requirements
- Principles for AI risk management and integration with organisational risk practices.
- Processes to identify AI‑specific threats, hazards and potential harms across the AI lifecycle.
- Risk assessment approaches (likelihood, impact, context‑sensitive analysis) and risk prioritisation.
- Risk treatment options and practical controls/mitigations, including monitoring and validation activities.
- Roles, responsibilities and governance to ensure accountability for AI risk management.
- Documentation, evidence and continual monitoring / review to support ongoing risk control and improvement.
- Customisation guidance so organisations can scale and adapt risk management to their context and application criticality.
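To make the assessment and prioritisation topics above concrete, the following is a minimal illustrative sketch of a likelihood-times-impact risk register in Python. Note that ISO/IEC 23894 does not prescribe any particular scoring scheme; the 1–5 scales, the multiplicative score and the example risk entries here are hypothetical assumptions, not content from the standard.

```python
from dataclasses import dataclass

# Illustrative only: ISO/IEC 23894 does not mandate a scoring scheme.
# The 1-5 likelihood/impact scales and the example risks below are
# hypothetical, shown simply to demonstrate risk prioritisation.

@dataclass
class AIRisk:
    description: str
    likelihood: int  # 1 (rare) .. 5 (almost certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    @property
    def score(self) -> int:
        # Simple multiplicative score; organisations often use
        # context-specific matrices instead.
        return self.likelihood * self.impact

def prioritise(risks: list[AIRisk]) -> list[AIRisk]:
    """Return risks sorted from highest to lowest score."""
    return sorted(risks, key=lambda r: r.score, reverse=True)

# Example register entries (hypothetical).
register = [
    AIRisk("Training data bias leads to unfair outcomes", 4, 4),
    AIRisk("Model drift degrades accuracy after deployment", 3, 3),
    AIRisk("Adversarial inputs cause unsafe behaviour", 2, 5),
]

for risk in prioritise(register):
    print(f"{risk.score:>2}  {risk.description}")
```

In practice the treatment and monitoring steps would attach owners, controls and review dates to each entry; the sketch covers only the identification and prioritisation stages.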
Typical use and users
Primary users include AI developers, product and engineering teams, risk and compliance managers, security and safety officers, internal auditors, procurement teams and executive sponsors responsible for AI governance. Typical applications include embedding risk practices into AI system design, supplier assessments, deployment checklists, compliance and assurance activities, and internal/external audits.
Related standards
ISO/IEC 23894:2023 sits within the ISO/IEC JTC 1/SC 42 AI standards family and is commonly used alongside foundational and management standards such as ISO/IEC 22989 (AI — concepts and terminology), ISO/IEC TR 24028 (overview of trustworthiness in AI) and AI management or impact standards such as ISO/IEC 42001 (AI management systems) and ISO/IEC 42005 (AI system impact assessment). Regional/adoption editions (for example EN ISO/IEC 23894:2024 / national adoptions) exist where the ISO text is adopted as a regional or national standard.
Keywords
AI risk management, artificial intelligence, risk assessment, mitigation, governance, AI lifecycle, SC 42, ISO/IEC 23894, trustworthiness, monitoring, organisational risk integration.
FAQ
Q: What is this standard?
A: ISO/IEC 23894:2023 is an international guidance standard that provides organisations with recommended practices for managing risks that arise from the use, development and deployment of AI systems. It is a non-prescriptive guidance document intended to be adapted to organisational context.
Q: What does it cover?
A: It covers identification, assessment, treatment and monitoring of AI‑specific risks; governance and roles; integration of AI risk activities into organisational processes; and tailoring guidance so measures are proportionate to application criticality. It does not mandate a single prescriptive process but supplies adaptable guidance.
Q: Who typically uses it?
A: Organisations that build, buy or operate AI solutions — including AI engineers, product owners, risk and compliance teams, safety/security officers and auditors — use the standard to structure and evidence their AI risk management practices.
Q: Is it current or superseded?
A: The ISO bibliographic record shows ISO/IEC 23894 published in February 2023 (Edition 1, 2023) and listed as published. In some regions the ISO text has been adopted as regional/national editions (for example EN/BS adoptions published in 2024). Users should check their national standards body for any local adoption or editorial changes.
Q: Is it part of a series?
A: Yes — it is part of the ISO/IEC JTC 1/SC 42 family of AI standards and is intended to work together with complementary documents covering terminology, trustworthiness, management systems, impact assessment and related topics (examples include ISO/IEC 22989, ISO/IEC TR 24028 and ISO/IEC 42001/42005).
Q: What are the key keywords?
A: AI risk management, risk assessment, governance, lifecycle, mitigation, monitoring, trustworthiness, ISO/IEC JTC 1/SC 42, AI assurance.