
Deep dive into the AI Risk Framework approach in ISO42001 (part 1)



ISO42001 is a comprehensive new framework for implementing a standard of “Responsible AI”. It includes best practices for building a comprehensive AI Management System (AIMS) that covers the full lifecycle, from AI design and development through monitoring, serving, remediation, and retraining.

Risk-based approach

ISO42001 places strong emphasis on a risk-based approach to the AI Management System, centered on creating controls and policies around the AI system. The goals of this risk-based approach are to:

  1. Provide a clear, risk-based AI process workflow that supports the compliance posture and addresses the risk of the AI producing unexpected behavior

  2. Integrate well with other information security management systems such as ISMS-P, SOC2, or ISO27001.

  3. Emphasize data privacy aspects such as PII (Personally Identifiable Information), MNPI (Material Non-Public Information), and NPI (Non-Public Personal Information). For a more comprehensive framework, we also recommend looking into ISO27017 for cloud data and permission security controls.

What are the risk components that need to be addressed by implementing ISO42001?

Data Risk

  • How do you maintain data quality?

  • What data monitoring tools are used, and how do you reduce inconsistent input? (A minimal validation sketch follows this list.)

  • What are the remediation steps to mitigate data loss or leakage?

  • Is there any regulatory impact if your system cannot recover lost data?
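
To make the data-quality and consistency questions concrete, here is a minimal sketch of an automated input-validation check. The field names, allowed values, and the `validate_batch` helper are hypothetical illustrations, not part of ISO42001; they only show the kind of control whose output an AIMS could retain as data-quality evidence.

```python
from dataclasses import dataclass, field

# Hypothetical schema for one record flowing into the AI system.
REQUIRED_FIELDS = {"customer_id", "transaction_amount", "currency"}
ALLOWED_CURRENCIES = {"USD", "EUR", "GBP"}

@dataclass
class ValidationReport:
    total: int = 0
    rejected: int = 0
    reasons: list = field(default_factory=list)

def validate_batch(records):
    """Reject records with missing fields or inconsistent values.

    Returns the clean records plus a report that can be logged as
    evidence for the data-quality control in the AI Management System.
    """
    report = ValidationReport()
    clean = []
    for rec in records:
        report.total += 1
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            report.rejected += 1
            report.reasons.append(f"missing fields: {sorted(missing)}")
            continue
        if rec["currency"] not in ALLOWED_CURRENCIES:
            report.rejected += 1
            report.reasons.append(f"unknown currency: {rec['currency']}")
            continue
        if rec["transaction_amount"] < 0:
            report.rejected += 1
            report.reasons.append("negative transaction_amount")
            continue
        clean.append(rec)
    return clean, report

if __name__ == "__main__":
    batch = [
        {"customer_id": "c1", "transaction_amount": 120.0, "currency": "USD"},
        {"customer_id": "c2", "transaction_amount": -5.0, "currency": "USD"},
        {"customer_id": "c3", "transaction_amount": 80.0, "currency": "XYZ"},
    ]
    clean, report = validate_batch(batch)
    print(f"accepted {len(clean)} of {report.total}; reasons: {report.reasons}")
```

In practice the same idea scales up through dedicated data-quality tooling, but the principle is the same: inconsistent input is rejected or quarantined before it reaches the model, and the rejection reasons are recorded.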

Security Risk

  • How do we make sure our AI system is not vulnerable to cyber-attacks?

  • Do we have privileged-role access controls in place for every employee? (A minimal role-check sketch follows this list.)

  • Do we have the cloud / on-prem security infrastructure to support this effort?
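
As an illustration of the access-control question above, here is a minimal sketch of a deny-by-default, role-based permission check. The roles, permissions, and the `can` helper are hypothetical, not prescribed by ISO42001; in a real deployment this mapping would live in your cloud IAM or identity provider rather than in application code.

```python
# Hypothetical role-to-permission mapping for an AI platform.
ROLE_PERMISSIONS = {
    "ml_engineer": {"read_features", "train_model", "view_metrics"},
    "model_approver": {"view_metrics", "approve_release"},
    "admin": {"read_features", "train_model", "view_metrics",
              "approve_release", "manage_users"},
}

# Hypothetical user-to-role assignments.
USER_ROLES = {
    "alice": {"ml_engineer"},
    "bob": {"model_approver"},
}

def can(user: str, permission: str) -> bool:
    """Return True only if one of the user's roles grants the permission."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

# Deny-by-default: unknown users or ungranted permissions are rejected.
assert can("alice", "train_model")
assert not can("alice", "approve_release")
assert not can("mallory", "manage_users")
```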


Continuity Risk

Continuity risk is the type of risk associated with day-to-day operations.

  • Can we modify our existing systems to bring in the new generative AI system? What is the investment cost, and what is the payback period of that investment? (A simple payback calculation is sketched after this list.)

  • Are we targeting the right business functions to leverage the new AI system?

  • What is the negative impact if the AI system produces wrong information? What will the impact be on the company’s operations and finances?
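
As a rough illustration of the payback question above, here is a simple undiscounted payback-period calculation. The cost and savings figures are purely hypothetical placeholders, and a real business case would also account for discounting and ongoing run costs.

```python
def payback_period_years(upfront_cost: float, annual_net_benefit: float) -> float:
    """Simple (undiscounted) payback period: years until cumulative
    net benefit covers the upfront investment."""
    if annual_net_benefit <= 0:
        return float("inf")  # the investment never pays back
    return upfront_cost / annual_net_benefit

# Hypothetical figures: $500k to deploy the AI system,
# $200k per year of net operating savings.
print(payback_period_years(500_000, 200_000))  # -> 2.5 years
```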

Vendor / 3rd party Risk

  • Has the 3rd party vendor, such as a 3rd party AI provider, had its cybersecurity risk assessed, for example through GDPR compliance or SOC2 / ISO27001 certification?

  • How do we monitor and manage ongoing risks with the 3rd party vendor? (A minimal vendor risk-register sketch follows this list.)

  • Does the vendor hold the ‘keys to the kingdom’, such that our operations would be impacted if the vendor’s system were compromised?
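
To illustrate the ongoing-monitoring question, here is a minimal sketch of a vendor risk-register entry. The fields, the annual review interval, and the example vendor name are hypothetical and not mandated by ISO42001; they only show how certifications and re-assessment dates might be tracked.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class VendorAssessment:
    vendor: str
    certifications: tuple      # e.g. ("SOC2", "ISO27001")
    gdpr_dpa_signed: bool      # data processing agreement in place
    last_assessed: date
    review_interval_days: int = 365

    def is_due_for_review(self, today: date) -> bool:
        """Flag the vendor for re-assessment once the review interval has passed."""
        return (today - self.last_assessed).days >= self.review_interval_days

# Hypothetical register entry for a 3rd party AI provider.
entry = VendorAssessment(
    vendor="example-ai-provider",
    certifications=("SOC2", "ISO27001"),
    gdpr_dpa_signed=True,
    last_assessed=date(2023, 6, 1),
)
print(entry.is_due_for_review(date(2024, 9, 1)))  # -> True, re-assessment needed
```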


Risk is a big part of AI governance.

In the next write-up, we will dive deeper into how to build a risk- and privacy-centric AI system that fulfills the ISO42001 framework.



To be continued
