Framework Guide
12 min read

KSA AI Governance: A Complete Guide to SDAIA Compliance and PDPL

Comprehensive guide to Saudi Arabia's AI governance framework — SDAIA's 4-tier risk classification, 7 AI Ethics Principles, and PDPL data protection compliance for organizations operating in the Saudi market.

Starkguard Team

KSA AI Governance: What SDAIA Compliance Actually Requires

Saudi Arabia has moved faster on AI governance than most organizations realize. Between the Saudi Data and Artificial Intelligence Authority (SDAIA) publishing its AI Ethics Principles in September 2023, the Personal Data Protection Law (PDPL) entering full enforcement in September 2024, and sector-specific regulators like SAMA and SFDA issuing their own AI-adjacent requirements, the compliance surface area for organizations operating in the Kingdom has expanded substantially in under two years.

This is not a market where you can treat AI governance as aspirational. The Kingdom has committed over $40 billion to AI initiatives under Vision 2030, including the $100 billion Project Transcendence announced in late 2024. That investment comes paired with a regulatory apparatus designed to ensure AI deployment aligns with national values and economic objectives. Organizations deploying AI systems that touch Saudi residents, process data originating in the Kingdom, or serve Saudi-based clients face a layered set of obligations that draws from — but meaningfully differs from — European and American frameworks.

If you are already working with the EU AI Act or NIST AI RMF, some structural concepts will be familiar. The specifics, particularly around data localization and the role of SDAIA as a centralized authority, are distinct enough to require dedicated compliance planning.

SDAIA: The Central Authority You Cannot Ignore

SDAIA was established by Royal Decree in 2019 with a mandate that goes beyond regulation. It functions simultaneously as the Kingdom's national data authority, AI strategy office, and governance regulator. Unlike the fragmented regulatory landscape in the United States, or the distributed competent-authority model under the EU AI Act, SDAIA consolidates AI and data governance under one roof.

In practice, this means SDAIA sets the ethics framework, enforces the PDPL, manages the National Data Bank, and coordinates the National Strategy for Data and AI (NSDAI). When SDAIA publishes guidance, it carries the weight of the entity that also oversees enforcement. The AI Ethics Principles released in September 2023 are not suggestions from an advisory body — they represent the stated expectations of the authority that will evaluate your compliance.

SDAIA also operates the National Center for AI (NCAI), which provides technical resources, runs AI capability programs, and publishes implementation guidance. Organizations seeking to understand the practical expectations behind the ethics principles should monitor NCAI publications closely. The gap between principle and practice is where most compliance failures occur.

The 4-Tier Risk Classification

SDAIA's risk classification framework segments AI systems into four tiers based on the severity of potential impact. The structure resembles the EU AI Act's four-tier model, but the classification criteria and the practical obligations at each tier reflect Saudi-specific priorities.

Unacceptable Risk

AI systems that pose a direct threat to safety, security, or fundamental rights fall into this category and are prohibited outright. This includes social scoring systems used by public authorities for general-purpose citizen evaluation, real-time biometric identification in public spaces without explicit legal authorization, and AI systems designed to exploit vulnerabilities of specific groups — children, elderly individuals, persons with disabilities — in ways that cause harm.

The prohibition list is narrower than the EU AI Act's in some respects and broader in others. Systems that contradict Sharia principles or undermine national security receive particular scrutiny, reflecting the Kingdom's legal and cultural framework.

High-Risk

AI systems operating in areas with significant impact on individuals or society: healthcare diagnostics, credit scoring, employment screening, critical infrastructure management, law enforcement support, and educational assessment. High-risk systems face mandatory requirements including pre-deployment conformity assessments, ongoing monitoring obligations, human oversight provisions, and documentation requirements covering training data, model behavior, and known limitations.

Organizations deploying high-risk AI in the Saudi market should expect SDAIA to require evidence of compliance — not merely self-declaration. The direction of travel mirrors the EU's movement toward third-party conformity assessment for the highest-risk categories, though SDAIA has retained more discretion in how assessments are conducted.

Limited Risk

Systems that interact with individuals but pose moderate risk — chatbots, content recommendation engines, automated customer service systems. The primary obligation is transparency: users must be informed they are interacting with an AI system. Content generated by AI must be identifiable as such. These are not optional courtesies; they are enforceable requirements.
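Transparency at this tier is mechanical enough to enforce in code rather than leave to UI convention. A minimal sketch, with hypothetical names and disclosure text of our own choosing, in which the AI label travels with every generated response:

```python
from dataclasses import dataclass

@dataclass
class LabeledOutput:
    """AI output carrying the disclosures the limited-risk tier requires."""
    text: str
    ai_generated: bool = True  # machine-readable label for downstream systems
    disclosure: str = "You are interacting with an automated AI assistant."

def respond(model_text: str) -> LabeledOutput:
    # Attaching the label at generation time means no consuming surface
    # can display the content without also receiving the disclosure.
    return LabeledOutput(text=model_text)

reply = respond("Your order shipped yesterday.")
print(reply.disclosure, "|", reply.text)
```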

Minimal Risk

AI systems with negligible risk — spam filters, inventory optimization, internal analytics tools — face no specific obligations beyond general PDPL compliance for any personal data they process. This does not mean zero governance. Even minimal-risk systems that process personal data of Saudi residents trigger the full PDPL compliance stack.
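Encoding the four tiers directly in your AI inventory keeps classification and obligations in one place. A minimal sketch in Python: the tier names track SDAIA's framework, but the obligation lists and all identifiers are illustrative assumptions, not an official schema.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Illustrative obligation sets per tier; the authoritative list comes
# from SDAIA guidance, not from this sketch.
TIER_OBLIGATIONS: dict[RiskTier, list[str]] = {
    RiskTier.UNACCEPTABLE: ["do not deploy"],
    RiskTier.HIGH: [
        "pre-deployment conformity assessment",
        "ongoing monitoring",
        "human oversight",
        "training-data and limitations documentation",
    ],
    RiskTier.LIMITED: [
        "disclose AI interaction to users",
        "label AI-generated content",
    ],
    RiskTier.MINIMAL: ["general PDPL compliance for any personal data"],
}

@dataclass
class AISystem:
    name: str
    tier: RiskTier
    rationale: str  # document why this tier was chosen

    def obligations(self) -> list[str]:
        return TIER_OBLIGATIONS[self.tier]

chatbot = AISystem("support-chatbot", RiskTier.LIMITED,
                   "customer-facing conversational system")
print(chatbot.obligations())
```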

The 7 SDAIA AI Ethics Principles

SDAIA's AI Ethics Principles, published in their current form in September 2023, establish seven pillars that organizations must operationalize — not merely acknowledge.

1. Fairness and Non-Discrimination. AI systems must not produce outcomes that unfairly disadvantage individuals or groups based on protected characteristics. This extends beyond algorithmic bias testing to encompass training data representativeness, deployment context analysis, and outcome monitoring across demographic segments. Organizations must demonstrate they have evaluated fairness throughout the AI lifecycle, not just at the point of model validation.
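Outcome monitoring is the part of this principle most amenable to automation. The sketch below assumes approval-style decisions and uses a simple selection-rate comparison across segments; the metric choice and any acceptable threshold are our assumptions, not SDAIA prescriptions.

```python
from collections import defaultdict

def selection_rates(outcomes: list[tuple[str, bool]]) -> dict[str, float]:
    """Approval rate per demographic segment from (segment, approved) pairs."""
    totals: dict[str, int] = defaultdict(int)
    approvals: dict[str, int] = defaultdict(int)
    for segment, approved in outcomes:
        totals[segment] += 1
        approvals[segment] += int(approved)
    return {seg: approvals[seg] / totals[seg] for seg in totals}

def disparate_impact_ratio(rates: dict[str, float]) -> float:
    """Lowest segment rate divided by highest; 1.0 means parity."""
    return min(rates.values()) / max(rates.values())

rates = selection_rates([("A", True), ("A", True), ("A", False),
                         ("B", True), ("B", False), ("B", False)])
print(rates)                          # {'A': ~0.667, 'B': ~0.333}
print(disparate_impact_ratio(rates))  # 0.5, a gap worth investigating
```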

2. Transparency and Explainability. Individuals affected by AI-driven decisions have a right to understand how those decisions were reached. This does not require disclosing proprietary model architectures, but it does require providing meaningful explanations appropriate to the audience and context. A patient receiving an AI-assisted diagnosis and a credit applicant receiving an automated rejection both deserve explanations — but those explanations will differ in form and technical depth.

3. Reliability and Safety. AI systems must perform consistently within their intended operational parameters. This principle mandates testing, validation, and ongoing monitoring — including stress testing for edge cases and adversarial conditions. Systems deployed in safety-critical contexts face heightened reliability requirements, including redundancy and human fallback mechanisms.

4. Privacy and Data Protection. AI systems must comply with the PDPL and any sector-specific data protection requirements. Privacy-by-design principles apply: data minimization, purpose limitation, and storage limitation must be architected into AI systems from inception, not retrofitted as a compliance overlay.

5. Security. AI systems must be protected against manipulation, adversarial attacks, and unauthorized access. This covers the full attack surface: training data poisoning, model extraction, evasion attacks, and infrastructure-level security. Organizations must maintain security controls proportionate to the risk tier of the AI system.

6. Human Oversight. Meaningful human control must be maintained over AI decision-making processes, particularly for high-risk applications. "Meaningful" is the operative word — rubber-stamping automated outputs does not constitute oversight. Organizations must design review processes where human reviewers have the authority, information, and time to exercise genuine judgment.

7. Social and Environmental Responsibility. AI systems should contribute positively to society and minimize environmental impact. This principle encompasses the computational carbon footprint of large-scale model training, the labor practices involved in data annotation, and the broader socioeconomic effects of AI-driven automation. It also aligns AI deployment with Vision 2030's sustainability objectives.

These seven principles are not independent checkboxes. SDAIA expects organizations to demonstrate how they interrelate in practice. A fairness assessment that ignores transparency obligations, or a security program that undermines privacy protections, will not satisfy compliance expectations regardless of how thoroughly each individual principle is addressed.

PDPL Compliance: The Data Layer Beneath Every AI System

The Personal Data Protection Law, issued by Royal Decree M/19 in 2021 and amended in 2023, entered full enforcement in September 2024 when its transition and grace periods ended. Every AI system that processes personal data of Saudi residents — regardless of where the processing organization is headquartered — must comply.

Processing personal data requires explicit consent or a qualifying legal basis. Consent must be informed, specific, freely given, and revocable. For AI systems, this creates particular challenges: if you train a model on personal data collected under one consent scope and later deploy the model for a different purpose, you need fresh consent or a new legal basis. Purpose limitation is strictly enforced.
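In engineering terms, purpose limitation is a gate in front of every processing job. A minimal sketch with hypothetical field names; the point is that the consented purposes are checked before data is reused, including reuse for model training.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ConsentRecord:
    subject_id: str
    purposes: frozenset[str]  # purposes the data subject consented to
    revoked: bool = False     # consent must be revocable under the PDPL

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Processing requires live consent that covers this exact purpose."""
    return not record.revoked and purpose in record.purposes

consent = ConsentRecord("user-42", frozenset({"fraud-detection"}))
assert may_process(consent, "fraud-detection")
# Reusing the same data to train a marketing model fails the gate and
# requires fresh consent or a new legal basis:
assert not may_process(consent, "marketing-model-training")
```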

Data Localization

This is where PDPL diverges most sharply from GDPR. Personal data of Saudi residents must be stored and processed within the Kingdom unless SDAIA grants an explicit exemption for cross-border transfer. Transfer exemptions require demonstrating that the receiving jurisdiction provides adequate data protection — and the adequacy assessment is conducted by SDAIA, not by the data controller.

For organizations running AI workloads on global cloud infrastructure, this means establishing Saudi-based processing environments. Major cloud providers have launched or announced in-Kingdom regions (Google Cloud already operates a Saudi region; AWS and Microsoft Azure have announced theirs), but the organizational and contractual arrangements required to ensure compliant data residency are non-trivial.
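One practical control is a deploy-time residency guard that refuses to provision data stores outside approved in-Kingdom regions. The region identifier below is hypothetical; substitute whatever your provider actually names its Saudi region.

```python
# Hypothetical allow-list; actual identifiers depend on your cloud provider.
KSA_REGIONS = {"ksa-central-1"}

def assert_in_kingdom(resource_name: str, region: str) -> None:
    """Refuse to provision stores for Saudi personal data outside the
    Kingdom unless a SDAIA cross-border transfer exemption is on file."""
    if region not in KSA_REGIONS:
        raise ValueError(
            f"{resource_name}: region {region!r} is outside the Kingdom; "
            "a SDAIA transfer exemption is required before deployment."
        )

assert_in_kingdom("pdpl-training-data", "ksa-central-1")   # passes
# assert_in_kingdom("pdpl-training-data", "eu-west-1")     # raises ValueError
```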

Breach Notification

Data breaches affecting personal data must be reported to SDAIA within 72 hours of discovery. Affected individuals must also be notified without undue delay. The 72-hour clock starts at discovery, not confirmation — organizations cannot delay notification while they investigate the scope of a breach. This aligns with GDPR's timeline but is enforced through SDAIA's consolidated authority.
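Because the clock runs from discovery, incident tooling should compute the SDAIA deadline the moment a breach ticket is opened, not when scoping finishes. A trivial sketch:

```python
from datetime import datetime, timedelta, timezone

NOTIFICATION_WINDOW = timedelta(hours=72)

def sdaia_deadline(discovered_at: datetime) -> datetime:
    """Deadline runs from discovery, not from confirmation of scope."""
    return discovered_at + NOTIFICATION_WINDOW

discovered = datetime(2025, 3, 10, 14, 30, tzinfo=timezone.utc)
print(sdaia_deadline(discovered))  # 2025-03-13 14:30:00+00:00
```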

Data Subject Rights

Saudi residents have the right to access their personal data, request correction, request deletion, restrict processing, and obtain a copy in a portable format. For AI systems, the right to access extends to meaningful information about automated decision-making processes — connecting back to the transparency and explainability ethics principle.

Data Protection Officer

Organizations processing personal data at scale or processing sensitive personal data must appoint a Data Protection Officer (DPO). The DPO must be accessible to SDAIA and to data subjects. Outsourcing the DPO function is permitted, but the organization remains responsible for compliance.

Sector-Specific Requirements

SAMA: Financial Services

The Saudi Central Bank (SAMA) has issued guidance on AI use in financial services that layers additional requirements on top of SDAIA's framework. Financial institutions deploying AI for credit decisions, fraud detection, anti-money laundering, or customer risk profiling must comply with SAMA's model risk management expectations, which draw heavily from established Basel Committee principles. Stress testing, model validation by independent parties, and audit trail requirements are standard expectations.

SAMA's outsourcing regulations also apply when financial institutions use third-party AI services. The accountability remains with the regulated entity, regardless of where the AI model was developed or is hosted.

SFDA: Healthcare and Life Sciences

The Saudi Food and Drug Authority regulates AI systems classified as medical devices or used in clinical decision support. AI-driven diagnostic tools, treatment recommendation systems, and medical imaging analysis software fall under SFDA's regulatory perimeter. Pre-market authorization requirements apply, with clinical evidence expectations that increase with the risk classification of the device.

SFDA's Digital Health Unit has been actively developing guidance specific to AI/ML-based Software as a Medical Device (SaMD). Organizations deploying healthcare AI in Saudi Arabia should expect regulatory expectations that parallel — and in some cases exceed — the FDA's approach to AI/ML-enabled devices.

CST: Telecommunications

The Communications, Space and Technology Commission (CST, formerly CITC) regulates AI applications within telecommunications and digital services. Automated content moderation, network optimization systems, and AI-driven customer interactions fall within CST's purview. The commission's focus areas include content compliance with Saudi broadcasting standards, network resilience, and consumer protection in AI-mediated service delivery.

Key Deadlines and Timeline

The regulatory timeline has compressed significantly. PDPL entered full enforcement in September 2024 — organizations that have not completed their PDPL compliance programs are already operating outside the law. SDAIA has indicated that further implementing regulations detailing AI-specific obligations will be published in phases through 2026, with enforcement actions expected to follow.

Organizations entering the Saudi market should not wait for final implementing regulations. SDAIA has been clear that the AI Ethics Principles represent current expectations, and the PDPL's data processing requirements apply to all AI systems handling personal data today. Building compliance infrastructure now avoids the scramble that characterized GDPR's May 2018 enforcement date across Europe.

How Starkguard Supports KSA Compliance

Starkguard's governance platform maps directly to the compliance requirements outlined above. The platform's risk classification workflow supports SDAIA's 4-tier model, enabling organizations to classify AI systems and track obligations by risk tier. Assessment modules cover the 7 AI Ethics Principles through structured evaluations that produce auditable evidence — the documentation SDAIA expects when compliance is questioned.

For organizations managing multi-framework compliance, Starkguard's cross-framework mapping shows where SDAIA requirements overlap with the EU AI Act, NIST AI RMF, and ISO 42001. If you are already compliant with one framework, you can identify which KSA-specific gaps remain rather than starting from scratch.
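Conceptually, a crosswalk is a mapping from each KSA requirement to its nearest analogue in other frameworks, filtered by what you already satisfy. The sketch below is purely illustrative: the correspondences are loose paraphrases we chose for demonstration, not legal mappings, and a real crosswalk should come from counsel or from the platform itself.

```python
# Illustrative crosswalk only; entries are paraphrases, not legal citations.
CROSSWALK: dict[str, dict[str, str]] = {
    "sdaia:transparency-explainability": {
        "eu-ai-act": "transparency duties for limited/high-risk systems",
        "nist-ai-rmf": "GOVERN and MAP functions",
        "iso-42001": "transparency controls",
    },
    "sdaia:human-oversight": {
        "eu-ai-act": "human oversight duties for high-risk systems",
        "nist-ai-rmf": "MANAGE function",
        "iso-42001": "human oversight controls",
    },
}

def remaining_gaps(requirement: str, frameworks_met: set[str]) -> dict[str, str]:
    """Mappings for frameworks you have not yet evidenced."""
    return {fw: ref for fw, ref in CROSSWALK[requirement].items()
            if fw not in frameworks_met}

# Already EU AI Act compliant? See what else maps to human oversight:
print(remaining_gaps("sdaia:human-oversight", {"eu-ai-act"}))
```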

The platform's action plan and remediation tracking features provide the structured follow-through that turns assessment results into measurable compliance improvements — documented, timestamped, and ready for regulatory review.

Getting Started: A Practical Checklist

  1. Inventory your AI systems that process data from Saudi residents or operate within the Kingdom. Include third-party AI services and embedded AI components (a minimal inventory record is sketched after this list).
  2. Classify each system against SDAIA's 4-tier risk framework. Document your classification rationale.
  3. Conduct a PDPL gap assessment covering consent mechanisms, data localization, breach notification procedures, and data subject rights processes.
  4. Map your AI systems against the 7 Ethics Principles. Identify where you have evidence of compliance and where gaps exist.
  5. Appoint a DPO if your processing activities require one. Ensure they have direct reporting lines and adequate resources.
  6. Establish in-Kingdom data processing for personal data of Saudi residents. Verify cloud infrastructure configurations and contractual arrangements.
  7. Identify sector-specific obligations from SAMA, SFDA, or CST that apply to your AI deployments.
  8. Build ongoing monitoring processes. Compliance is not a one-time assessment — SDAIA expects continuous oversight proportionate to risk.
  9. Document everything. Regulatory inquiries will demand evidence of process, not just outcomes.
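Steps 1 through 4 converge on a single artifact: one inventory record per AI system. A minimal sketch of such a record, with illustrative field names rather than any SDAIA-mandated schema:

```python
from dataclasses import dataclass

@dataclass
class InventoryEntry:
    """One row in the AI-system inventory (checklist steps 1 through 4)."""
    system: str
    vendor: str | None                  # third-party and embedded AI count too
    tier: str                           # SDAIA 4-tier classification (step 2)
    tier_rationale: str                 # documented classification rationale
    pdpl_gaps: list[str]                # consent, localization, breach, rights (step 3)
    principle_evidence: dict[str, str]  # ethics principle -> evidence location (step 4)

entry = InventoryEntry(
    system="credit-scoring-model",
    vendor=None,
    tier="high",
    tier_rationale="automated credit decisions affecting Saudi residents",
    pdpl_gaps=["in-Kingdom data residency not yet verified"],
    principle_evidence={"fairness": "reports/fairness-q3.pdf"},
)
```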

Understanding what AI governance requires at a foundational level helps contextualize these KSA-specific obligations within a broader governance program.


Ready to map your organization's AI governance posture against KSA requirements? Start a free trial to assess your current state, or request a demo to see how Starkguard structures multi-framework compliance across SDAIA, PDPL, and international standards.

Starkguard Team

AI Governance Experts

Tags:
ksa-ai
sdaia
pdpl
saudi-arabia
ai-governance
compliance
data-localization

