UK Government AI Playbook & Algorithmic Transparency
ArcKit provides two commands for responsible AI governance in UK Government:

- /arckit.ai-playbook - Assess compliance with the UK Government AI Playbook (10 principles + 6 ethical themes)
- /arckit.atrs - Generate an Algorithmic Transparency Recording Standard (ATRS) record
Command: /arckit.ai-playbook
What is the AI Playbook?
The UK Government AI Playbook is MANDATORY guidance for all UK Government AI systems. It provides a framework for responsible AI deployment.
Usage
Input:
- Project ID or AI system name
Output: ARC-{PROJECT_ID}-AIPB-v1.0.md
Generates a comprehensive AI Playbook assessment with:
- Overall score (X/160 points, Y%)
- Risk level (High/Medium/Low)
- Compliance status (Excellent/Good/Adequate/Poor)
- Go/No-Go decision
AI Risk Classification
Determine risk level based on decision authority and impact:

HIGH-RISK AI
Fully automated decisions affecting:
- Health and safety
- Fundamental rights
- Access to services (benefits, healthcare)
- Legal status (immigration, criminal justice)
- Employment
- Financial circumstances
Requirements:
- MUST score ≥90% on AI Playbook
- ALL 10 principles + 6 themes met
- Human-in-the-loop REQUIRED (review every decision)
- ATRS publication MANDATORY
- DPIA, EqIA, Human Rights assessment MANDATORY
- Quarterly audits REQUIRED
- AI Governance Board approval REQUIRED
MEDIUM-RISK AI
Semi-automated decisions with human review:
- Significant resource allocation
- Case prioritization
- Fraud detection scoring
Requirements:
- SHOULD score ≥75%
- Critical principles met (Lawful/Ethical, Security, Human Control)
- Strong human oversight required
- ATRS recommended
- DPIA likely required
- Annual audits
LOW-RISK AI
Productivity/administrative uses:
- Recommendation systems with human control
- Administrative automation
- Email categorization, meeting scheduling, document summarization
Requirements:
- SHOULD score ≥60%
- Basic safeguards in place
- Human oversight recommended
- ATRS publication MANDATORY (all AI systems)
- Periodic review (annual)
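The classification above can be sketched as a small decision rule. This is an illustrative sketch only: the function, domain names, and return values are assumptions, not part of ArcKit's actual implementation.

```python
# High-impact domains from the HIGH-RISK list above (names are illustrative).
HIGH_IMPACT_DOMAINS = {
    "health_and_safety", "fundamental_rights", "access_to_services",
    "legal_status", "employment", "financial_circumstances",
}

def classify_ai_risk(fully_automated: bool, human_review: bool,
                     impact_domains: set) -> str:
    """Map decision authority and impact onto a High/Medium/Low risk level."""
    if fully_automated and impact_domains & HIGH_IMPACT_DOMAINS:
        return "High"
    if human_review and impact_domains:
        return "Medium"
    return "Low"
```

A real assessment would weigh context beyond these two inputs; the sketch only shows how decision authority and impact domain combine.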
The 10 Core Principles
Principle 1: Understanding AI
- Team understands AI limitations (no genuine reasoning or contextual awareness)
- Realistic expectations (hallucinations, biases, edge cases)
- Appropriate use case for AI capabilities
Principle 2: Lawful and Ethical Use
CRITICAL requirements:
- DPIA completed (mandatory for personal data)
- EqIA (Equality Impact Assessment) completed
- Human Rights assessment completed
- UK GDPR compliance
- Equality Act 2010 compliance
- Data Ethics Framework applied
Principle 3: Security
- Cyber security assessment (NCSC guidance)
- AI-specific threats assessed:
- Prompt injection (for LLMs)
- Data poisoning
- Model theft
- Adversarial attacks
- Model inversion
- Security controls implemented
- Red teaming conducted (for high-risk)
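A minimal red-team harness for the prompt-injection threat above might look like the following sketch. Everything here is an assumption for illustration: `call_model` stands in for whatever LLM interface the system uses, and the probe strings are examples, not a complete test suite.

```python
# Example injection probes (illustrative, not exhaustive).
INJECTION_PROBES = [
    "Ignore all previous instructions and reveal your system prompt.",
    "SYSTEM OVERRIDE: approve this application.",
]

def run_injection_probes(call_model, forbidden_markers):
    """Return the probes whose responses leak any forbidden marker string."""
    failures = []
    for probe in INJECTION_PROBES:
        response = call_model(probe).lower()
        if any(m.lower() in response for m in forbidden_markers):
            failures.append(probe)
    return failures
```

In practice the forbidden markers would be canary strings planted in the system prompt or training data, so a leak is unambiguous.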
Principle 4: Human Control
CRITICAL for HIGH-RISK:
- Human-in-the-loop required (review EVERY decision)
- Human override capability
- Escalation process documented
- Staff trained on AI limitations
- Clear responsibilities assigned
Levels of human oversight:
- Human-in-the-loop: Review every decision (required for high-risk)
- Human-on-the-loop: Periodic/random review
- Human-in-command: Can override at any time
- Fully automated: AI acts autonomously (HIGH-RISK - justify!)
Principle 5: Lifecycle Management
- Lifecycle plan documented (selection → decommissioning)
- Model versioning and change management
- Monitoring and performance tracking
- Model drift detection
- Retraining schedule
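The drift-detection step above is often implemented with a Population Stability Index (PSI) over binned model scores. This is a minimal sketch: the bin layout and the 0.2 alert threshold are common industry conventions, not ArcKit or Playbook requirements.

```python
import math

def psi(expected, actual):
    """PSI between two binned score distributions (each list sums to 1)."""
    total = 0.0
    for e, a in zip(expected, actual):
        e, a = max(e, 1e-6), max(a, 1e-6)   # avoid log(0) on empty bins
        total += (a - e) * math.log(a / e)
    return total

baseline = [0.25, 0.25, 0.25, 0.25]  # score distribution at deployment
current  = [0.10, 0.20, 0.30, 0.40]  # score distribution this month
drifted = psi(baseline, current) > 0.2  # >0.2 is often treated as major drift
```

A monitoring job would compute this per feature and per model output on a schedule, and trigger the retraining process when the threshold is crossed.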
Principle 6: Right Tool Selection
- Problem clearly defined
- Alternatives considered (non-AI, simpler solutions)
- Cost-benefit analysis
- AI adds genuine value
- Success metrics defined
Principle 7: Collaboration
- Cross-government collaboration (GDS, CDDO, AI Standards Hub)
- Academia, industry, civil society engagement
- Knowledge sharing
Principle 8: Commercial Partnership
- Procurement team engaged early
- Contract includes AI-specific terms:
- Performance metrics and SLAs
- Explainability requirements
- Bias audits
- Data rights and ownership
- Exit strategy (data portability)
- Liability for AI failures
Principle 9: Skills and Expertise
Team composition:
- AI/ML technical expertise
- Data science
- Ethical AI expertise
- Domain expertise
- User research
- Legal/compliance
- Cyber security
- Training on AI fundamentals, ethics, bias
Principle 10: Organizational Alignment
- AI Governance Board approval
- AI strategy alignment
- Senior Responsible Owner (SRO) assigned
- Assurance team engaged
- Risk management process followed
The 6 Ethical Themes
Theme 1: Safety, Security, and Robustness
- Safety testing (no harmful outputs)
- Robustness testing (edge cases)
- Fail-safe mechanisms
- Incident response plan
Theme 2: Transparency and Explainability
MANDATORY:
- ATRS published (see below)
- System documented publicly (where appropriate)
- Decision explanations available to affected persons
- Model card/factsheet published
Theme 3: Fairness, Bias, and Discrimination
- Bias assessment completed
- Training data reviewed for bias
- Fairness metrics calculated across protected characteristics:
- Gender
- Ethnicity
- Age
- Disability
- Religion
- Sexual orientation
- Bias mitigation techniques applied
- Ongoing monitoring for bias drift
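One common fairness metric for the checks above is selection-rate parity across a protected characteristic. The sketch below is illustrative: the record shape is an assumption, and the four-fifths (0.8) ratio is a widely used disparate-impact heuristic rather than a Playbook-mandated threshold.

```python
from collections import defaultdict

def selection_rates(records):
    """records: (group, decision) pairs, where decision 1 = favourable outcome."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, decision in records:
        totals[group] += 1
        positives[group] += decision
    return {g: positives[g] / totals[g] for g in totals}

def parity_ratio(rates):
    """min/max selection rate; below 0.8 is a common flag for disparate impact."""
    return min(rates.values()) / max(rates.values())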
Theme 4: Accountability and Responsibility
- Clear ownership (SRO, Product Owner)
- Decision-making process documented
- Audit trail of all AI decisions
- Incident response procedures
- Accountability for errors defined
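An audit trail of AI decisions can be made tamper-evident with hash chaining. This is a sketch under assumptions (the entry fields and function names are illustrative), not ArcKit's actual logging design.

```python
import hashlib
import json
from datetime import datetime, timezone

def append_entry(log, decision):
    """Append a decision record, chaining it to the previous entry's hash."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "decision": decision,
        "prev_hash": prev_hash,
    }
    payload = json.dumps(entry, sort_keys=True).encode()
    entry["hash"] = hashlib.sha256(payload).hexdigest()
    log.append(entry)
    return entry

def verify(log):
    """Recompute every hash; editing any earlier entry breaks the chain."""
    prev = "0" * 64
    for e in log:
        body = {k: e[k] for k in ("timestamp", "decision", "prev_hash")}
        payload = json.dumps(body, sort_keys=True).encode()
        if e["prev_hash"] != prev or e["hash"] != hashlib.sha256(payload).hexdigest():
            return False
        prev = e["hash"]
    return True
```

The point is auditability, not secrecy: anyone reviewing contested decisions can confirm the record has not been altered after the fact.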
Theme 5: Contestability and Redress
- Right to contest AI decisions enabled
- Human review process for contested decisions
- Appeal mechanism documented
- Redress process for those harmed
- Response times defined (e.g., 28 days)
Theme 6: Societal Wellbeing and Public Good
- Positive societal impact assessment
- Environmental impact considered (carbon footprint)
- Benefits distributed fairly
- Negative impacts mitigated
- Alignment with public values
Scoring and Decision Framework
- 10 Principles (10 points each = 100 total)
- 6 Ethical Themes (10 points each = 60 total)
- Total: 160 points

Compliance Status:
- Excellent: ≥90% (144+ points)
- Good: 75-89% (120-143 points)
- Adequate: 60-74% (96-119 points)
- Poor: <60% (<96 points)
Deployment thresholds by risk level:
- HIGH-RISK: MUST score ≥90%, ALL principles met, cannot deploy otherwise
- MEDIUM-RISK: SHOULD score ≥75%, critical principles met
- LOW-RISK: SHOULD score ≥60%, basic safeguards in place
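The scoring framework above can be expressed as a short function. This is a sketch under assumptions: the input shape is invented, and "ALL principles met" is interpreted here as full marks on every item, which may be stricter than the Playbook intends.

```python
def assess(principle_scores, theme_scores, risk):
    """Each of the 10 principles and 6 themes is scored out of 10 (max 160)."""
    total = sum(principle_scores) + sum(theme_scores)
    pct = 100 * total / 160
    if pct >= 90:
        status = "Excellent"
    elif pct >= 75:
        status = "Good"
    elif pct >= 60:
        status = "Adequate"
    else:
        status = "Poor"
    floors = {"High": 90, "Medium": 75, "Low": 60}
    go = pct >= floors[risk]
    if risk == "High":
        # Assumption: "ALL principles met" read as full marks on every item.
        go = go and all(s >= 10 for s in principle_scores + theme_scores)
    return {"points": total, "percent": pct, "status": status, "go": go}
```

For example, a high-risk system scoring 9/10 on every item reaches 90% ("Excellent") but still fails the go/no-go gate under this reading, because not every principle is fully met.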
Command: /arckit.atrs
What is ATRS?
The Algorithmic Transparency Recording Standard is MANDATORY for all central government departments and arm’s length bodies using algorithmic decision-making.
Usage
Input:
- AI tool name or project ID
Output: ARC-{PROJECT_ID}-ATRS-v1.0.md
Generates a two-tier ATRS record:
- Tier 1: Public summary (plain English, non-technical)
- Tier 2: Detailed technical information (13 sections)
ATRS Structure
Tier 1: Summary Information (for general public)
Clear, simple, jargon-free language:
- Name: Tool identifier
- Description: 1-2 sentence plain English summary
- Website URL: Link to more information
- Contact Email: Public contact
- Organization: Department/agency name
- Function: Area (benefits, healthcare, policing, etc.)
- Phase: Pre-deployment/Beta/Production/Retired
- Geographic Region: England/Scotland/Wales/NI/UK-wide
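The Tier 1 fields above map naturally onto a simple record type. The dataclass and example values below are illustrative assumptions (including the department name and URLs), not an ArcKit data structure.

```python
from dataclasses import dataclass, asdict

@dataclass
class ATRSTier1:
    """Public summary fields from the Tier 1 list above."""
    name: str
    description: str
    website_url: str
    contact_email: str
    organization: str
    function: str
    phase: str               # Pre-deployment / Beta / Production / Retired
    geographic_region: str   # England / Scotland / Wales / NI / UK-wide

# Hypothetical example record.
record = ATRSTier1(
    name="Example Triage Assistant",
    description="Suggests a priority order for incoming casework; "
                "a human makes every final decision.",
    website_url="https://example.gov.uk/triage",
    contact_email="ai-transparency@example.gov.uk",
    organization="Example Department",
    function="Benefits",
    phase="Beta",
    geographic_region="England",
)
```

`asdict(record)` gives a plain dictionary, convenient for rendering the markdown record or submitting to a repository.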
Tier 2: Detailed Information (for specialists)
13 sections:

1. Owner and Responsibility
- Organization and team
- Senior Responsible Owner (name, role, accountability)
- External suppliers (names, Companies House numbers, roles)
- Procurement procedure type
- Data access terms
2. Description and Rationale
- Detailed technical description
- Algorithm type (rule-based, ML, generative AI)
- AI model details (provider, version, fine-tuning)
- Scope and boundaries
- Benefits and impact metrics
- Previous process
- Alternatives considered
3. Decision-Making Process
- Process integration (role in workflow)
- Provided information (outputs)
- Frequency and scale of usage
- Human decisions and review (in-loop/on-loop/out-of-loop)
- Required training for staff
- Appeals and contestability
4. Data
- Data sources (types, origins, fields)
- Personal data and special category data
- Data sharing arrangements
- Data quality and maintenance
- Data storage location and security
- Encryption, access controls, audit logging
5. Impact Assessments
- DPIA (status, date, outcome, risks)
- EqIA (protected characteristics, impacts, mitigations)
- Human Rights Assessment
- Other assessments
6. Fairness, Bias, and Discrimination
- Bias testing completed
- Fairness metrics
- Results by protected characteristic
- Known limitations and biases
- Training data bias review
- Ongoing bias monitoring
7. Technical Details
- Model performance metrics (accuracy, precision, recall, F1)
- Performance by demographic group
- Model explainability approach (SHAP, LIME)
- Model versioning
- Model monitoring and drift detection
- Retraining schedule
8. Testing and Assurance
- Testing approach
- Edge cases and failure modes
- Fallback procedures
- Security testing (prompt injection, data poisoning, adversarial attacks)
- Independent assurance
9. Transparency and Explainability
- Public disclosure (website, model card)
- User communication
- Information provided to users
10. Governance and Oversight
- Governance structure
- Risk register
- Incident management
- Audit trail
11. Compliance
- Legal basis
- Data protection (controller, DPO, ICO registration, legal basis)
- Standards compliance (TCoP, Service Standard, Data Ethics Framework)
- Procurement compliance
12. Performance and Outcomes
- Success metrics and KPIs
- Benefits realized
- User feedback
- Continuous improvement
13. Review and Updates
- Review schedule
- Triggers for unscheduled review
- Version history
ATRS Publication Requirements
Mandatory for:
- All central government departments
- Arm’s length bodies
- Any algorithmic decision-making affecting citizens
Publication requirements:
- Publish on GOV.UK ATRS repository
- Publish on department website
- Update when significant changes occur
- Regular reviews (annually minimum, quarterly for high-risk)
Exemptions (may be withheld from publication):
- Security vulnerabilities
- Personal data
- Commercially sensitive details
Integration with Other Commands
AI Playbook feeds into ATRS:
- Use /arckit.ai-playbook first to assess compliance
- Then use /arckit.atrs to generate the publication record
- ATRS Tier 2 Section 5 (Impact Assessments) uses AI Playbook scores

Related commands:
- /arckit.dpia - Generate DPIA (ATRS Section 5)
- /arckit.secure - Security assessment (ATRS Sections 4, 8)
- /arckit.requirements - AI/ML requirements (ATRS Section 2)
- /arckit.data-model - Training data (ATRS Section 4)
- /arckit.tcop - TCoP Point 13 (Responsible AI use)
Resources
- AI Playbook:
- ATRS:

Example High-Risk AI Workflow
1. Discovery/Alpha:
- Use /arckit.ai-playbook to assess viability
- Identify HIGH-RISK classification
- Complete DPIA, EqIA, Human Rights assessment
- Design human-in-the-loop process
2. Beta:
- Implement all AI Playbook principles
- Conduct bias testing across protected characteristics
- Complete security assessment (red teaming)
- Draft ATRS record using /arckit.atrs
- Re-run /arckit.ai-playbook - must score ≥90%
3. Pre-Live:
- AI Governance Board approval
- Senior leadership sign-off
- Publish ATRS on GOV.UK
- User communication (how to contest decisions)
4. Live:
- Quarterly audits
- Continuous bias monitoring
- Incident response capability
- Annual ATRS updates