Algorithmic Transparency Recording Standard (ATRS)

Generate an ATRS record for AI or algorithmic tools used in UK government, following the two-tier standard for transparency.

Command

arckit atrs <AI tool name>

Arguments

  • tool (required): AI tool or algorithmic system name

Examples

arckit atrs "Benefit Eligibility Scoring Model"
arckit atrs "Fraud Detection Algorithm"

Purpose

ATRS is MANDATORY for all central government departments and arm’s length bodies using AI or algorithmic tools. The two-tier structure provides:
  • Tier 1: A plain-English summary aimed at the general public (clear, jargon-free)
  • Tier 2: Detailed technical information for specialists

ATRS Requirements

Mandatory for:
  • All central government departments
  • Arm’s length bodies
  • Algorithmic tools used in decision-making
  • AI systems affecting citizens
Publication: completed records are published on the GOV.UK algorithmic transparency repository, and may also be published on the department's own website.

Tier 1 - Summary Information (Public)

Key Fields:
  • Name: Tool identifier
  • Description: 1-2 sentence plain English summary
  • Website URL: Link to more information
  • Contact Email: Public contact
  • Organisation: Department/agency name
  • Function: Area (benefits, healthcare, policing, etc.)
  • Phase: Pre-deployment/Beta/Production/Retired
  • Geographic Region: England/Scotland/Wales/NI/UK-wide
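The Tier 1 fields above can be pictured as a simple record. The sketch below is illustrative only: the key names and the completeness check are assumptions, not the exact schema `arckit` emits.

```python
# Illustrative Tier 1 summary record (key names are indicative,
# not the exact ATRS schema fields).
tier1 = {
    "name": "Benefit Eligibility Scoring Model",
    "description": "Scores benefit applications to prioritise manual review.",
    "website_url": "https://example.gov.uk/eligibility-scoring",
    "contact_email": "transparency@example.gov.uk",
    "organisation": "Department for Example Affairs",
    "function": "benefits",
    "phase": "Production",
    "region": "England",
}

# A record is only publishable at Tier 1 when every field is filled in.
missing = [key for key, value in tier1.items() if not value]
assert not missing, f"Tier 1 fields missing: {missing}"
```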

Tier 2 - Detailed Information (Specialists)

Section 1: Owner and Responsibility

  • Organisation and team
  • Senior Responsible Owner (name, role, accountability)
  • External suppliers (names, Companies House numbers, roles)
  • Procurement procedure type
  • Data access terms for suppliers

Section 2: Description and Rationale

  • Detailed technical description
  • Algorithm type (rule-based, ML, generative AI, etc.)
  • AI model details (provider, version, fine-tuning)
  • Scope and boundaries
  • Benefits and impact metrics
  • Alternatives considered

Section 3: Decision-Making Process

  • Process integration (role in workflow)
  • Provided information (outputs and format)
  • Frequency and scale of usage
  • Human decisions and review
  • Required training for staff
  • Appeals and contestability

Section 4: Data

  • Data sources (types, origins, fields used)
  • Personal data and special category data
  • Data sharing arrangements
  • Data quality and maintenance
  • Data storage location and security
  • Encryption, access controls, audit logging

Section 5: Impact Assessments

  • DPIA status, date, outcome, risks
  • EqIA: Protected characteristics, impacts, mitigations
  • Human Rights Assessment
  • Other assessments (environmental, accessibility, security)

Section 6: Fairness, Bias, and Discrimination

  • Bias testing completed (methodology, date)
  • Fairness metrics (demographic parity, equalized odds, etc.)
  • Results by protected characteristic
  • Known limitations and biases
  • Ongoing bias monitoring
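One of the fairness metrics listed above, demographic parity, measures the gap in positive-outcome rates between groups. A minimal generic sketch (this is not arckit's implementation):

```python
def demographic_parity_gap(outcomes, groups):
    """Difference between the highest and lowest positive-outcome rate
    across groups. 0.0 means perfect demographic parity."""
    counts = {}
    for outcome, group in zip(outcomes, groups):
        positives, total = counts.get(group, (0, 0))
        counts[group] = (positives + (1 if outcome else 0), total + 1)
    rates = {g: pos / total for g, (pos, total) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Toy example: group A approved 3/4 of the time, group B 1/4.
outcomes = [1, 1, 1, 0, 1, 0, 0, 0]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]
gap = demographic_parity_gap(outcomes, groups)  # 0.75 - 0.25 = 0.5
```

In practice this check would be repeated for each protected characteristic and tracked over time as part of ongoing bias monitoring.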

Section 7: Technical Details

  • Model performance metrics (accuracy, precision, recall, F1)
  • Performance by demographic group
  • Model explainability approach
  • Model versioning and change management
  • Retraining schedule
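Section 7 asks for standard classification metrics reported both overall and per demographic group. A self-contained sketch of how those might be computed (function names are assumptions, not part of arckit):

```python
def prf1(y_true, y_pred):
    """Precision, recall and F1 for binary labels (1 = positive class)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

def metrics_by_group(y_true, y_pred, groups):
    """The same metrics broken down per demographic group."""
    result = {}
    for g in set(groups):
        idx = [i for i, x in enumerate(groups) if x == g]
        result[g] = prf1([y_true[i] for i in idx], [y_pred[i] for i in idx])
    return result
```

Comparing the per-group results against the overall figures is what surfaces the performance disparities that Sections 6 and 7 ask you to document.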

Section 8: Testing and Assurance

  • Testing approach (unit, integration, UAT, A/B, red teaming)
  • Edge cases and failure modes
  • Fallback procedures
  • Security testing (pen testing, AI-specific threats)
  • Independent assurance and external audit

Section 9: Transparency and Explainability

  • Public disclosure (website, GOV.UK, model card)
  • User communication
  • Information provided to users
  • Model card published

Section 10: Governance and Oversight

  • Governance structure
  • Risk register and top risks
  • Incident management
  • Audit trail

Section 11: Compliance

  • Legal basis (primary legislation, regulatory compliance)
  • Data protection (controller, DPO, ICO registration)
  • Standards compliance (TCoP, GDS Service Standard, Data Ethics Framework)
  • Procurement compliance

Section 12: Performance and Outcomes

  • Success metrics and KPIs
  • Benefits realised (with evidence)
  • User feedback and satisfaction
  • Continuous improvement log

Section 13: Review and Updates

  • Review schedule (frequency, next review date)
  • Triggers for unscheduled review
  • Version history
  • Contact for updates

Output

Generates ARC-{PROJECT_ID}-ATRS-v{VERSION}.md with:
  • Complete Tier 1 and Tier 2 sections
  • Completeness summary (percentage of fields complete)
  • Blocking issues list (must resolve before publication)
  • Warnings (should address)
  • Publication guidance
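The completeness summary and blocking-issues list described above might be computed along these lines; the function name, field keys and mandatory-field list here are assumptions for illustration, not arckit internals.

```python
def atrs_summary(project_id, version, fields, mandatory):
    """Output filename, completeness percentage, and blocking issues
    (mandatory fields still empty). Names are illustrative."""
    filename = f"ARC-{project_id}-ATRS-v{version}.md"
    complete = sum(1 for key in fields if fields[key])
    pct = round(100 * complete / len(fields))
    blocking = [key for key in mandatory if not fields.get(key)]
    return filename, pct, blocking

fields = {"name": "Fraud Detection Algorithm",
          "description": "Flags anomalous claims for manual review.",
          "dpia_status": "",            # not yet completed
          "sro": "J. Smith"}
name, pct, blocking = atrs_summary("042", "1.0", fields,
                                   mandatory=["name", "dpia_status"])
# name == "ARC-042-ATRS-v1.0.md"; pct == 75; blocking == ["dpia_status"]
```

Anything in the blocking list must be resolved before the record is published; warnings should be addressed but do not block publication.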

Risk-Appropriate Guidance

For HIGH-RISK tools

  • DPIA is MANDATORY before deployment
  • EqIA is MANDATORY
  • Human-in-the-loop STRONGLY RECOMMENDED
  • Bias testing across ALL protected characteristics REQUIRED
  • ATRS publication on GOV.UK MANDATORY
  • Quarterly reviews RECOMMENDED
  • Independent audit STRONGLY RECOMMENDED

For MEDIUM-RISK tools

  • DPIA likely required
  • EqIA recommended
  • Human oversight required (human-on-the-loop minimum)
  • Bias testing recommended
  • ATRS publication MANDATORY
  • Annual reviews

For LOW-RISK tools

  • DPIA assessment (may determine not required)
  • Basic fairness checks
  • Human oversight recommended
  • ATRS publication MANDATORY
  • Periodic reviews
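The three risk tiers above amount to a lookup from risk level to assurance activities. A condensed sketch with assumed key names (this is not a data structure arckit reads):

```python
# Assurance activities by risk level, condensed from the guidance above.
RISK_GUIDANCE = {
    "high":   {"dpia": "mandatory before deployment", "eqia": "mandatory",
               "human_oversight": "human-in-the-loop (strongly recommended)",
               "review_cycle": "quarterly"},
    "medium": {"dpia": "likely required", "eqia": "recommended",
               "human_oversight": "human-on-the-loop minimum",
               "review_cycle": "annual"},
    "low":    {"dpia": "assess (may determine not required)",
               "eqia": "basic fairness checks",
               "human_oversight": "recommended",
               "review_cycle": "periodic"},
}

def guidance_for(risk_level):
    guidance = dict(RISK_GUIDANCE[risk_level])
    guidance["atrs_publication"] = "mandatory"  # mandatory at every level
    return guidance
```

Note that ATRS publication is the one constant: it is mandatory regardless of risk level.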

Prerequisites

MANDATORY (warn if missing):
  • PRIN (Architecture Principles) - AI governance standards
  • REQ (Requirements) - AI/ML-related requirements
RECOMMENDED (read if available):
  • AIPB (AI Playbook Assessment) - Risk level, human oversight model, ethical assessment

Publication Process

After generating the ATRS record:
  1. Complete missing mandatory fields
  2. Get SRO approval
  3. Legal/compliance review
  4. DPO review
  5. Publish on GOV.UK ATRS repository
  6. Publish on department website
  7. Set review date

Related Commands

  • arckit ai-playbook - AI Playbook assessment (run first for AI systems)
  • arckit dpia - Data Protection Impact Assessment
  • arckit tcop - Technology Code of Practice
