JSP 936 AI Assurance Documentation

Generate comprehensive JSP 936 compliance documentation for AI/ML systems in UK Ministry of Defence projects.

Command

arckit jsp-936 <system>

Arguments

  • system (required): AI system name with classification (e.g., "Target Recognition AI OFFICIAL-SENSITIVE")

Examples

arckit jsp-936 "Target Recognition AI OFFICIAL-SENSITIVE"
arckit jsp-936 "Logistics Prediction System OFFICIAL"

About JSP 936

JSP 936 - Dependable Artificial Intelligence (AI) in Defence is the UK Ministry of Defence’s principal policy framework for safe and responsible AI adoption. Published November 2024, it establishes:
  • 5 Ethical Principles: Human-Centricity, Responsibility, Understanding, Bias & Harm Mitigation, Reliability
  • 5 Risk Classification Levels: Critical, Severe, Major, Moderate, Minor
  • 8 AI Lifecycle Phases: Planning, Requirements, Architecture, Algorithm Design, Model Development, Verification & Validation, Integration & Use, Quality Assurance
  • Governance Structure: RAISOs (Responsible AI Senior Officers), Ethics Managers, Independent Assurance
  • Approval Pathways: Ministerial (2PUS) → Defence-Level (JROC/IAC) → TLB-Level

The 5 Ethical Principles

Principle 1: Human-Centricity

Requirement: Assess and consider the impact of AI on humans, ensuring positive effects outweigh negatives. Document:
  • Human impact analysis (operators, civilians, decision-makers)
  • Positive and negative effects
  • Human-AI interaction design
  • Stakeholder engagement

Principle 2: Responsibility

Requirement: Ensure meaningful human control and clear accountability. Document:
  • Accountability mapping (roles and responsibilities)
  • Meaningful human control (in-loop, on-loop, out-of-loop)
  • Decision authority
  • Override mechanisms

Principle 3: Understanding

Requirement: Relevant personnel must understand how AI systems function and interpret outputs. Document:
  • Explainability requirements
  • Training programme (AI literacy, system-specific)
  • Documentation (model cards, operating procedures)
  • Performance boundaries

Principle 4: Bias and Harm Mitigation

Requirement: Proactively identify and reduce unintended biases and negative consequences. Document:
  • Bias assessment (training data, performance disparities)
  • Harm identification (direct, indirect, systemic)
  • Mitigation strategies
  • Continuous monitoring

Principle 5: Reliability

Requirement: Demonstrate robust, secure performance across operational contexts. Document:
  • Performance bounds (design domain, metrics)
  • Robustness testing (adversarial resilience, graceful degradation)
  • Security measures (AI-specific threats)
  • Failure modes and effects analysis (FMEA)

Risk Classification Matrix

Calculate: Risk Score = Likelihood × Impact (likelihood and impact each rated 1-5)
| Score | Classification | Approval Pathway         |
|-------|----------------|--------------------------|
| 20-25 | Critical       | 2PUS or Ministers        |
| 15-19 | Severe         | Defence-Level (JROC/IAC) |
| 10-14 | Major          | Defence-Level (JROC/IAC) |
| 5-9   | Moderate       | TLB-Level (delegated)    |
| 1-4   | Minor          | TLB-Level (delegated)    |
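The mapping above is mechanical once likelihood and impact are rated. A minimal Python sketch (the function name and return shape are illustrative, not part of arckit):

```python
def classify_risk(likelihood: int, impact: int) -> tuple[int, str, str]:
    """Map a 5x5 likelihood/impact rating to a JSP 936 risk
    classification and its approval pathway."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be rated 1-5")
    score = likelihood * impact
    # Thresholds follow the classification matrix above.
    if score >= 20:
        return score, "Critical", "2PUS or Ministers"
    if score >= 15:
        return score, "Severe", "Defence-Level (JROC/IAC)"
    if score >= 10:
        return score, "Major", "Defence-Level (JROC/IAC)"
    if score >= 5:
        return score, "Moderate", "TLB-Level (delegated)"
    return score, "Minor", "TLB-Level (delegated)"
```

For example, a likelihood of 4 and impact of 4 gives a score of 16, a Severe classification, and a Defence-Level approval pathway.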

The 8 AI Lifecycle Phases

  1. Planning - AI strategy, algorithm development roadmap, data governance
  2. Requirements - Performance specifications with hazard analysis
  3. Architecture - System architecture with traceability and failure protections
  4. Algorithm Design - Algorithm decisions with verification methods
  5. Model Development - Train and evaluate model with risk understanding
  6. Verification & Validation - Demonstrate performance across realistic scenarios
  7. Integration & Use - Deploy with monitoring and human oversight
  8. Quality Assurance - Independent assessment and continuous improvement

Output

Generates ARC-{PROJECT_ID}-JSP936-v{VERSION}.md with:
  • Executive summary with risk classification and approval pathway
  • AI component discovery and mapping
  • Ethical risk assessment (5×5 matrix)
  • Detailed assessment for all 5 ethical principles
  • Lifecycle phase documentation (all 8 phases)
  • Performance metrics and testing results
  • Deployment readiness assessment
  • Independent assurance recommendations

Prerequisites

MANDATORY (warn if missing):
  • PRIN (Architecture Principles) - AI governance standards, defence tech constraints
  • REQ (Requirements) - AI/ML-related FR, NFR (security, safety), data requirements
RECOMMENDED (read if available):
  • RISK (Risk Register) - AI safety risks, operational risks
  • AIPB (AI Playbook) - Risk level, human oversight model (if civilian interface)
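A pre-flight check for these artefacts could look like the following sketch. The assumption that each artefact's code (PRIN, REQ, RISK, AIPB) appears in its filename, and the flat project-directory layout, are illustrative conventions, not defined by arckit or JSP 936:

```python
from pathlib import Path

MANDATORY = ["PRIN", "REQ"]      # warn if missing
RECOMMENDED = ["RISK", "AIPB"]   # read if available

def check_prerequisites(project_dir: str) -> list[str]:
    """Return warning messages for prerequisite artefacts that cannot
    be found in project_dir (filename convention is an assumption)."""
    names = [p.name for p in Path(project_dir).glob("*.md")]
    warnings = []
    for code in MANDATORY:
        if not any(code in n for n in names):
            warnings.append(f"MANDATORY artefact missing: {code}")
    for code in RECOMMENDED:
        if not any(code in n for n in names):
            warnings.append(f"RECOMMENDED artefact not found: {code}")
    return warnings
```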

Unacceptable Risk Criteria

STOP IMMEDIATELY if:
  • Significant negative impacts are imminent
  • Severe harms are occurring
  • Catastrophic risks present
  • System behaving outside acceptable bounds

AI Component Types

The command identifies and documents:
  • Machine Learning Models (supervised, unsupervised, reinforcement, deep learning)
  • AI Algorithms (decision trees, SVMs, Bayesian networks, expert systems)
  • Autonomous Systems (vehicles, drones, robotic systems)
  • Decision Support Systems (recommendation engines, risk assessment, predictive analytics)
  • Natural Language Processing (chatbots, text classification, translation)
  • Computer Vision (object detection, face recognition, image classification)
  • Generative AI (LLMs, image generation, synthetic data)

Related Commands

  • arckit mod-secure - MOD Secure by Design (complementary security assessment)
  • arckit ai-playbook - UK Government AI Playbook (for civilian-facing components)
  • arckit atrs - ATRS record (if system has civilian interface)

Resources

  • JSP 936: Dependable Artificial Intelligence in Defence
  • MOD Responsible AI Strategy
  • Defence AI Centre (DAIC)
  • Defence Science and Technology Laboratory (Dstl) AI Guidance
