JSP 936 AI Assurance Documentation
Generate comprehensive JSP 936 compliance documentation for AI/ML systems in UK Ministry of Defence projects.

Command
Arguments
- system (required): AI system name with classification (e.g., “Target Recognition AI OFFICIAL-SENSITIVE”)
Examples
About JSP 936
JSP 936 - Dependable Artificial Intelligence (AI) in Defence is the UK Ministry of Defence’s principal policy framework for safe and responsible AI adoption. Published in November 2024, it establishes:
- 5 Ethical Principles: Human-Centricity, Responsibility, Understanding, Bias & Harm Mitigation, Reliability
- 5 Risk Classification Levels: Critical, Severe, Major, Moderate, Minor
- 8 AI Lifecycle Phases: Planning, Requirements, Architecture, Algorithm Design, Model Development, Verification & Validation, Integration & Use, Quality Assurance
- Governance Structure: RAISOs (Responsible AI Senior Officers), Ethics Managers, Independent Assurance
- Approval Pathways: Ministerial (2PUS) → Defence-Level (JROC/IAC) → TLB-Level
The 5 Ethical Principles
Principle 1: Human-Centricity
Requirement: Assess and consider the impact of AI on humans, ensuring positive effects outweigh negatives. Document:
- Human impact analysis (operators, civilians, decision-makers)
- Positive and negative effects
- Human-AI interaction design
- Stakeholder engagement
Principle 2: Responsibility
Requirement: Ensure meaningful human control and clear accountability. Document:
- Accountability mapping (roles and responsibilities)
- Meaningful human control (human-in-the-loop, on-the-loop, out-of-the-loop)
- Decision authority
- Override mechanisms
Principle 3: Understanding
Requirement: Relevant personnel must understand how AI systems function and interpret outputs. Document:
- Explainability requirements
- Training programme (AI literacy, system-specific)
- Documentation (model cards, operating procedures)
- Performance boundaries
Principle 4: Bias and Harm Mitigation
Requirement: Proactively identify and reduce unintended biases and negative consequences. Document:
- Bias assessment (training data, performance disparities)
- Harm identification (direct, indirect, systemic)
- Mitigation strategies
- Continuous monitoring
Principle 5: Reliability
Requirement: Demonstrate robust, secure performance across operational contexts. Document:
- Performance bounds (design domain, metrics)
- Robustness testing (adversarial resilience, graceful degradation)
- Security measures (AI-specific threats)
- Failure modes and effects analysis (FMEA)
Risk Classification Matrix
Calculate: Risk Score = Likelihood × Impact

| Score | Classification | Approval Pathway |
|---|---|---|
| 20-25 | Critical | 2PUS or Ministers |
| 15-19 | Severe | Defence-Level (JROC/IAC) |
| 10-14 | Major | Defence-Level (JROC/IAC) |
| 5-9 | Moderate | TLB-Level (delegated) |
| 1-4 | Minor | TLB-Level (delegated) |
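The matrix above can be sketched as a simple threshold lookup. This is an illustration only, not part of the arckit implementation; it assumes `likelihood` and `impact` are integers from 1 to 5, as in a standard 5×5 risk matrix:

```python
def classify_risk(likelihood: int, impact: int) -> tuple[int, str, str]:
    """Map a 5x5 likelihood x impact score to a JSP 936 risk
    classification and approval pathway, per the matrix above."""
    if not (1 <= likelihood <= 5 and 1 <= impact <= 5):
        raise ValueError("likelihood and impact must each be 1-5")
    score = likelihood * impact
    if score >= 20:
        return score, "Critical", "2PUS or Ministers"
    if score >= 15:
        return score, "Severe", "Defence-Level (JROC/IAC)"
    if score >= 10:
        return score, "Major", "Defence-Level (JROC/IAC)"
    if score >= 5:
        return score, "Moderate", "TLB-Level (delegated)"
    return score, "Minor", "TLB-Level (delegated)"


# Example: likely (4) and catastrophic (5) scores 20 -> Critical
print(classify_risk(4, 5))  # (20, 'Critical', '2PUS or Ministers')
```

Because the two inputs are capped at 5, the maximum score is 25 and every product falls into exactly one band.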
The 8 AI Lifecycle Phases
1. Planning - AI strategy, algorithm development roadmap, data governance
2. Requirements - Performance specifications with hazard analysis
3. Architecture - System architecture with traceability and failure protections
4. Algorithm Design - Algorithm decisions with verification methods
5. Model Development - Train and evaluate model with risk understanding
6. Verification & Validation - Demonstrate performance across realistic scenarios
7. Integration & Use - Deploy with monitoring and human oversight
8. Quality Assurance - Independent assessment and continuous improvement
Output
Generates ARC-{PROJECT_ID}-JSP936-v{VERSION}.md with:
- Executive summary with risk classification and approval pathway
- AI component discovery and mapping
- Ethical risk assessment (5×5 matrix)
- Detailed assessment for all 5 ethical principles
- Lifecycle phase documentation (all 8 phases)
- Performance metrics and testing results
- Deployment readiness assessment
- Independent assurance recommendations
Prerequisites
MANDATORY (warn if missing):
- PRIN (Architecture Principles) - AI governance standards, defence tech constraints
- REQ (Requirements) - AI/ML-related functional and non-functional requirements (security, safety), data requirements
- RISK (Risk Register) - AI safety risks, operational risks
- AIPB (AI Playbook) - Risk level, human oversight model (if civilian interface)
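A prerequisite gate like the one above might be sketched as follows. The artefact codes come from this document, but the file-naming convention (`<CODE>*.md` in a project directory) and lookup logic are assumptions for illustration, not the actual arckit behaviour:

```python
from pathlib import Path

# Mandatory artefacts per the list above (code -> description)
MANDATORY_ARTEFACTS = {
    "PRIN": "Architecture Principles",
    "REQ": "Requirements",
    "RISK": "Risk Register",
    "AIPB": "AI Playbook",
}


def check_prerequisites(project_dir: str) -> list[str]:
    """Return a warning (rather than failing) for each mandatory
    artefact missing from the project directory. Assumes artefacts
    are markdown files whose names start with their code."""
    root = Path(project_dir)
    warnings = []
    for code, desc in MANDATORY_ARTEFACTS.items():
        if not any(root.glob(f"{code}*.md")):
            warnings.append(f"WARNING: missing {code} ({desc})")
    return warnings
```

The command warns rather than aborts here because a partial assessment can still be generated and the gaps flagged in the output document.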
Unacceptable Risk Criteria
STOP IMMEDIATELY if:
- Significant negative impacts are imminent
- Severe harms are occurring
- Catastrophic risks are present
- The system is behaving outside acceptable bounds
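The stop conditions above amount to a hard gate evaluated before any documentation is generated: if any criterion holds, processing halts. As a minimal sketch (the flag names are illustrative, not taken from JSP 936 or arckit):

```python
from dataclasses import dataclass


@dataclass
class RiskSignals:
    """Illustrative flags for the unacceptable-risk criteria above."""
    imminent_negative_impacts: bool = False
    severe_harms_occurring: bool = False
    catastrophic_risks_present: bool = False
    outside_acceptable_bounds: bool = False


def must_stop(signals: RiskSignals) -> bool:
    """Return True if ANY unacceptable-risk criterion is met."""
    return any(vars(signals).values())
```

Note the OR semantics: a single triggered criterion is sufficient to stop, regardless of the overall risk score.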
AI Component Types
The command identifies and documents:
- Machine Learning Models (supervised, unsupervised, reinforcement, deep learning)
- AI Algorithms (decision trees, SVMs, Bayesian networks, expert systems)
- Autonomous Systems (vehicles, drones, robotic systems)
- Decision Support Systems (recommendation engines, risk assessment, predictive analytics)
- Natural Language Processing (chatbots, text classification, translation)
- Computer Vision (object detection, face recognition, image classification)
- Generative AI (LLMs, image generation, synthetic data)
Related Commands
- arckit mod-secure - MOD Secure by Design (complementary security assessment)
- arckit ai-playbook - UK Government AI Playbook (for civilian-facing components)
- arckit atrs - ATRS record (if system has civilian interface)
Resources
- JSP 936: Dependable Artificial Intelligence in Defence
- MOD Responsible AI Strategy
- Defence AI Centre (DAIC)
- Defence Science and Technology Laboratory (Dstl) AI Guidance