GDS Service Standard Assessment Preparation
The `/arckit.service-assessment` command helps UK Government teams prepare for GDS Service Standard assessments by analyzing existing ArcKit artifacts as evidence against all 14 points.
What is the Service Standard?
The GDS Service Standard is a set of 14 criteria that UK Government services must meet to pass alpha, beta, and live assessments. All public-facing services must pass these assessments before launch.

Command: `/arckit.service-assessment`
Usage
- `PHASE` (required): `alpha`, `beta`, or `live` - the assessment phase to prepare for
- `DATE` (optional): `YYYY-MM-DD` - planned assessment date for timeline calculations
Output: `ARC-{PROJECT_ID}-SVCASS-v1.0.md`
Generates a comprehensive Service Standard assessment preparation report.
The 14-Point Service Standard
The assessment analyzes evidence for all 14 points:

Section 1: Meeting Users' Needs
1. Understand users and their needs - Research with diverse users
2. Solve a whole problem for users - End-to-end user journeys
3. Provide a joined up experience across all channels - Multi-channel consistency
4. Make the service simple to use - Usability testing and task completion
5. Make sure everyone can use the service - WCAG 2.1 AA accessibility
Section 2: Providing a Good Service
6. Have a multidisciplinary team - Sustainable team with right skills
7. Use agile ways of working - Iterative delivery and continuous improvement
8. Iterate and improve frequently - Regular releases based on feedback
9. Create a secure service which protects users' privacy - Security and GDPR compliance
10. Define what success looks like and publish performance data - KPIs and metrics
Section 3: Using the Right Technology
11. Choose the right tools and technology - Build vs buy analysis
12. Make new source code open - Open source by default
13. Use and contribute to open standards, common components and patterns - GOV.UK Design System
14. Operate a reliable service - Uptime, monitoring, incident response
Evidence Mapping
The command automatically maps ArcKit artifacts to Service Standard points:

| Service Standard Point | ArcKit Artifacts | Evidence Types |
|---|---|---|
| 1. Understand users | ARC-*-STKE-*.md, ARC-*-REQ-*.md | User research, personas, user stories |
| 2. Solve whole problem | ARC-*-REQ-*.md, wardley-maps/ | End-to-end journeys, integration points |
| 3. Joined up experience | ARC-*-REQ-*.md, diagrams/ | Multi-channel requirements, data consistency |
| 4. Simple to use | ARC-*-REQ-*.md, reviews/ARC-*-HLDR-*.md | Usability NFRs, design review |
| 5. Everyone can use | ARC-*-REQ-*.md, ARC-*-SECD-*.md | WCAG 2.1 AA requirements, accessibility testing |
| 6. Multidisciplinary team | ARC-*-STKE-*.md, ARC-*-PLAN-*.md | RACI matrix, team structure |
| 7. Agile ways | ARC-*-PLAN-*.md | Sprint structure, ceremonies |
| 8. Iterate frequently | reviews/ARC-*-HLDR-*.md, reviews/ARC-*-DLDR-*.md | Design iterations, version history |
| 9. Secure and private | ARC-*-SECD-*.md, ARC-*-DATA-*.md, ARC-*-DPIA-*.md | Security controls, GDPR compliance |
| 10. Success metrics | ARC-*-REQ-*.md, ARC-*-SOBC-*.md | KPIs, benefits realization |
| 11. Right tools | research/, wardley-maps/, ARC-*-TCOP-*.md | Technology research, build vs buy |
| 12. Open source | reviews/ARC-*-HLDR-*.md, ARC-*-TCOP-*.md | Repository links, licensing |
| 13. Open standards | ARC-*-TCOP-*.md, reviews/ARC-*-HLDR-*.md | GOV.UK Design System, API standards |
| 14. Reliable service | ARC-*-REQ-*.md, reviews/ARC-*-HLDR-*.md | Availability NFRs, resilience architecture |
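The patterns in the table are glob-friendly, so the evidence scan can be sketched in a few lines. This is an assumption about how the command works, shown for two points only; `EVIDENCE_PATTERNS` and `find_evidence` are hypothetical names:

```python
from pathlib import Path

# Hypothetical subset of the point-to-artifact mapping from the table above;
# the real command presumably covers all 14 points.
EVIDENCE_PATTERNS = {
    1: ["ARC-*-STKE-*.md", "ARC-*-REQ-*.md"],
    9: ["ARC-*-SECD-*.md", "ARC-*-DATA-*.md", "ARC-*-DPIA-*.md"],
}

def find_evidence(root: str) -> dict[int, list[str]]:
    """Return the artifact files found for each Service Standard point."""
    base = Path(root)
    return {
        point: sorted(str(p) for pattern in patterns for p in base.rglob(pattern))
        for point, patterns in EVIDENCE_PATTERNS.items()
    }
```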
RAG Ratings
Each point receives a RAG (Red/Amber/Green) rating:

- 🟢 Green (Ready): All critical evidence found, no significant gaps, ready for assessment
- 🟡 Amber (Partial): Some evidence but gaps remain, 1-2 weeks to address
- 🔴 Red (Not Ready): Critical evidence missing, 3+ weeks of work required
The overall readiness rating rolls up the 14 per-point ratings:

- 🟢 Green: 12+ points Green, max 2 Amber, 0 Red
- 🟡 Amber: 10+ points Green/Amber, max 2 Red
- 🔴 Red: More than 2 Red points
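The roll-up thresholds above can be expressed directly; `overall_rag` is a hypothetical helper, not ArcKit's actual implementation:

```python
def overall_rag(ratings: list[str]) -> str:
    """Roll up the 14 per-point RAG ratings using the thresholds above."""
    assert len(ratings) == 14
    green = ratings.count("green")
    amber = ratings.count("amber")
    red = ratings.count("red")
    if green >= 12 and amber <= 2 and red == 0:
        return "green"
    if green + amber >= 10 and red <= 2:
        return "amber"
    return "red"

print(overall_rag(["green"] * 12 + ["amber"] * 2))  # green
```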
Report Contents
The assessment preparation report includes:

1. Executive Summary
- Overall readiness rating and score (X/14)
- Critical gaps requiring immediate action
- Key strengths to showcase
- Recommended timeline
2. Detailed Assessment (for each of the 14 points)
- Status: 🟢/🟡/🔴
- What this point means and why it matters
- Evidence required for this phase
- Evidence found in ArcKit artifacts (with file references)
- Gap analysis
- Recommendations (Critical/High/Medium priority)
- Assessment day guidance
3. Evidence Inventory
- Complete traceability: Service Standard Point → ArcKit Artifacts
- Status and critical gaps table
4. Assessment Preparation Checklist
- Critical actions (0-2 weeks)
- High priority actions (2-4 weeks)
- Medium priority actions (4+ weeks)
5. Assessment Day Preparation
- Timeline and booking guidance
- Documentation to share with panel (1 week before)
- Who should attend (core team and phase-specific roles)
- Show and tell structure (4-hour timeline)
- Tips for success
- Materials to have ready
6. After the Assessment
- If you pass (Green)
- If you get Amber (tracking amber evidence process)
- If you fail (Red) (remediation plan)
Phase-Appropriate Criteria
Alpha Assessment - focus on demonstrating viability:

- Lower bar for operational evidence (monitoring, performance data)
- Higher bar for user research and prototyping
- Critical: User testing, team composition, technology viability
- Optional: Full accessibility audit, published performance data

Beta Assessment:

- Higher bar for everything
- Critical: Working service, security testing, accessibility compliance, performance monitoring
- All 14 points must be addressed substantively

Live Assessment:

- Highest bar, operational excellence expected
- Critical: Published performance data, user satisfaction, continuous improvement
- Evidence of service evolution based on user feedback
Example Evidence Requirements
Point 1: Understand Users (Alpha)
✅ Required:

- User needs documented from research
- User groups and personas identified
- Prototype testing results with real users
- Evidence of research with diverse user groups

❌ Not required:

- Analytics data (not expected at alpha)
Point 5: Accessibility (Beta)
✅ Critical:

- WCAG 2.1 AA audit completed and passed
- Testing with screen readers, voice control, magnification
- Testing with disabled users
- Accessibility statement published
Point 10: Performance Data (Live)
✅ Mandatory:

- Performance data published on GOV.UK
- 4 mandatory KPIs: cost per transaction, user satisfaction, completion rate, digital take-up
- Data updated regularly (at least quarterly)
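The mandatory KPIs are simple ratios over cost and transaction data. A sketch of two of them, using invented figures:

```python
def digital_take_up(digital_transactions: int, total_transactions: int) -> float:
    """Share of transactions completed through the digital channel."""
    return digital_transactions / total_transactions

def cost_per_transaction(total_service_cost: float, total_transactions: int) -> float:
    """Total cost of running the service divided by transactions handled."""
    return total_service_cost / total_transactions

# Invented figures for illustration only
print(f"{digital_take_up(8200, 10000):.0%}")         # 82%
print(f"£{cost_per_transaction(41000, 10000):.2f}")  # £4.10
```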
Integration with Other Commands
The assessment preparation works best when combined with:

- `/arckit.tcop` - Technology Code of Practice assessment (Points 11 and 13 overlap)
- `/arckit.analyze` - Comprehensive governance quality analysis
- `/arckit.traceability` - Requirements traceability matrix
- `/arckit.secure` - Security assessment (Point 9 evidence)
- `/arckit.requirements` - If user stories or NFRs are weak
- `/arckit.hld-review` - Architecture decisions (Point 11)
Resources
Official GDS Guidance:

- Service Standard - All 14 points explained
- What happens at a service assessment
- Book a service assessment
- Service Standard Reports - Browse 450+ published assessment reports
Tips for Assessment Day
Do:

- ✅ Show real work, not polished presentations
- ✅ Have doers present their work
- ✅ Be honest about unknowns
- ✅ Explain your problem-solving approach
- ✅ Demonstrate iteration based on learning
- ✅ Reference ArcKit artifacts by name
Don't:

- ❌ Over-prepare presentations (the panel wants artifacts)
- ❌ Hide problems or pretend everything is perfect
- ❌ Use jargon or assume the panel knows your context
- ❌ Let senior leaders dominate (the panel wants doers)
- ❌ Argue with panel feedback
- ❌ Rush through - the panel will interrupt with questions
Re-running for Progress Tracking
Re-run the command weekly as you address gaps. Each run will:

- Detect new evidence added since the last run
- Update RAG ratings based on current state
- Recalculate overall readiness score
- Adjust recommendations based on progress
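Comparing two runs' per-point ratings is enough to show week-on-week progress. A sketch, assuming the report exposes ratings as a point-to-RAG mapping (`rag_progress` is a hypothetical helper):

```python
def rag_progress(previous: dict[int, str], current: dict[int, str]) -> dict[str, list[int]]:
    """List the Service Standard points that improved or regressed between runs."""
    order = {"red": 0, "amber": 1, "green": 2}  # higher is better
    improved = [p for p in current if order[current[p]] > order[previous.get(p, current[p])]]
    regressed = [p for p in current if order[current[p]] < order[previous.get(p, current[p])]]
    return {"improved": improved, "regressed": regressed}

print(rag_progress({1: "amber", 5: "red"}, {1: "green", 5: "red"}))
# {'improved': [1], 'regressed': []}
```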