
Healthcare AI Security

How a healthcare provider secured its AI medical assistant while maintaining HIPAA compliance.

Challenge

A hospital network deployed an AI assistant to help doctors with:
  • Patient history summarization
  • Differential diagnosis suggestions
  • Medical literature references
  • Treatment plan recommendations
Critical Requirements
  • HIPAA compliance for all patient data
  • Zero tolerance for data leakage
  • High accuracy (medical decisions at stake)
  • Audit trail for all AI interactions

Solution

import { Koreshield } from 'koreshield-sdk';
import OpenAI from 'openai';

// auditLog, getPatientContext, generateAuditId, and hashQuery are
// application-provided helpers.
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

const koreshield = new Koreshield({
  apiKey: process.env.KORESHIELD_API_KEY,
  sensitivity: 'high',
  complianceMode: 'hipaa',
});

async function secureMedicalQuery(
  doctorId: string,
  patientId: string,
  query: string
) {
  // Scan query for prompt injection
  const scan = await koreshield.scan({
    content: query,
    userId: doctorId,
    metadata: {
      patientId,
      department: 'emergency',
      complianceLevel: 'hipaa',
    },
  });

  if (scan.threat_detected) {
    await auditLog.create({
      doctorId,
      patientId,
      action: 'QUERY_BLOCKED',
      reason: scan.threat_type,
      timestamp: new Date(),
    });

    return {
      error: 'Security threat detected in query',
      auditId: await generateAuditId(),
    };
  }

  // Retrieve patient context with access control
  const patientContext = await getPatientContext(patientId, doctorId);

  // Generate medical response
  const response = await openai.chat.completions.create({
    model: 'gpt-4',
    messages: [
      {
        role: 'system',
        content: `You are a medical AI assistant. 
CRITICAL RULES:
- Only reference THIS patient's data (ID: ${patientId})
- Do not diagnose - provide differential suggestions only
- Always recommend consulting specialists
- Cite medical literature when possible
- Flag contradictions or drug interactions`,
      },
      {
        role: 'user',
        content: `Patient Context:\n${patientContext}\n\nQuery: ${query}`,
      },
    ],
    temperature: 0.2, // Low temperature for medical accuracy
  });

  // Audit successful interaction
  await auditLog.create({
    doctorId,
    patientId,
    action: 'QUERY_PROCESSED',
    queryHash: hashQuery(query),
    timestamp: new Date(),
  });

  return {
    response: response.choices[0].message.content,
    disclaimer: 'AI-generated suggestion. Verify with medical literature.',
  };
}

HIPAA Compliance

PHI Protection

// Remove PHI from logs. These regexes are illustrative only; production
// de-identification should use a dedicated PHI detection service, since
// simple patterns miss names and over-match capitalized word pairs.
function sanitizePHI(text: string): string {
  return text
    .replace(/\b\d{3}-\d{2}-\d{4}\b/g, '[SSN]')        // SSNs
    .replace(/\b[A-Z][a-z]+ [A-Z][a-z]+\b/g, '[NAME]') // naive full-name match
    .replace(/\b\d{10}\b/g, '[PHONE]')                 // 10-digit phone numbers
    .replace(/\b[\w.-]+@[\w.-]+\.\w+\b/g, '[EMAIL]');  // email addresses
}

// Audit all access
await auditLog.create({
  userId: doctorId,
  action: 'PATIENT_DATA_ACCESS',
  patientId,
  query: sanitizePHI(query),
  ipAddress: req.ip,
  timestamp: new Date(),
});
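The hashQuery helper used in the audit calls is not shown in this guide. A minimal sketch using Node's built-in crypto module (the name and approach are assumptions, not the SDK's implementation): storing a digest instead of raw query text keeps PHI out of audit rows while still letting repeated queries be correlated.

```typescript
import { createHash } from 'crypto';

// Store a SHA-256 digest of the query instead of the raw text so audit
// rows never contain PHI, while duplicate queries remain correlatable.
function hashQuery(query: string): string {
  return createHash('sha256').update(query, 'utf8').digest('hex');
}
```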

Architecture

Security Layers

  1. Authentication - Multi-factor authentication for all medical staff
  2. Authorization - Role-based access control (RBAC): doctors only access assigned patients
  3. Threat Detection - KoreShield scans all queries for prompt injection and data exfiltration
  4. Data Minimization - AI receives only necessary patient data, never full records
  5. Audit Trail - Complete logging of all AI interactions for HIPAA compliance
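Layers 2 and 4 can be sketched together as a getPatientContext implementation like the one the Solution code calls. The `assignments` store, `PatientRecord` shape, and synchronous style are assumptions for illustration; a real deployment would back these with the hospital's EHR and identity provider.

```typescript
// Hypothetical record shape; PHI fields are deliberately excluded
// from the context returned to the model.
type PatientRecord = {
  id: string;
  name: string;                 // PHI - never sent to the model
  ssn: string;                  // PHI - never sent to the model
  activeMedications: string[];
  recentVisitSummary: string;
};

const patientRecords = new Map<string, PatientRecord>();
const assignments = new Map<string, Set<string>>(); // doctorId -> patientIds

function getPatientContext(patientId: string, doctorId: string): string {
  // Layer 2: RBAC - doctors only access assigned patients.
  if (!assignments.get(doctorId)?.has(patientId)) {
    throw new Error(`doctor ${doctorId} is not assigned to patient ${patientId}`);
  }
  const record = patientRecords.get(patientId);
  if (!record) throw new Error(`unknown patient ${patientId}`);
  // Layer 4: data minimization - expose only the fields the query
  // needs, never the full record.
  return [
    `Active medications: ${record.activeMedications.join(', ')}`,
    `Recent visit: ${record.recentVisitSummary}`,
  ].join('\n');
}
```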

Results

  • Zero PHI Breaches - 18 months of operation with no data leakage incidents
  • Blocked Attacks - 487 prompt injection attempts detected and blocked
  • 100% Audit Trail - Complete compliance with HIPAA audit requirements
  • Low Latency - <100ms latency for scans with 99.97% uptime

Deployment Checklist

  • Complete HIPAA risk assessment
  • Sign Business Associate Agreement (BAA) with KoreShield
  • Configure PHI masking in all logs
  • Set up role-based access control (RBAC)
  • Establish audit log retention policy (minimum 6 years)
  • Train staff on AI assistant usage and limitations
  • Enable high sensitivity scanning
  • Configure HIPAA compliance mode
  • Set up automated threat alerts
  • Implement data minimization policies
  • Configure encryption at rest and in transit
  • Enable activity monitoring and alerting
  • Monitor audit logs daily
  • Review blocked queries weekly
  • Conduct security assessments quarterly
  • Update policies based on new threats
  • Maintain incident response procedures
  • Generate compliance reports monthly
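Several checklist items map directly onto SDK configuration. A sketch reusing the constructor options shown in the Solution section; `alertWebhook` is a hypothetical option for automated threat alerts, not a documented SDK parameter.

```typescript
import { Koreshield } from 'koreshield-sdk';

const koreshield = new Koreshield({
  apiKey: process.env.KORESHIELD_API_KEY,
  sensitivity: 'high',          // checklist: enable high sensitivity scanning
  complianceMode: 'hipaa',      // checklist: configure HIPAA compliance mode
  // Hypothetical option - checklist: set up automated threat alerts
  alertWebhook: process.env.SECURITY_ALERT_WEBHOOK,
});
```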

Best Practices

Medical AI Safety Guidelines
  1. Never trust AI for diagnoses - Use as decision support only
  2. Always verify suggestions - Cross-reference with medical literature
  3. Maintain human oversight - Every AI interaction should be reviewed
  4. Log everything - Complete audit trails are essential for compliance
  5. Minimize data exposure - Only provide AI with necessary patient context
  6. Regular security reviews - Threat landscape evolves constantly
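Guidelines 1 and 3 can be enforced structurally rather than by convention. One sketch (the envelope type and function names are assumptions): wrap every AI suggestion so it cannot enter the chart until a physician has reviewed it.

```typescript
// Every AI suggestion carries its disclaimer and review state;
// unreviewed suggestions are blocked from the patient chart.
type AISuggestion = {
  content: string;
  disclaimer: string;
  reviewedBy: string | null; // doctorId once a human has signed off
};

function wrapSuggestion(content: string): AISuggestion {
  return {
    content,
    disclaimer: 'AI-generated suggestion. Verify with medical literature.',
    reviewedBy: null,
  };
}

function approveForChart(s: AISuggestion, doctorId: string): AISuggestion {
  return { ...s, reviewedBy: doctorId };
}

function canEnterChart(s: AISuggestion): boolean {
  return s.reviewedBy !== null; // the human-oversight gate
}
```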

Incident Response

async function handleMedicalSecurityIncident(
  scan: ScanResult,
  doctorId: string,
  patientId: string
) {
  // Immediate actions
  await Promise.all([
    // Block the query
    logBlockedQuery(scan, doctorId, patientId),
    
    // Alert security team
    sendSecurityAlert({
      severity: 'high',
      type: scan.threat_type,
      doctorId,
      patientId,
    }),
    
    // Notify compliance officer (serialize the scan result before
    // sanitizing, since sanitizePHI operates on strings)
    notifyComplianceOfficer({
      incident: 'AI_THREAT_DETECTED',
      details: sanitizePHI(JSON.stringify(scan)),
    }),
  ]);
  
  // Investigate if repeated attempts
  const recentAttempts = await countRecentThreats(doctorId, '1h');
  
  if (recentAttempts > 2) {
    // Temporarily revoke AI access
    await revokeAIAccess(doctorId, { duration: '24h' });
    
    // Require security review before restoration
    await createSecurityReviewTicket(doctorId);
  }
}
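The countRecentThreats helper above is left undefined. A minimal in-memory sliding-window sketch (synchronous for simplicity, unlike the async original; a real deployment would query the audit log store):

```typescript
// doctorId -> threat event timestamps in milliseconds
const threatEvents = new Map<string, number[]>();

function recordThreat(doctorId: string, at: number = Date.now()): void {
  const events = threatEvents.get(doctorId) ?? [];
  events.push(at);
  threatEvents.set(doctorId, events);
}

// Count threats attributed to a doctor within the window, e.g. '1h'.
function countRecentThreats(doctorId: string, window: string, now: number = Date.now()): number {
  if (window !== '1h') throw new Error(`unsupported window: ${window}`);
  const windowMs = 3_600_000;
  return (threatEvents.get(doctorId) ?? []).filter((t) => now - t <= windowMs).length;
}
```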

Related

  • HIPAA Compliance - Complete HIPAA guide
  • RAG Security - Secure retrieval systems
  • Financial Services - Similar compliance requirements
