Export your job application data as JSON or CSV. All endpoints require authentication.

Export Jobs

curl -X GET "https://api.pipeline.local/api/export?format=json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  --output pipeline-jobs.json
Export all jobs for the authenticated user in JSON or CSV format.

Authentication

Required. Users can only export their own jobs.

Query Parameters

format (string, default: "json") — Export format. Values:
  • json — Structured JSON with metadata
  • csv — Comma-separated values

Response

The response is a downloadable file (not a JSON API response). Headers:
  • Content-Type: application/json or text/csv; charset=utf-8
  • Content-Disposition: attachment; filename="pipeline-jobs-YYYY-MM-DD.{json|csv}"
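The filename is supplied by the server, so a client can recover it from the Content-Disposition header instead of hard-coding a name. A minimal sketch in Python, assuming the header format shown above:

```python
import re

def filename_from_disposition(header: str, fallback: str = "pipeline-jobs.json") -> str:
    """Extract the suggested filename from a Content-Disposition header."""
    match = re.search(r'filename="([^"]+)"', header)
    return match.group(1) if match else fallback

print(filename_from_disposition(
    'attachment; filename="pipeline-jobs-2026-03-04.json"'
))  # pipeline-jobs-2026-03-04.json
```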

JSON Export Format

{
  "exported_at": "2026-03-04T12:00:00Z",
  "total": 42,
  "jobs": [
    {
      "id": "123e4567-e89b-12d3-a456-426614174000",
      "company_name": "Acme Corp",
      "job_title": "Senior Backend Engineer",
      "status": "interview",
      "source": "linkedin",
      "location": "San Francisco, CA",
      "is_remote": true,
      "job_url": "https://acme.com/careers/senior-backend-engineer",
      "job_description": "Build scalable APIs...",
      "salary_range_min": 180000,
      "salary_range_max": 250000,
      "ai_match_score": 87,
      "ai_reasoning": "Strong match: 5+ years Go experience, distributed systems expertise",
      "notes": "Met recruiter at tech conference",
      "tags": ["high-priority", "backend"],
      "applied_at": "2026-03-01T10:30:00Z",
      "interview_at": "2026-03-10T14:00:00Z",
      "offer_at": null,
      "rejected_at": null,
      "scraped_at": null,
      "created_at": "2026-02-28T15:00:00Z",
      "updated_at": "2026-03-04T09:00:00Z"
    }
  ]
}
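A quick way to sanity-check a downloaded JSON export is to confirm that `total` matches the length of the `jobs` array. A minimal sketch (the helper name is illustrative, not part of any SDK):

```python
import json

def check_export(raw: str) -> dict:
    """Parse a JSON export and verify the total field matches the job count."""
    data = json.loads(raw)
    assert data["total"] == len(data["jobs"]), "total does not match job count"
    return data

sample = '{"exported_at": "2026-03-04T12:00:00Z", "total": 1, "jobs": [{"id": "123"}]}'
export = check_export(sample)
print(export["total"])  # 1
```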

CSV Export Format

Columns:
  • id
  • company_name
  • job_title
  • status
  • source
  • location
  • is_remote
  • job_url
  • salary_range_min
  • salary_range_max
  • ai_match_score
  • notes
  • tags (semicolon-separated)
  • applied_at
  • interview_at
  • offer_at
  • rejected_at
  • created_at
  • updated_at
Example:
id,company_name,job_title,status,source,location,is_remote,job_url,salary_range_min,salary_range_max,ai_match_score,notes,tags,applied_at,interview_at,offer_at,rejected_at,created_at,updated_at
123e4567-e89b-12d3-a456-426614174000,Acme Corp,Senior Backend Engineer,interview,linkedin,"San Francisco, CA",true,https://acme.com/careers/123,180000,250000,87,Met recruiter at tech conference,high-priority; backend,2026-03-01T10:30:00Z,2026-03-10T14:00:00Z,,,2026-02-28T15:00:00Z,2026-03-04T09:00:00Z
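Reading the CSV back is straightforward with any standard CSV parser, with one extra step: splitting the semicolon-separated tags column. A sketch using Python's csv module (the sample uses a reduced column set for brevity):

```python
import csv
import io

CSV_SAMPLE = (
    "id,company_name,tags\n"
    '123,"Acme, Inc.",high-priority; backend\n'
)

def read_export(text: str) -> list[dict]:
    """Parse a CSV export; split the semicolon-separated tags column into a list."""
    rows = list(csv.DictReader(io.StringIO(text)))
    for row in rows:
        row["tags"] = [t.strip() for t in row["tags"].split(";") if t.strip()]
    return rows

rows = read_export(CSV_SAMPLE)
print(rows[0]["company_name"], rows[0]["tags"])  # Acme, Inc. ['high-priority', 'backend']
```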

Excluded Fields

For security and privacy, the following internal fields are excluded from exports:
  • user_id — Internal reference
  • job_description — Large text; excluded from CSV exports only (included in JSON)
  • ai_parsed_data — Internal AI metadata
  • last_synced_at — Internal sync timestamp
  • deleted_at — Soft delete metadata

CSV Special Characters

CSV export properly escapes:
  • Commas — Fields wrapped in quotes if they contain commas
  • Quotes — Escaped as ""
  • Newlines — Preserved within quoted fields
  • Tags — Array values joined with "; " (semicolon + space)
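These are the standard CSV quoting rules (RFC 4180), so any conforming parser or writer reproduces them. For illustration, Python's csv module applies the same escaping by default:

```python
import csv
import io

buf = io.StringIO()
writer = csv.writer(buf)
# Commas and quotes trigger quoting; embedded quotes are doubled.
writer.writerow(['San Francisco, CA', 'said "hi"', 'line1\nline2'])
print(buf.getvalue())
```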

Export Limits

Exports are capped at 10,000 jobs to prevent memory exhaustion. This limit is per-user and applies to both JSON and CSV formats. For users with >10k jobs, a streaming export solution is planned for a future release.
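Because the cap silently truncates larger datasets, clients may want to detect when an export likely hit the limit. A simple heuristic sketch (EXPORT_CAP mirrors the server-side value above; this check is a client-side assumption, not an API flag):

```python
EXPORT_CAP = 10_000  # server-side cap described above

def maybe_truncated(export: dict) -> bool:
    """An export whose total equals the cap was likely truncated server-side."""
    return export["total"] >= EXPORT_CAP

print(maybe_truncated({"total": 10_000, "jobs": []}))  # True
print(maybe_truncated({"total": 42, "jobs": []}))      # False
```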

Performance

  • Typical Response Time: 500ms - 2s (depends on job count)
  • Memory Usage: ~1 MB per 100 jobs (approximate)
  • Timeout: 30 seconds (for very large exports)

Use Cases

JSON Export:
  • Backup/restore
  • Data migration
  • Integration with other tools
  • Analytics in external BI tools
CSV Export:
  • Import into spreadsheet apps (Excel, Google Sheets)
  • Data analysis
  • Reporting
  • Sharing with career counselors

Errors

Error responses contain an error object with code and message fields.

Common Errors

  • 400 Validation Error — Invalid format parameter
  • 401 Unauthorized — Missing or invalid auth token
  • 500 Internal Error — Database error or export generation failed

Example Error Response

{
  "error": {
    "code": "VALIDATION_ERROR",
    "message": "format must be \"json\" or \"csv\""
  }
}
HTTP Status: 400 Bad Request
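Clients can turn the error envelope above into a readable message. A small Python helper (the function name is hypothetical, not part of any SDK):

```python
import json

def parse_error(status: int, body: str) -> str:
    """Build a readable message from the error envelope shown above."""
    try:
        err = json.loads(body)["error"]
        return f'{status} {err["code"]}: {err["message"]}'
    except (ValueError, KeyError):
        return f'{status}: unexpected error body'

body = '{"error": {"code": "VALIDATION_ERROR", "message": "format must be \\"json\\" or \\"csv\\""}}'
print(parse_error(400, body))  # 400 VALIDATION_ERROR: format must be "json" or "csv"
```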

Implementation Details

File Naming Convention

Exports use ISO 8601 date format in filename:
pipeline-jobs-YYYY-MM-DD.{json|csv}
Example: pipeline-jobs-2026-03-04.json

Metadata Included (JSON Only)

JSON exports include:
  • exported_at — ISO 8601 timestamp of export
  • total — Count of exported jobs
  • jobs — Array of job objects

SQL Query

Export fetches:
SELECT
  id, company_name, job_title, status, source,
  location, is_remote, job_url, job_description,
  salary_range_min, salary_range_max,
  ai_match_score, ai_reasoning,
  notes, tags,
  applied_at, interview_at, offer_at, rejected_at,
  scraped_at, created_at, updated_at
FROM jobs
WHERE user_id = $1
  AND deleted_at IS NULL
ORDER BY created_at DESC
LIMIT 10000;

Future Enhancements (Planned)

Phase 3 (Post-MVP):
  • Filtered Exports — Export by date range, status, or source
  • XLSX Format — Excel format with formatting
  • PDF Reports — Formatted reports with charts
  • Email Delivery — Send exports to user’s email
  • Scheduled Exports — Weekly automated backups
  • Streaming — Support for >10k jobs via streaming API

Status: ✅ Live

This endpoint is fully implemented. See app/api/export/route.ts:11.

Tested Scenarios

  • ✅ JSON export with 100+ jobs
  • ✅ CSV export with special characters
  • ✅ Empty export (0 jobs)
  • ✅ Large export (1000+ jobs)
  • ✅ Invalid format parameter
  • ✅ Unauthorized access

Known Limitations

  1. 10k job cap — Streaming solution needed for larger exports
  2. No filtering — Always exports all jobs (planned for future)
  3. Synchronous — Long exports may timeout (streaming will fix)
  4. No compression — Large exports not gzipped (planned for future)

Usage Examples

Download with curl

# JSON export
curl -X GET "https://api.pipeline.local/api/export?format=json" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  --output ~/Downloads/my-jobs.json

# CSV export
curl -X GET "https://api.pipeline.local/api/export?format=csv" \
  -H "Authorization: Bearer YOUR_TOKEN" \
  --output ~/Downloads/my-jobs.csv

JavaScript Fetch

// Trigger download in browser
async function exportJobs(format = 'json') {
  const response = await fetch(
    `https://api.pipeline.local/api/export?format=${format}`,
    {
      headers: {
        'Authorization': `Bearer ${authToken}`,
      },
    }
  );
  
  if (!response.ok) {
    const error = await response.json();
    throw new Error(error.error.message);
  }
  
  // Trigger browser download
  const blob = await response.blob();
  const url = window.URL.createObjectURL(blob);
  const a = document.createElement('a');
  a.href = url;
  a.download = `pipeline-jobs-${new Date().toISOString().split('T')[0]}.${format}`;
  a.click();
  window.URL.revokeObjectURL(url);
}

// Usage
await exportJobs('json');
await exportJobs('csv');

Python

from datetime import datetime

import requests

def export_jobs(auth_token: str, format: str = 'json') -> str:
    """Export jobs to a local file and return its name."""
    response = requests.get(
        'https://api.pipeline.local/api/export',
        params={'format': format},
        headers={'Authorization': f'Bearer {auth_token}'},
    )
    response.raise_for_status()

    filename = f'pipeline-jobs-{datetime.now().strftime("%Y-%m-%d")}.{format}'
    with open(filename, 'wb') as f:
        f.write(response.content)

    return filename

# Usage
filename = export_jobs(auth_token, format='csv')
print(f'Exported to {filename}')


Support

For issues or feature requests related to export functionality:
  1. Check BACKEND_STATUS.md
  2. Open an issue on GitHub
  3. Contact [email protected]
