Import multiple tasks at once from CSV data. This endpoint allows you to bulk create tasks by uploading structured CSV content.

Endpoint

POST /api/importcsv

Request Body

file
array
required
An array of CSV rows, where each row is an array of values. The first row (header) is automatically skipped.

CSV Format

The CSV data should have the following column structure:
file[n][0]
any
Column 0 - Not used (typically ID or row number)
file[n][1]
string
required
Column 1 - Task name/title
file[n][2]
string
required
Column 2 - Due date in ISO 8601 format (e.g., "2026-03-15T10:00:00Z")
file[n][3]
string
required
Column 3 - Task status (e.g., "pending", "in-progress", "completed")
file[n][4]
string
required
Column 4 - Additional information/notes
file[n][5]
string
required
Column 5 - Is finished ("true" or "false")
file[n][6]
string
required
Column 6 - Task column/category
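
Taken together, the column table above can be sketched as a TypeScript type for the request body. The type and variable names here are illustrative, not part of the API:

```typescript
// Illustrative shape of the request body; names are not part of the API.
// Indices match the column table above; column 0 is ignored by the server.
type CsvRow = [
  id: string,       // 0 - not used
  task: string,     // 1 - task name/title
  dueDate: string,  // 2 - due date in ISO 8601 format
  status: string,   // 3 - e.g. "pending", "in-progress", "completed"
  notes: string,    // 4 - additional information/notes
  finished: string, // 5 - "true" or "false"
  column: string    // 6 - task column/category
];

interface ImportCsvBody {
  file: CsvRow[]; // first row is the header and is skipped server-side
}

const body: ImportCsvBody = {
  file: [
    ['ID', 'Task', 'Due Date', 'Status', 'Notes', 'Finished', 'Column'],
    ['1', 'Wedding photoshoot', '2026-03-15T10:00:00Z', 'pending', 'Outdoor ceremony', 'false', '1'],
  ],
};
```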

Response

success
string
Returns "Successful!" when all tasks are imported successfully.
error
string
Returns "Invalid!" if the import fails.

CSV Example

Here’s an example CSV file:
ID,Task,Due Date,Status,Notes,Finished,Column
1,Wedding photoshoot - Johnson family,2026-03-15T10:00:00Z,pending,Outdoor ceremony at Central Park,false,1
2,Portrait session - Smith,2026-03-20T14:00:00Z,in-progress,Family of 4 studio session,false,2
3,Product photography - ABC Corp,2026-03-25T09:00:00Z,completed,50 product shots white background,true,3

Example Request

curl -X POST http://localhost:5173/api/importcsv \
  -H "Content-Type: application/json" \
  -d '{
    "file": [
      ["ID", "Task", "Due Date", "Status", "Notes", "Finished", "Column"],
      ["1", "Wedding photoshoot", "2026-03-15T10:00:00Z", "pending", "Outdoor ceremony", "false", "1"],
      ["2", "Portrait session", "2026-03-20T14:00:00Z", "in-progress", "Studio session", "false", "2"]
    ]
  }'

Response Example

"Successful!"

Error Response

"Invalid!"

Behavior Notes

  • The first row of the CSV (header row) is automatically skipped using file.file.shift(). Make sure your CSV includes a header row.
  • The is_finished field is converted from string to boolean: "true" becomes true; any other value becomes false.
  • Tasks are created sequentially, not in parallel, so large CSV files may take some time to import. The import stops at the first failure: subsequent tasks are not created, and tasks created before the failure are not rolled back.
  • After a successful import, the client should emit a database-change event via Socket.io to notify other clients. See Real-time Updates for details.
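
The header skip and boolean conversion described above can be sketched as small helpers. These names are illustrative, not taken from the actual source:

```typescript
// Illustrative helpers mirroring the behavior notes; not the actual server code.
function stripHeader(file: string[][]): string[][] {
  const rows = [...file];
  rows.shift(); // mirrors file.file.shift(): the header row is discarded
  return rows;
}

function toFinished(value: string): boolean {
  return value === 'true'; // any other value, including "TRUE", becomes false
}
```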

Data Mapping

The endpoint maps CSV columns to database fields as follows:
{
  dueAt: new Date(item[2]),              // Column 2
  task: item[1],                         // Column 1
  additional_information: item[4],       // Column 4
  taskColumn: item[6],                   // Column 6
  is_finished: item[5] === 'true',       // Column 5
  status: item[3]                        // Column 3
}

Preparing CSV Files

From Spreadsheet Software

When exporting from Excel, Google Sheets, or similar:
  1. Arrange columns in the correct order (ID, Task, Due Date, Status, Notes, Finished, Column)
  2. Format dates as ISO 8601 strings
  3. Use “true” or “false” for the Finished column
  4. Export as CSV
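
If your export tool produces locale-formatted dates, a small helper can normalize them to the ISO 8601 strings the endpoint expects. This is a generic sketch; adapt it to however your export represents dates:

```typescript
// Generic sketch: build an ISO 8601 UTC timestamp from date parts.
// Note: toISOString() includes milliseconds ("...T10:00:00.000Z"); the server
// parses the value with new Date(), which accepts both forms.
function toIso8601(year: number, month: number, day: number, hour = 0, minute = 0): string {
  return new Date(Date.UTC(year, month - 1, day, hour, minute)).toISOString();
}
```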

Programmatic CSV Generation

import { stringify } from 'csv-stringify/sync';

const tasks = [
  ['ID', 'Task', 'Due Date', 'Status', 'Notes', 'Finished', 'Column'],
  ['1', 'Wedding shoot', '2026-03-15T10:00:00Z', 'pending', 'Central Park', 'false', '1'],
  ['2', 'Portrait session', '2026-03-20T14:00:00Z', 'in-progress', 'Studio', 'false', '2']
];

const csvText = stringify(tasks);
// Now you can save this to a file or use it with importCsvFile()

Error Scenarios

Common reasons for import failure:
  • Invalid date format in column 2
  • Missing required columns
  • CSV not properly formatted as nested arrays
  • Database connection issues
  • Constraint violations (though unlikely with the current schema)
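
Several of these failures can be caught client-side before posting. A possible pre-flight check (the function name and error messages are illustrative, not part of the API):

```typescript
// Illustrative pre-flight validation mirroring the failure reasons above.
function validateRows(file: string[][]): string[] {
  const errors: string[] = [];
  if (file.length < 2) {
    errors.push('CSV must contain a header row and at least one data row');
    return errors;
  }
  file.slice(1).forEach((row, i) => {
    const line = i + 2; // 1-based line number, counting the header
    if (row.length < 7) {
      errors.push(`line ${line}: expected 7 columns, got ${row.length}`);
      return;
    }
    if (!row[1]) errors.push(`line ${line}: missing task name`);
    if (Number.isNaN(Date.parse(row[2]))) errors.push(`line ${line}: invalid date "${row[2]}"`);
  });
  return errors;
}
```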

Performance Considerations

Sequential processing: Tasks are created one at a time in a loop. For large imports (1000+ rows), consider:
  • Breaking the import into smaller batches
  • Using Prisma’s createMany() for better performance (requires code modification)
  • Showing progress feedback to users
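
One way to break an import into smaller batches, as suggested above (a client-side sketch; the helper names are not part of the API). Each request re-sends the header row, since the server always skips the first row:

```typescript
// Split data rows into fixed-size chunks for separate /api/importcsv requests.
function chunkRows<T>(rows: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < rows.length; i += size) {
    chunks.push(rows.slice(i, i + size));
  }
  return chunks;
}

// Sequentially post each chunk, prefixed with the header row.
async function importInBatches(file: string[][], size = 200): Promise<void> {
  const [header, ...data] = file;
  for (const chunk of chunkRows(data, size)) {
    await fetch('/api/importcsv', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ file: [header, ...chunk] }),
    });
  }
}
```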

Alternative: Batch Import

For better performance with large datasets, you could modify the endpoint to use createMany:
// More efficient approach (not currently implemented)
await prisma.tasks.createMany({
  data: query,
  skipDuplicates: true
});

Complete Import Workflow

// Assumes a bundled client app: `parse` comes from the csv-parse package and
// the Socket.io client is created with socket.io-client.
import { parse } from 'csv-parse/sync';
import { io } from 'socket.io-client';

const socket = io();

// 1. Let user select CSV file
const input = document.createElement('input');
input.type = 'file';
input.accept = '.csv';

input.onchange = async (e) => {
  const file = e.target.files[0];
  
  // 2. Read file contents
  const text = await file.text();
  
  // 3. Parse CSV
  const records = parse(text, { skip_empty_lines: true });
  
  // 4. Validate data
  if (records.length < 2) {
    alert('CSV must contain at least one data row');
    return;
  }
  
  // 5. Import to PhotoFlow
  const response = await fetch('/api/importcsv', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ file: records })
  });
  
  const result = await response.json();
  
  // 6. Handle result
  if (result === 'Successful!') {
    alert(`Imported ${records.length - 1} tasks successfully`);
    
    // 7. Notify other clients
    socket.emit('database-change', { type: 'csv-imported' });
  } else {
    alert('Import failed');
  }
};

input.click();

Source Reference

Implementation: src/routes/api/(exportimport)/importcsv/+server.ts:5
