
Complete Upload Workflow

This guide walks through the entire process of uploading files from the frontend to S3 and handling them on the backend.

Step 1: Frontend Upload

Initiate the upload from your frontend application:
<input type="file" id="file" @change="uploadFile">
<div v-if="uploading">
  <progress :value="uploadProgress" max="100"></progress>
  <span>{{ uploadProgress }}%</span>
</div>
import { ref } from 'vue';
import axios from 'axios';

const uploading = ref(false);
const uploadProgress = ref(0);

const uploadFile = (e) => {
    const file = e.target.files[0];
    if (!file) return;

    uploading.value = true;

    s3m(file, {
        progress: progress => {
            uploadProgress.value = progress;
        }
    }).then((response) => {
        return axios.post('/api/profile-photo', {
            uuid: response.uuid,
            key: response.key,
            bucket: response.bucket,
            name: file.name,
            content_type: file.type,
        });
    }).finally(() => {
        uploading.value = false;
    });
};
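The `progress` callback above reports a percentage (the template binds it to a `<progress>` element with `max="100"`). As an illustration of how such a percentage is typically derived from the browser's `ProgressEvent` byte counts — this is a hypothetical helper, not part of s3m's API:

```javascript
// Hypothetical helper (not part of s3m): derives an upload
// percentage from loaded/total byte counts, as reported by a
// browser ProgressEvent during an upload.
function toPercent(loaded, total) {
  if (!total) return 0; // guard against division by zero before size is known
  return Math.round((loaded / total) * 100);
}
```

For example, `toPercent(512, 1024)` yields `50`, suitable for binding directly to a `<progress>` element.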
All uploaded files are initially placed in the tmp/ directory of your S3 bucket. This directory should be configured to automatically purge files older than 24 hours.

Step 2: Backend Acknowledgment

Receive the upload notification and process the file:
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Storage;

public function storeProfilePhoto(Request $request)
{
    $validated = $request->validate([
        'uuid' => 'required|uuid',
        'key' => 'required|string',
        'name' => 'required|string',
        'content_type' => 'required|string',
    ]);

    // Move file from tmp to permanent location
    $permanentKey = str_replace('tmp/', 'profile-photos/', $validated['key']);
    
    Storage::copy($validated['key'], $permanentKey);

    // Update user record
    $request->user()->update([
        'profile_photo_path' => $permanentKey,
    ]);

    // Clean up tmp file
    Storage::delete($validated['key']);

    return response()->json([
        'message' => 'Profile photo updated',
        'url' => Storage::url($permanentKey),
    ]);
}

Step 3: Moving from Tmp Directory

Files uploaded to S3 are stored in tmp/ by default. Move them to permanent storage:

Simple Move

use Illuminate\Support\Facades\Storage;

Storage::copy(
    $request->input('key'),
    str_replace('tmp/', '', $request->input('key'))
);

Organized Storage

Organize files into specific directories:
// Original: tmp/9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d
// New: documents/2024/03/9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d

$uuid = $request->input('uuid');
$year = date('Y');
$month = date('m');

$newKey = "documents/{$year}/{$month}/{$uuid}";

Storage::copy($request->input('key'), $newKey);

Preserve Original Filename

$uuid = $request->input('uuid');
$originalName = $request->input('name');
$extension = pathinfo($originalName, PATHINFO_EXTENSION);

$newKey = "uploads/{$uuid}.{$extension}";

Storage::copy($request->input('key'), $newKey);
Always delete the temporary file after copying to avoid storage costs:
Storage::delete($request->input('key'));

Step 4: Storing File Metadata

Store information about the uploaded file in your database:
use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration
{
    public function up()
    {
        Schema::create('uploads', function (Blueprint $table) {
            $table->id();
            $table->foreignId('user_id')->constrained()->cascadeOnDelete();
            $table->uuid('uuid')->unique();
            $table->string('filename');
            $table->string('mime_type');
            $table->string('s3_key');
            $table->unsignedBigInteger('size')->nullable();
            $table->string('visibility')->default('private');
            $table->timestamps();
        });
    }

    public function down()
    {
        Schema::dropIfExists('uploads');
    }
};

Complete End-to-End Example

<template>
  <div class="upload-container">
    <input 
      type="file" 
      @change="handleFileUpload" 
      :disabled="uploading"
    />
    
    <div v-if="uploading" class="progress-bar">
      <progress :value="progress" max="100"></progress>
      <p>{{ progress }}% uploaded</p>
    </div>

    <div v-if="uploadedFile">
      <p>File uploaded: {{ uploadedFile.filename }}</p>
      <a :href="uploadedFile.url" target="_blank">View File</a>
    </div>
  </div>
</template>

<script setup>
import { ref } from 'vue';
import axios from 'axios';

const uploading = ref(false);
const progress = ref(0);
const uploadedFile = ref(null);

const handleFileUpload = async (event) => {
  const file = event.target.files[0];
  if (!file) return;

  uploading.value = true;
  progress.value = 0;
  uploadedFile.value = null;

  try {
    // Upload to S3
    const response = await s3m(file, {
      progress: (percent) => {
        progress.value = percent;
      },
      visibility: 'private',
    });

    // Acknowledge upload on backend
    const { data } = await axios.post('/api/files', {
      uuid: response.uuid,
      key: response.key,
      name: file.name,
      content_type: file.type,
      size: file.size,
    });

    uploadedFile.value = data;
  } catch (error) {
    console.error('Upload failed:', error);
    alert('Upload failed. Please try again.');
  } finally {
    uploading.value = false;
  }
};
</script>

Automatic Cleanup of Tmp Files

Configure S3 lifecycle rules to automatically delete old temporary files:
{
  "Rules": [
    {
      "Id": "DeleteTmpFilesAfter24Hours",
      "Status": "Enabled",
      "Filter": {
        "Prefix": "tmp/"
      },
      "Expiration": {
        "Days": 1
      },
      "NoncurrentVersionExpiration": {
        "NoncurrentDays": 1
      },
      "AbortIncompleteMultipartUpload": {
        "DaysAfterInitiation": 1
      }
    }
  ]
}
This configuration:
  • Deletes files in tmp/ after 1 day
  • Expires noncurrent versions of tmp/ objects after 1 day (relevant on versioned buckets)
  • Aborts incomplete multipart uploads after 1 day
  • Applies only to objects with the tmp/ prefix
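A lifecycle configuration like the one above, saved as a JSON file, can be applied with the AWS CLI (the bucket name and file path here are placeholders):

```shell
# Apply the lifecycle rules to your bucket; "my-bucket" and
# "lifecycle.json" are placeholders for your own values.
aws s3api put-bucket-lifecycle-configuration \
  --bucket my-bucket \
  --lifecycle-configuration file://lifecycle.json
```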

Best Practices

Always validate on the backend. Even though the frontend sends file information, always validate:
  • File existence in S3
  • User permissions
  • File size limits
  • Allowed MIME types
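The size and MIME-type checks can also run on the frontend before the upload starts, to fail fast and save bandwidth. A minimal sketch — the limit mirrors the 100MB check in the error-handling example below, and the allowed-types list is purely illustrative; the backend must still enforce its own rules, since client-side checks can be bypassed:

```javascript
// Illustrative client-side pre-check before calling s3m().
// The backend remains the authority on limits and allowed types.
const MAX_BYTES = 100 * 1024 * 1024; // 100MB, matching the backend example
const ALLOWED_TYPES = ['image/jpeg', 'image/png', 'application/pdf']; // example whitelist

function validateFile(file) {
  if (file.size > MAX_BYTES) {
    return { ok: false, error: 'File too large' };
  }
  if (!ALLOWED_TYPES.includes(file.type)) {
    return { ok: false, error: 'File type not allowed' };
  }
  return { ok: true };
}
```

Call this with the `File` from the change event and skip the upload (showing the error instead) when `ok` is false.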
Clean up temporary files. Always delete the tmp/ file after copying to permanent storage to avoid unnecessary storage costs.

Security Checklist

  • ✅ Verify user authorization before accepting uploads
  • ✅ Validate file types on both frontend and backend
  • ✅ Implement file size limits
  • ✅ Sanitize filenames before storage
  • ✅ Use private visibility by default
  • ✅ Implement rate limiting on upload endpoints
  • ✅ Configure S3 bucket policies appropriately
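Filename sanitization (fourth item above) can be sketched as follows. This is an illustrative helper, not part of any library used in this guide; the exact whitelist is an assumption you should adapt to your needs:

```javascript
// Illustrative sanitizer: drops any path components, replaces
// characters outside a conservative whitelist, collapses runs of
// dots (so ".." never survives), and caps the length.
function sanitizeFilename(name) {
  const base = name.split(/[\\/]/).pop();  // strip directory components
  const cleaned = base
    .replace(/[^\w.\- ]/g, '_')            // allow letters, digits, _, ., -, space
    .replace(/\.{2,}/g, '.')               // collapse ".." and longer runs
    .trim();
  return cleaned.slice(0, 255) || 'unnamed'; // cap length, never return empty
}
```

For example, `sanitizeFilename('../../etc/passwd')` returns `'passwd'`, defusing a path-traversal attempt.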

Error Handling

public function store(Request $request)
{
    // Validate outside the try block so a validation failure
    // returns a 422 response instead of the generic 500 below
    $validated = $request->validate([...]);

    try {
        // Check file exists
        if (!Storage::exists($validated['key'])) {
            return response()->json([
                'error' => 'File not found'
            ], 404);
        }

        // Check file size
        $fileSize = Storage::size($validated['key']);
        if ($fileSize > 100 * 1024 * 1024) { // 100MB
            Storage::delete($validated['key']);
            return response()->json([
                'error' => 'File too large'
            ], 413);
        }

        // Move and save
        $permanentKey = str_replace('tmp/', 'uploads/', $validated['key']);
        Storage::copy($validated['key'], $permanentKey);
        
        $upload = Upload::create([...]);
        
        Storage::delete($validated['key']);

        return response()->json($upload);

    } catch (\Exception $e) {
        // Clean up on error
        if (isset($validated['key'])) {
            Storage::delete($validated['key']);
        }
        
        logger()->error('Upload failed', [
            'error' => $e->getMessage(),
            'user' => auth()->id(),
        ]);

        return response()->json([
            'error' => 'Upload processing failed'
        ], 500);
    }
}
