The s3m() Function

The s3m() function is globally available after adding the @s3m Blade directive to your layout. It handles multipart uploads to S3 directly from the browser.

Basic Usage

<input type="file" id="file" ref="file" @change="uploadFile">

const uploadFile = (e) => {
    const file = e.target.files[0];

    s3m(file, {
        progress: progress => {
            console.log(`Upload progress: ${progress}%`);
        }
    }).then((response) => {
        console.log('Upload complete!', response);
        // Send metadata to backend
        axios.post('/api/files', {
            uuid: response.uuid,
            key: response.key,
            name: response.name,
        });
    });
};

Configuration Options

The s3m() function accepts a file and an options object:
file (File, required): The file object to upload, taken from a file input element.
options (object): Configuration options for the upload.

Available Options

progress (function)
Callback function that receives the upload progress percentage (0-100).
progress: (percent) => {
    console.log(`${percent}% complete`);
}

chunk_size (number, default: 10485760)
Size of each chunk in bytes. The default is 10 MB (10 * 1024 * 1024).
chunk_size: 5 * 1024 * 1024 // 5MB chunks

max_concurrent_uploads (number, default: 5)
Maximum number of chunks to upload simultaneously.
max_concurrent_uploads: 3 // Upload 3 chunks at a time

chunk_retries (number, default: 3)
Number of times to retry a failed chunk upload.
chunk_retries: 5 // Retry failed chunks up to 5 times

visibility (string, default: 'private')
S3 ACL visibility for the uploaded file. Options: private, public-read, public-read-write, authenticated-read.
visibility: 'public-read' // Make the file publicly accessible

auto_complete (boolean, default: true)
Whether to automatically complete the multipart upload. Set to false if you want to complete it manually on the backend.
auto_complete: false // Manual completion required

data (object)
Additional data to send with the upload request (e.g., folder path, custom metadata).
data: {
    folder: 'documents',
    user_id: 123
}

baseURL (string)
Custom base URL for API requests. Useful when the API lives on a different domain or environment.
baseURL: 'https://api.example.com'

headers (object)
Custom headers to include with API requests.
headers: {
    'X-Custom-Header': 'value'
}

httpClient (object)
Custom HTTP client (defaults to axios).
httpClient: customAxiosInstance
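To reason about how chunk_size interacts with file size, note that a multipart upload needs ceil(fileSize / chunk_size) parts. A small illustrative helper (not part of s3m() itself):

```javascript
// Illustrative helper: how many chunks a file would be split into
// for a given chunk_size. Not part of the library, just the arithmetic.
const chunkCount = (fileSize, chunkSize = 10 * 1024 * 1024) =>
  Math.max(1, Math.ceil(fileSize / chunkSize));

// A 95 MB file with the default 10 MB chunk_size needs 10 parts:
console.log(chunkCount(95 * 1024 * 1024)); // → 10

// The same file with 5 MB chunks needs 19 parts:
console.log(chunkCount(95 * 1024 * 1024, 5 * 1024 * 1024)); // → 19
```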

Response Object

The promise resolves with a response object containing the following properties:
uuid (string): Unique identifier for the uploaded file.
key (string): S3 key where the file is stored (e.g., tmp/9b1deb4d-3b7d-4bad-9bdd-2b0d7b3dcb6d).
name (string): Original filename.
extension (string): File extension extracted from the filename.
url (string): S3 URL of the uploaded file (only when auto_complete: true).
upload_id (string): Multipart upload ID (only when auto_complete: false).
parts (array): Array of uploaded parts with ETags (only when auto_complete: false).
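Because the response shape depends on auto_complete, a handler can branch on which fields are present. A minimal sketch following the field names above (the describeUpload helper is illustrative, not part of the library):

```javascript
// Sketch: distinguish the two response shapes.
// With auto_complete: true the response carries a final `url`;
// with auto_complete: false it carries `upload_id` and `parts`
// so your backend can complete the multipart upload itself.
const describeUpload = (response) => {
  if (response.upload_id) {
    return {
      mode: 'manual',
      key: response.key,
      uploadId: response.upload_id,
      partCount: response.parts.length,
    };
  }
  return { mode: 'auto', key: response.key, url: response.url };
};
```

In the 'manual' case you would typically POST the key, upload ID, and parts to your backend so it can finish the upload.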

Complete Example

<template>
  <div>
    <input type="file" @change="handleUpload" ref="fileInput" />
    <div v-if="uploading">
      <progress :value="uploadProgress" max="100"></progress>
      <p>{{ uploadProgress }}% uploaded</p>
    </div>
  </div>
</template>

<script setup>
import { ref } from 'vue';
import axios from 'axios';

const uploading = ref(false);
const uploadProgress = ref(0);

const handleUpload = async (event) => {
  const file = event.target.files[0];
  if (!file) return;

  uploading.value = true;
  uploadProgress.value = 0;

  try {
    const response = await s3m(file, {
      progress: (percent) => {
        uploadProgress.value = percent;
      },
      chunk_size: 5 * 1024 * 1024, // 5MB chunks
      visibility: 'private',
    });

    // Send metadata to backend
    await axios.post('/api/files', {
      uuid: response.uuid,
      key: response.key,
      name: response.name,
      extension: response.extension,
    });

    alert('Upload successful!');
  } catch (error) {
    console.error('Upload failed:', error);
    alert('Upload failed. Please try again.');
  } finally {
    uploading.value = false;
  }
};
</script>

Advanced Configuration

Custom Chunk Size for Large Files

For very large files, you may want to increase the chunk size:
s3m(file, {
  chunk_size: 50 * 1024 * 1024, // 50MB chunks for files > 1GB
  max_concurrent_uploads: 10,
})
Larger chunks reduce the number of requests but may be slower on unstable connections. Smaller chunks provide better retry granularity but increase the number of requests.
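The max_concurrent_uploads tradeoff can be pictured with a simple concurrency pool. This is not how s3m() is implemented internally, just a sketch of limiting how many async tasks (chunk uploads) run at once:

```javascript
// Illustrative concurrency pool: run async tasks with at most
// `limit` in flight at a time, as max_concurrent_uploads does for chunks.
async function runLimited(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  const worker = async () => {
    while (next < tasks.length) {
      const i = next++; // claim the next task index (single-threaded, so safe)
      results[i] = await tasks[i]();
    }
  };
  // Start up to `limit` workers; each pulls tasks until none remain.
  await Promise.all(
    Array.from({ length: Math.min(limit, tasks.length) }, worker)
  );
  return results;
}
```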

Custom API Endpoint

If your Laravel app is on a different domain:
s3m(file, {
  baseURL: 'https://api.myapp.com',
  headers: {
    'Authorization': `Bearer ${token}`,
  },
})

Additional Metadata

Pass custom data to your backend during upload initialization:
s3m(file, {
  data: {
    folder: 'invoices',
    category: 'financial',
    year: 2024,
  },
})
Custom data is sent during the multipart upload creation and can be accessed in your backend via the CreateMultipartUploadRequest.

Error Handling

s3m(file, {
  chunk_retries: 5,
  progress: (percent) => {
    console.log(`${percent}%`);
  }
})
.then((response) => {
  console.log('Success:', response);
})
.catch((error) => {
  console.error('Upload failed:', error);
  // Handle error (show user message, retry, etc.)
});
Chunk uploads are automatically retried up to chunk_retries times (default: 3) before failing.
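Chunk-level retries are handled for you; if you also want to retry the whole upload after a final failure, a small wrapper works. A sketch (the attempt count and delay are arbitrary choices, not library defaults):

```javascript
// Sketch: retry an async operation (e.g. the whole s3m() call)
// a fixed number of times before giving up. The library already
// retries individual chunks; this retries the overall upload.
async function withRetries(operation, attempts = 3, delayMs = 1000) {
  let lastError;
  for (let attempt = 1; attempt <= attempts; attempt++) {
    try {
      return await operation();
    } catch (error) {
      lastError = error;
      if (attempt < attempts) {
        // Wait before the next attempt.
        await new Promise((resolve) => setTimeout(resolve, delayMs));
      }
    }
  }
  throw lastError;
}

// Usage: withRetries(() => s3m(file, { chunk_retries: 5 }))
```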
