## Endpoints

- Single Model Prediction
- Batch Predictions

## Authentication
Include your Spice API key in the request headers.
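The original header snippet did not survive extraction; assuming an `X-API-Key` header (the header name and route are assumptions, not confirmed by this page), a request might look like:

```http
GET /v1/models/my_model/predict HTTP/1.1
X-API-Key: <your-api-key>
```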
## Single Model Prediction

### Request
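The request line itself was lost in extraction; based on the `{model_name}` placeholder described below, the route plausibly has this shape (the path is an assumption):

```
GET /v1/models/{model_name}/predict
```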
Replace `{model_name}` with the name of your configured model.
### Response Format
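The original response body is not preserved; a successful response assembled from the fields documented below might look like this (all values illustrative):

```json
{
  "status": "Success",
  "model_name": "my_model",
  "model_version": "1.0",
  "prediction": [0.45, 0.5, 0.55],
  "duration_ms": 42
}
```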
### Response Fields
| Field | Type | Description |
| --- | --- | --- |
| `status` | string | Prediction status: `Success`, `BadRequest`, or `InternalError` |
| `model_name` | string | Name of the model used |
| `model_version` | string | Version of the model |
| `prediction` | array | Prediction results as an array of floats |
| `duration_ms` | integer | Time taken to complete the prediction, in milliseconds |
| `error_message` | string | Error description (present only on failure) |
## Batch Predictions
### Request Body
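The request-body example did not survive; a plausible shape, inferred from the single-model endpoint and the batch response fields below (the field names are assumptions):

```json
{
  "predictions": [
    { "model_name": "model_a" },
    { "model_name": "model_b" }
  ]
}
```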
### Response Format
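The original example body is lost; combining the batch fields below with the single-prediction fields above, a batch response plausibly looks like (values illustrative):

```json
{
  "duration_ms": 81,
  "predictions": [
    {
      "status": "Success",
      "model_name": "model_a",
      "model_version": "1.0",
      "prediction": [0.45, 0.5],
      "duration_ms": 42
    },
    {
      "status": "Success",
      "model_name": "model_b",
      "model_version": "1.0",
      "prediction": [0.48, 0.52],
      "duration_ms": 39
    }
  ]
}
```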
### Response Fields
| Field | Type | Description |
| --- | --- | --- |
| `duration_ms` | integer | Total time for all predictions, in milliseconds |
| `predictions` | array | Array of individual prediction results |
## Examples
### Single Model Prediction
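The original cURL, Python, and Node.js snippets did not survive extraction. A Python sketch, assuming the `/v1/models/{model_name}/predict` route, an `X-API-Key` header, and a local host/port (all assumptions):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8090"  # hypothetical host and port


def predict(model_name: str, api_key: str) -> dict:
    """Call the single-model prediction endpoint and return the parsed JSON body."""
    # The route and the X-API-Key header name are assumptions.
    req = urllib.request.Request(
        f"{BASE_URL}/v1/models/{model_name}/predict",
        headers={"X-API-Key": api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def extract_prediction(body: dict) -> list:
    """Return the prediction array, raising if the status is not Success."""
    if body.get("status") != "Success":
        raise RuntimeError(body.get("error_message", "prediction failed"))
    return body["prediction"]
```

`extract_prediction` mirrors the documented response fields: on failure, `prediction` is absent and `error_message` carries the reason.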
### Batch Predictions
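As above, the original cURL, Python, and Node.js snippets are lost. A Python sketch of the batch call; the `/v1/predictions` route, request-body shape, and header name are all assumptions:

```python
import json
import urllib.request

BASE_URL = "http://localhost:8090"  # hypothetical host and port


def build_batch_payload(model_names: list) -> dict:
    """Build the batch request body (this shape is an assumption)."""
    return {"predictions": [{"model_name": name} for name in model_names]}


def predict_batch(model_names: list, api_key: str) -> dict:
    """POST a batch prediction request and return the parsed JSON body."""
    req = urllib.request.Request(
        f"{BASE_URL}/v1/predictions",
        data=json.dumps(build_batch_payload(model_names)).encode(),
        headers={"X-API-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```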
## Use Cases
### A/B Testing
Compare predictions from different model versions.
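The original snippet is lost; one way to sketch an A/B comparison is to request both versions in a single batch call and diff the prediction arrays (model names and the response shape here are illustrative):

```python
def compare_predictions(pred_a: list, pred_b: list) -> list:
    """Element-wise difference between two models' prediction arrays."""
    return [a - b for a, b in zip(pred_a, pred_b)]


# Hypothetical batch response containing two versions of the same model.
batch_body = {
    "predictions": [
        {"model_name": "model_v1", "status": "Success", "prediction": [0.9, 0.4]},
        {"model_name": "model_v2", "status": "Success", "prediction": [0.8, 0.5]},
    ]
}
by_name = {p["model_name"]: p["prediction"] for p in batch_body["predictions"]}
deltas = compare_predictions(by_name["model_v1"], by_name["model_v2"])
```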
### Ensemble Predictions

Combine predictions from multiple models.
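The original example is lost; a minimal ensemble sketch that averages equally sized prediction arrays returned by several models:

```python
def ensemble_mean(prediction_arrays: list) -> list:
    """Element-wise mean across equally sized prediction arrays."""
    return [sum(vals) / len(vals) for vals in zip(*prediction_arrays)]
```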
## Prediction Status

### Success
Prediction completed successfully.
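The original body is not preserved; built from the documented response fields, a success body might look like (values illustrative):

```json
{
  "status": "Success",
  "model_name": "my_model",
  "model_version": "1.0",
  "prediction": [0.45, 0.5, 0.55],
  "duration_ms": 42
}
```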
### Bad Request (400)

Invalid request or model not found.
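The original body is lost; a sketch consistent with the documented fields (the error message text is illustrative):

```json
{
  "status": "BadRequest",
  "model_name": "missing_model",
  "error_message": "model not found",
  "duration_ms": 3
}
```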
### Internal Error (500)

Server error during prediction.
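The original body is lost; a sketch consistent with the documented fields (the error message text is illustrative):

```json
{
  "status": "InternalError",
  "model_name": "my_model",
  "error_message": "prediction runtime failure",
  "duration_ms": 17
}
```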
## Model Configuration

Models must be configured in your Spicepod before they can be used for inference.
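The configuration example did not survive extraction; a sketch of a Spicepod model entry (the `from` source format and field layout are assumptions — check the Spicepod reference for your version):

```yaml
# Hypothetical Spicepod model entry; adjust name, source, and datasets.
models:
  - name: my_model
    from: file://models/my_model.onnx
    datasets:
      - my_dataset
```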
## Performance Considerations

- Batch predictions: Use the batch endpoint for multiple models to reduce network overhead.
- Model loading: Models are loaded on startup; the first predictions may be slower.
- Concurrency: Batch predictions run concurrently for better performance.
- Data format: Predictions are expected as `Float32Array` results (column `y`).
## Error Handling
Always check the `status` field in responses.
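The original snippet is not preserved; a minimal Python sketch of the status check, using the response fields documented above:

```python
def check_status(body: dict) -> dict:
    """Raise with the server-supplied message unless the prediction succeeded."""
    if body.get("status") != "Success":
        raise RuntimeError(
            f"{body.get('status')}: {body.get('error_message', 'unknown error')}"
        )
    return body
```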