## Overview

This example shows how to integrate Google's Gemini AI with OpenSandbox using the `@google/gemini-cli` npm package. Execute Gemini AI queries in a secure, isolated sandbox environment.
## Prerequisites
- OpenSandbox server running locally or remotely
- Docker with the code-interpreter image
- Google Gemini API key
- Python with the `uv` package manager
## Setup
### 1. Pull the Code Interpreter Image
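Assuming the default image tag listed in the environment-variable table below, the pull looks like:

```shell
# Fetch the sandbox image that ships with Node.js
docker pull opensandbox/code-interpreter:v1.0.1
```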
The code-interpreter image includes Node.js for running the Gemini CLI.

### 2. Start OpenSandbox Server
Initialize and start the server.

## Implementation
### Installation
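If the SDK is published on PyPI under the name `opensandbox` (an assumption here; check the project's README for the actual package name), `uv` can add it to your project:

```shell
# Add the OpenSandbox Python SDK as a project dependency
# (package name assumed; verify against the project's docs)
uv add opensandbox
```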
Install the OpenSandbox Python SDK.

### Code Example
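A minimal sketch of the flow. The `opensandbox` import path and the `Sandbox.create` / `run` / `kill` names are assumptions standing in for the real SDK surface, not its confirmed API; only the command string built for the container comes from the `@google/gemini-cli` package itself:

```python
import asyncio
import os
import shlex


def build_gemini_command(prompt: str, model: str = "gemini-2.5-flash") -> str:
    """Shell command run inside the sandbox: install the CLI, then query it."""
    return (
        "npm install -g @google/gemini-cli && "
        f"gemini -m {shlex.quote(model)} -p {shlex.quote(prompt)}"
    )


async def main() -> None:
    # Hypothetical SDK surface -- adjust these names to the actual
    # OpenSandbox client API.
    from opensandbox import Sandbox

    # Securely inject Gemini credentials into the container environment
    envs = {"GEMINI_API_KEY": os.environ["GEMINI_API_KEY"]}
    if os.environ.get("GEMINI_BASE_URL"):
        envs["GEMINI_BASE_URL"] = os.environ["GEMINI_BASE_URL"]

    sandbox = await Sandbox.create(
        domain=os.environ.get("SANDBOX_DOMAIN", "localhost:8080"),
        api_key=os.environ.get("SANDBOX_API_KEY"),
        image=os.environ.get("SANDBOX_IMAGE", "opensandbox/code-interpreter:v1.0.1"),
        envs=envs,
    )
    try:
        cmd = build_gemini_command(
            "Explain Python asyncio in one paragraph",
            os.environ.get("GEMINI_MODEL", "gemini-2.5-flash"),
        )
        result = await sandbox.run(cmd)  # assumed: returns captured stdout/stderr
        print(result.stdout)
        if result.stderr:
            print(result.stderr)
    finally:
        await sandbox.kill()  # always tear the container down

# To run: asyncio.run(main())
```

The `build_gemini_command` helper is plain string assembly and works as-is; everything touching the sandbox itself should be checked against the SDK you actually install.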
The complete implementation creates a sandbox, installs the Gemini CLI, runs a prompt, and prints the captured output.

## Environment Variables
Configure the integration using these environment variables:

| Variable | Required | Default | Description |
|---|---|---|---|
| `SANDBOX_DOMAIN` | No | `localhost:8080` | Sandbox service address |
| `SANDBOX_API_KEY` | No | - | API key for authentication (optional for local) |
| `SANDBOX_IMAGE` | No | `opensandbox/code-interpreter:v1.0.1` | Docker image to use |
| `GEMINI_API_KEY` | Yes | - | Your Google Gemini API key |
| `GEMINI_BASE_URL` | No | - | Custom API endpoint (e.g., for proxies) |
| `GEMINI_MODEL` | No | `gemini-2.5-flash` | Model to use |
## Running the Example
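Assuming the code example is saved as `main.py` (a hypothetical filename), a typical local run looks like:

```shell
# Only GEMINI_API_KEY is required; the rest fall back to their defaults
export GEMINI_API_KEY="your-gemini-api-key"
export GEMINI_MODEL="gemini-2.5-flash"  # optional; this is the default
uv run main.py
```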
Set your environment variables and run the example script.

## How It Works
- Sandbox Creation: Spins up an isolated container with Node.js
- Environment Injection: Securely passes Gemini API credentials
- CLI Installation: Installs the Gemini CLI via npm
- Query Execution: Runs Gemini commands and captures responses
- Cleanup: Terminates the sandbox after execution
## Key Features
- Isolated Execution: Gemini runs in a secure container
- Flexible Configuration: Support for custom endpoints and models
- Real-time Logging: Access to stdout, stderr, and error streams
- Async Architecture: Built with Python asyncio for performance
## Use Cases
- AI-powered code assistance in isolated environments
- Safe testing of AI-generated code
- Automated content generation and analysis
- Building AI workflows with Google’s latest models
## Model Options
You can use different Gemini models by setting the `GEMINI_MODEL` environment variable:
- `gemini-2.5-flash` (default) - Fast, efficient responses
- `gemini-2.5-pro` - More capable, higher-quality outputs
- Other models as available from Google AI
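For example, switching to the pro model for a single run (again assuming a hypothetical `main.py` entry point) is just a matter of overriding the variable:

```shell
# One-off run against the more capable model
GEMINI_MODEL=gemini-2.5-pro uv run main.py
```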