System Requirements
- macOS: Sonoma (v14) or newer
- Processor: Apple M series (CPU and GPU support) or Intel x86 (CPU only)
Installation
Quick Install
Install Ollama using the install script.

Manual Download

Alternatively, download Ollama.dmg and install it:

Drag to Applications
Drag the Ollama application to your system-wide Applications folder. Installing to the system Applications folder is recommended for automatic PATH configuration.

Using Ollama
Once installed, Ollama runs in the background and the `ollama` command is available in Terminal.
Run Your First Model
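For example (the model name is illustrative; any model from the library works):

```shell
# Download the model if needed, then start an interactive session
ollama run llama3.2
```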
Check Running Models
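For example:

```shell
# Show models currently loaded in memory
ollama ps
```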
View Installed Models
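For example:

```shell
# Show all models downloaded to local storage
ollama list
```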
Configuration
File Locations
Ollama stores files in the following locations:

| Location | Purpose |
|---|---|
| `~/.ollama` | Models and configuration |
| `~/.ollama/logs` | Log files |
| `~/.ollama/logs/app.log` | GUI application logs |
| `~/.ollama/logs/server.log` | Server logs |
| `<install-location>/Ollama.app/Contents/Resources/ollama` | CLI binary |
Environment Variables
Set environment variables using `launchctl`:
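A sketch of the pattern (the values are illustrative; restart the Ollama app afterwards so it picks up the change):

```shell
# Make the server listen on all interfaces instead of localhost only
launchctl setenv OLLAMA_HOST "0.0.0.0:11434"

# Store models on a larger external volume (path is illustrative)
launchctl setenv OLLAMA_MODELS "/Volumes/External/ollama-models"
```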
Common Environment Variables
| Variable | Description | Default |
|---|---|---|
| `OLLAMA_HOST` | Server bind address | `127.0.0.1:11434` |
| `OLLAMA_MODELS` | Model storage location | `~/.ollama/models` |
| `OLLAMA_DEBUG` | Enable debug logging | `0` |
| `OLLAMA_NUM_PARALLEL` | Max parallel requests | `1` |
| `OLLAMA_KEEP_ALIVE` | Model keep-alive duration | `5m` |
Changing Model Storage Location
If your home directory doesn’t have enough space for large models, set `OLLAMA_MODELS` to a directory on a larger volume.

Custom Installation Location

To install Ollama somewhere other than the system Applications folder:
Apple Silicon (M-series) Benefits
Ollama is optimized for Apple Silicon:

- Native ARM64: Built specifically for M1, M2, M3, and M4 chips
- Unified Memory: Efficient GPU acceleration using shared memory architecture
- Metal Support: Hardware-accelerated inference via Metal Performance Shaders
- Neural Engine: Automatic optimization for Apple’s Neural Engine
Updates
Ollama automatically checks for updates. When an update is available:

- Click the Ollama icon in the menu bar
- Select “Restart to update”
Disable Auto-Start on Login
If you prefer Ollama not to start automatically:

- Open System Settings
- Search for “Login Items”
- Find Ollama under “Allow in the Background”
- Click the toggle to disable
Logs and Debugging
View Application Logs
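Using the path from the File Locations table:

```shell
# Print the GUI application log
cat ~/.ollama/logs/app.log
```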
View Server Logs
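Using the path from the File Locations table:

```shell
# Print the server log
cat ~/.ollama/logs/server.log
```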
Enable Debug Mode
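One approach, assuming the `launchctl` method described under Environment Variables (restart Ollama afterwards), or a one-off foreground run:

```shell
# Persistent: enable debug logging for the app (restart Ollama to apply)
launchctl setenv OLLAMA_DEBUG 1

# One-off: run the server in the foreground with debug output
OLLAMA_DEBUG=1 ollama serve
```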
Real-time Log Monitoring
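For example, with `tail -f`:

```shell
# Follow the server log as new entries are written
tail -f ~/.ollama/logs/server.log
```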
Troubleshooting
Ollama Command Not Found
If `ollama` is not recognized in Terminal:

- Check if the symlink exists:
- If missing, create it manually:
- Verify `/usr/local/bin` is in your PATH:
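The three checks might look like this (the binary path assumes the default Applications install, per the File Locations table):

```shell
# 1. Check whether the symlink exists
ls -l /usr/local/bin/ollama

# 2. Create it if missing (adjust the path for a custom install location)
sudo ln -s /Applications/Ollama.app/Contents/Resources/ollama /usr/local/bin/ollama

# 3. Verify /usr/local/bin is in your PATH
echo "$PATH" | tr ':' '\n' | grep -x /usr/local/bin
```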
Ollama Not Starting
- Check if the process is running:
- Try launching from Terminal for error output:
- Check system logs:
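A sketch of those checks:

```shell
# Is an Ollama process running?
pgrep -fl ollama

# Run the server in the foreground to see errors directly
ollama serve

# Search recent unified-log entries for Ollama messages
log show --last 5m | grep -i ollama
```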
Insufficient Storage Space
Models can require tens to hundreds of GB:

- Check available space:
- Change model storage location:
- Restart Ollama
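For example (the external volume path is illustrative):

```shell
# 1. Check free space on the volume holding your home directory
df -h ~

# 2. Move model storage to a larger volume, then restart Ollama
launchctl setenv OLLAMA_MODELS "/Volumes/External/ollama-models"
```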
Performance Issues
- Close other applications: Free up RAM for model inference
- Check Activity Monitor: Look for high CPU/memory usage
- Try smaller models: Start with models like `gemma2:2b` or `llama3.2:1b`
- Update macOS: Ensure you’re running the latest version for best Metal performance
Uninstallation
To completely remove Ollama from your Mac:

API Access

Ollama’s REST API is available at http://localhost:11434:
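For example, with `curl` (the model name is illustrative):

```shell
# List locally installed models
curl http://localhost:11434/api/tags

# Request a completion
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.2",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
```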
Next Steps
- Quickstart Guide: Get started with your first model
- Model Library: Browse available models
- API Reference: Integrate Ollama into your apps
- CLI Reference: Master the command-line interface