System Requirements

  • macOS: Sonoma (v14) or newer
  • Processor: Apple M series (CPU and GPU support) or Intel x86 (CPU only)

Installation

Quick Install

Install Ollama from Terminal using Homebrew:
brew install ollama
Note: the curl -fsSL https://ollama.com/install.sh | sh install script targets Linux, not macOS.

Manual Download

Alternatively, download Ollama.dmg and install:
1. Mount the DMG: double-click the downloaded Ollama.dmg file.
2. Drag to Applications: drag the Ollama application to your system-wide Applications folder. Installing to the system Applications folder is recommended for automatic PATH configuration.
3. Launch Ollama: open Ollama from your Applications folder. On first launch, Ollama verifies the CLI is in your PATH and prompts for permission to create a symlink in /usr/local/bin if needed.

Using Ollama

Once installed, Ollama runs in the background and the ollama command is available in Terminal.

Run Your First Model

ollama run gemma3

Check Running Models

ollama ps

View Installed Models

ollama list

Configuration

File Locations

Ollama stores files in the following locations:
Location | Purpose
~/.ollama | Models and configuration
~/.ollama/logs | Log files
~/.ollama/logs/app.log | GUI application logs
~/.ollama/logs/server.log | Server logs
<install-location>/Ollama.app/Contents/Resources/ollama | CLI binary

Environment Variables

Set environment variables using launchctl:
launchctl setenv OLLAMA_HOST "0.0.0.0:11434"
Then restart the Ollama application.
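Note that launchctl setenv does not persist across reboots. A common workaround is a login LaunchAgent that re-applies the variable at each login; this is a sketch of that pattern, not an Ollama feature, and the label com.user.ollama-env and file path are our own choices:

```shell
# Sketch: persist OLLAMA_HOST across reboots with a login LaunchAgent.
# The label/filename com.user.ollama-env is arbitrary (our choice).
PLIST="$HOME/Library/LaunchAgents/com.user.ollama-env.plist"
mkdir -p "$(dirname "$PLIST")"
cat > "$PLIST" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Label</key><string>com.user.ollama-env</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>OLLAMA_HOST</string>
    <string>0.0.0.0:11434</string>
  </array>
  <key>RunAtLoad</key><true/>
</dict>
</plist>
EOF
```

Log out and back in (or load the file once with launchctl) for it to take effect.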

Common Environment Variables

Variable | Description | Default
OLLAMA_HOST | Server bind address | 127.0.0.1:11434
OLLAMA_MODELS | Model storage location | ~/.ollama/models
OLLAMA_DEBUG | Enable debug logging | 0
OLLAMA_NUM_PARALLEL | Max parallel requests | 1
OLLAMA_KEEP_ALIVE | Model keep-alive duration | 5m

Changing Model Storage Location

If your home directory doesn’t have enough space for large models:
launchctl setenv OLLAMA_MODELS "/Volumes/External/ollama-models"
Restart the Ollama application after changing the location.
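Models already on disk can be copied to the new location first, so nothing has to re-download. A minimal sketch, assuming an external volume mounted at the example path; sync_models is our own helper, not an Ollama command:

```shell
# Sketch: copy existing models to a new location before switching
# OLLAMA_MODELS. The destination path is only an example.
sync_models() {
  old=$1; new=$2
  [ -d "$old" ] || { echo "nothing to copy from $old"; return 0; }
  mkdir -p "$new"
  cp -R "$old/." "$new/"
  echo "copied models to $new"
}
sync_models "$HOME/.ollama/models" "/Volumes/External/ollama-models"
```

After verifying the copy, set OLLAMA_MODELS to the new path, restart Ollama, and remove the old directory.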

Custom Installation Location

To install Ollama somewhere other than the system Applications folder:
1. Place the Application: move Ollama.app to your desired location.
2. Add CLI to PATH: ensure the CLI binary or a symlink is in your PATH:
   sudo ln -s /your/custom/path/Ollama.app/Contents/Resources/ollama /usr/local/bin/ollama
3. Launch and Decline Move Prompt: on first start, decline the “Move to Applications?” request.

Apple Silicon (M-series) Benefits

Ollama is optimized for Apple Silicon:
  • Native ARM64: built natively for M1, M2, M3, and M4 chips
  • Unified Memory: the GPU shares system memory, so large models need no separate VRAM copy
  • Metal Support: hardware-accelerated inference via Metal Performance Shaders

Updates

Ollama automatically checks for updates. When an update is available:
  1. Click the Ollama icon in the menu bar
  2. Select “Restart to update”
Or download the latest version manually from ollama.com/download.

Disable Auto-Start on Login

If you prefer Ollama not to start automatically:
  1. Open System Settings
  2. Search for “Login Items”
  3. Find Ollama under “Allow in the Background”
  4. Click the toggle to disable
This setting persists across updates.

Logs and Debugging

View Application Logs

cat ~/.ollama/logs/app.log

View Server Logs

cat ~/.ollama/logs/server.log

Enable Debug Mode

launchctl setenv OLLAMA_DEBUG "1"
Restart the Ollama application, then check logs for detailed output.

Real-time Log Monitoring

tail -f ~/.ollama/logs/server.log
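When the log is long, it helps to surface only warnings and errors. A small sketch combining standard tools; show_errors is our own wrapper, not an Ollama command:

```shell
# Sketch: print the 20 most recent warnings/errors from a log file.
show_errors() {
  log=$1
  [ -f "$log" ] || { echo "no log at $log yet"; return 0; }
  grep -iE "error|warn" "$log" | tail -n 20
}
show_errors "$HOME/.ollama/logs/server.log"
```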

Troubleshooting

Ollama Command Not Found

If ollama is not recognized in Terminal:
  1. Check if the symlink exists:
    ls -l /usr/local/bin/ollama
    
  2. If missing, create it manually:
    sudo ln -s /Applications/Ollama.app/Contents/Resources/ollama /usr/local/bin/ollama
    
  3. Verify /usr/local/bin is in your PATH:
    echo $PATH
    
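The three checks above can be rolled into one snippet. This is a sketch: diagnose_ollama is a hypothetical helper name, and /Applications/Ollama.app is assumed to be the default install path:

```shell
# Sketch: diagnose a missing `ollama` command in one pass.
diagnose_ollama() {
  bin=/usr/local/bin/ollama   # default symlink location
  if command -v ollama >/dev/null 2>&1; then
    echo "ok: ollama resolves to $(command -v ollama)"
  elif [ -e "$bin" ]; then
    echo "symlink exists; check that /usr/local/bin is in PATH: $PATH"
  else
    echo "symlink missing; run: sudo ln -s /Applications/Ollama.app/Contents/Resources/ollama $bin"
  fi
}
diagnose_ollama
```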

Ollama Not Starting

  1. Check if the process is running:
    ps aux | grep ollama
    
  2. Try launching from Terminal for error output:
    /Applications/Ollama.app/Contents/MacOS/Ollama
    
  3. Check system logs:
    log show --predicate 'process == "Ollama"' --last 5m
    

Insufficient Storage Space

Models can require tens to hundreds of GB:
  1. Check available space:
    df -h ~
    
  2. Change model storage location:
    launchctl setenv OLLAMA_MODELS "/Volumes/External/models"
    
  3. Restart Ollama

Performance Issues

  • Close other applications: Free up RAM for model inference
  • Check Activity Monitor: Look for high CPU/memory usage
  • Try smaller models: Start with models like gemma2:2b or llama3.2:1b
  • Update macOS: Ensure you’re running the latest version for best Metal performance

Uninstallation

To completely remove Ollama from your Mac:
sudo rm -rf /Applications/Ollama.app
sudo rm /usr/local/bin/ollama
rm -rf ~/Library/Application\ Support/Ollama
rm -rf ~/Library/Saved\ Application\ State/com.electron.ollama.savedState
rm -rf ~/Library/Caches/com.electron.ollama
rm -rf ~/Library/Caches/ollama
rm -rf ~/Library/WebKit/com.electron.ollama
rm -rf ~/.ollama
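The removal commands above can be wrapped with a dry-run guard so you can preview what would be deleted before committing. uninstall_ollama and its flags are our own sketch, not a built-in command (it covers the per-user paths; the /Applications and /usr/local/bin entries still need sudo as shown above):

```shell
# Sketch: preview (default) or perform (--force) removal of Ollama's
# per-user data directories.
uninstall_ollama() {
  mode=${1:---dry-run}
  for p in "$HOME/.ollama" \
           "$HOME/Library/Application Support/Ollama" \
           "$HOME/Library/Caches/ollama" \
           "$HOME/Library/Caches/com.electron.ollama"; do
    if [ "$mode" = "--force" ]; then
      rm -rf "$p"
    else
      echo "would remove: $p"
    fi
  done
}
uninstall_ollama   # dry run: prints paths without deleting anything
```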

API Access

Ollama’s REST API is available at http://localhost:11434:
curl http://localhost:11434/api/generate -d '{
  "model": "gemma3",
  "prompt": "Why is the sky blue?",
  "stream": false
}'
See the API documentation for complete reference.
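A chat-style request works the same way through the /api/chat endpoint. In this sketch the JSON payload is validated locally with python3 before sending; it assumes the Ollama app (or ollama serve) is running and the gemma3 model has been pulled:

```shell
# Sketch: send a chat request to the local Ollama server.
PAYLOAD='{"model":"gemma3","messages":[{"role":"user","content":"Why is the sky blue?"}],"stream":false}'
# Sanity-check the JSON before sending it.
echo "$PAYLOAD" | python3 -m json.tool >/dev/null && echo "payload ok"
# Fails gracefully if the server is not up.
curl -s http://localhost:11434/api/chat -d "$PAYLOAD" || echo "server not reachable; is Ollama running?"
```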

Next Steps

Quickstart Guide

Get started with your first model

Model Library

Browse available models

API Reference

Integrate Ollama into your apps

CLI Reference

Master the command-line interface
