Prerequisites
- Ollama installed on your system
- At least one model downloaded in Ollama
- Sufficient RAM for your chosen model (typically 8GB+ recommended)
Installation
If you haven’t installed Ollama yet:
Download Ollama
Visit ollama.ai and download the installer for your operating system.
Download a Model
Open your terminal and download a model. Popular models include:
- `llama3.2` - Meta’s latest Llama model
- `mistral` - Mistral 7B
- `phi3` - Microsoft’s Phi-3
- `qwen2.5` - Alibaba’s Qwen model
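A model is downloaded with `ollama pull`; for example, using a model name from the list above:

```shell
# Download Meta's Llama 3.2 model from the Ollama registry
ollama pull llama3.2

# Confirm it was installed
ollama list
```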
Default Configuration
Page Assist automatically detects Ollama running on the default address, `http://localhost:11434`.
Custom Ollama URL
If you’re running Ollama on a different port or a remote server, update the Ollama URL in Page Assist settings to point to that address.
Multiple Ollama Instances
You can connect to multiple Ollama instances simultaneously.
Model Selection
Page Assist automatically detects all models available in your Ollama instance.
Viewing Available Models
Models appear in the model selector dropdown. Page Assist filters out embedding-only models (like `nomic-embed-text`) from the chat model list.
Setting a Default Model
To set a default model:
- Open Settings
- Find “Default Model” configuration
- Select your preferred model from the dropdown
- Optionally disable “Ask for model selection every time” to always use the default
Model Nicknames
You can assign custom names to models for easier identification:
- Navigate to model management in Settings
- Select a model
- Enter a custom nickname
- The nickname will appear in the model selector
Disabling Models
To hide specific models from the selector:
- Go to Settings > Models
- Find the model you want to hide
- Toggle it off
- The model won’t appear in model selection but remains in Ollama
Embedding Models
Page Assist automatically identifies embedding models for RAG (Retrieval-Augmented Generation) features:
- Knowledge base search
- Document similarity
- RAG chat features
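These features require an embedding model to be installed locally; for example, pulling the model mentioned earlier on this page:

```shell
# Download an embedding model for RAG features
ollama pull nomic-embed-text
```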
Connection Troubleshooting
Ollama Not Detected
If Page Assist can’t connect to Ollama:
Verify Ollama is Running
Check if Ollama is running, for example by opening the Ollama URL in your browser or running `ollama list` in a terminal. If this fails, start Ollama using your system’s application launcher.
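One quick check is to query the server’s root endpoint, which answers with a short status message when the server is up (default address assumed):

```shell
# A running Ollama server replies with "Ollama is running"
curl http://localhost:11434
```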
Check the URL
Ensure the URL in Page Assist settings matches your Ollama address. The default is `http://localhost:11434`.
Note: Page Assist automatically converts localhost to 127.0.0.1.
Models Not Appearing
If models don’t show up:
- Verify models are downloaded: run `ollama list`
- Refresh the Page Assist interface
- Check if Ollama is enabled in Settings
- Ensure models aren’t manually disabled in model management
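You can also ask the Ollama API directly which models it sees via its `/api/tags` endpoint; if a model is missing there, Page Assist cannot show it either:

```shell
# List models known to the Ollama server, as JSON
curl http://localhost:11434/api/tags
```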
Performance Issues
For better performance:
- Use quantized models (e.g., `llama3.2:q4_0`)
- Close other resource-intensive applications
- Consider using smaller models (7B or 3B parameter models)
- Ensure adequate RAM for your model size
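To see how much memory a loaded model is actually using, `ollama ps` lists the models currently loaded and their footprint:

```shell
# Show models currently loaded in memory and their size
ollama ps
```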
Advanced Configuration
Custom Model Parameters
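As a sketch, a Modelfile derives a custom model from an existing one and overrides its parameters (the base model, parameter values, and name below are illustrative):

```modelfile
# Modelfile: derive a custom model from llama3.2
FROM llama3.2
PARAMETER temperature 0.2
PARAMETER num_ctx 4096
SYSTEM You are a concise assistant.
```

Build it with `ollama create my-llama -f Modelfile`; the new model then appears in Page Assist’s model selector like any other.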
You can customize model behavior through Ollama’s Modelfile.
Remote Ollama Setup
To expose Ollama for remote access, set the `OLLAMA_HOST` environment variable (e.g., `OLLAMA_HOST=0.0.0.0`) before starting the server so it listens on all network interfaces, then point Page Assist at that machine’s address and port.
Best Practices
- Keep Models Updated: Regularly check for model updates using `ollama pull <model>`
- Monitor Resources: Watch RAM usage when running large models
- Use Appropriate Sizes: Match model size to your hardware capabilities
- Leverage Multiple Models: Keep different models for different tasks (coding, chat, etc.)
- Clean Up Unused Models: Remove models you don’t use to save disk space: `ollama rm <model>`
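For example, a quick cleanup session might look like this (the model name is an example):

```shell
# Review installed models and their disk sizes
ollama list

# Remove one you no longer use
ollama rm phi3
```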
Next Steps
- Explore Knowledge Base features with embedding models
- Learn about custom prompts
- Set up additional providers