
Quickstart

This guide will help you set up Page Assist and have your first AI conversation in just a few minutes.

Prerequisites

Before you begin, make sure you have:
  • Page Assist installed on your browser (Installation guide)
  • A local AI provider (we’ll use Ollama in this guide)

Setup guide

Step 1: Install Ollama

Ollama is a free, open-source tool that lets you run AI models locally on your computer.
  1. Visit ollama.com and download Ollama for your operating system
  2. Install and launch Ollama
  3. Open a terminal and pull a model:
    ollama pull llama3.2
    
Ollama runs on localhost:11434 by default. Page Assist will automatically detect it - no configuration needed!
Alternative providers:
  • LM Studio - Another popular local AI runtime
  • Chrome AI - Built-in Gemini Nano (Chrome only)
  • Any OpenAI-compatible API endpoint
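Before moving on, you can confirm from a terminal that Page Assist will be able to reach Ollama. A minimal check, assuming the default port (11434) and Ollama's /api/tags endpoint, which lists the models you have pulled:

```shell
# Probe Ollama's default endpoint; /api/tags lists locally pulled models.
# --max-time keeps the check fast when nothing is listening.
if curl -s --max-time 2 http://localhost:11434/api/tags > /dev/null; then
  status="running"
else
  status="not reachable"
fi
echo "Ollama is $status"
```

If the check reports "not reachable", start the Ollama app before opening Page Assist.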
Step 2: Open Page Assist

You can access Page Assist in two ways:

Sidebar mode (recommended for quick access):
  • Press Ctrl+Shift+Y (or Cmd+Shift+Y on Mac)
  • Or right-click on any webpage and select Page Assist from the context menu
Web UI mode (for full-screen experience):
  • Press Ctrl+Shift+L (or Cmd+Shift+L on Mac)
  • Or click the Page Assist icon in your browser toolbar
You can customize these keyboard shortcuts from your browser’s extension management page.
Step 3: Configure settings (optional)

If you’re using Ollama with default settings, Page Assist will automatically detect it. If you need to configure a different provider:
  1. Click the Settings icon in Page Assist
  2. Go to the OpenAI Compatible API tab
  3. Click Add Provider
  4. Select your provider type (Ollama, LM Studio, etc.)
  5. Enter the API URL:
    • Ollama: http://localhost:11434
    • LM Studio: http://localhost:1234/v1
  6. Click Save
Page Assist will automatically fetch available models from your provider.
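To confirm the URL you entered is correct, you can query the same endpoints Page Assist reads the model list from. A sketch assuming the default addresses above (adjust the ports if you changed them):

```shell
# Ollama lists models at /api/tags; OpenAI-compatible servers such as
# LM Studio list theirs at /v1/models. A failed request falls back to
# the "unreachable" marker instead of aborting.
ollama_models=$(curl -s --max-time 2 http://localhost:11434/api/tags || echo "unreachable")
lmstudio_models=$(curl -s --max-time 2 http://localhost:1234/v1/models || echo "unreachable")
echo "Ollama:    $ollama_models"
echo "LM Studio: $lmstudio_models"
```

If a provider shows "unreachable" here, Page Assist will not be able to fetch its models either.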
Step 4: Start your first chat

Now you’re ready to chat with your AI!
  1. In the Page Assist interface, you’ll see a model selector at the top
  2. Select your preferred model (e.g., llama3.2)
  3. Type a message in the input box at the bottom
  4. Press Enter or click the send button
Your AI will respond based on the model you’ve selected.

Try these example prompts:
  • “Explain quantum computing in simple terms”
  • “Write a Python function to calculate fibonacci numbers”
  • “What are the key differences between React and Vue?”
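The same prompts can also be sent to Ollama directly from a terminal, which is a handy way to check that the model itself works outside Page Assist. A sketch assuming the llama3.2 model pulled in step 1 and Ollama's /api/generate endpoint:

```shell
# "stream": false asks Ollama for a single JSON response instead of a
# token-by-token stream. The model name assumes the earlier pull step.
payload='{"model": "llama3.2", "prompt": "Explain quantum computing in simple terms", "stream": false}'
curl -s http://localhost:11434/api/generate -d "$payload" || echo "(is Ollama running?)"
```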

Keyboard shortcuts

Page Assist includes several keyboard shortcuts to boost your productivity:

Extension shortcuts

Shortcut       Action         Description
Ctrl+Shift+Y   Open sidebar   Opens the sidebar on any webpage
Ctrl+Shift+L   Open Web UI    Opens the Web UI in a new tab

Application shortcuts

Shortcut       Action             Description
Ctrl+Shift+O   New chat           Starts a new chat conversation
Ctrl+B         Toggle sidebar     Opens/closes the chat history sidebar
Shift+Esc      Focus input        Focuses the message input field
Ctrl+E         Toggle chat mode   Switches between normal chat and chat with the current page
Extension shortcuts can be customized from your browser’s extension management page. Application shortcuts work within the Page Assist interface.

Advanced features

Chat with webpage

One of Page Assist’s most powerful features is the ability to chat with the content of any webpage:
  1. Open the sidebar on any webpage (Ctrl+Shift+Y)
  2. Toggle the Chat with Website option (or press Ctrl+E)
  3. Ask questions about the page content
Example use cases:
  • “Summarize this article”
  • “What are the main points discussed on this page?”
  • “Explain this code in simpler terms”

Knowledge base

Upload your own documents and chat with them:
  1. Open Page Assist settings
  2. Configure an embedding model in RAG Settings (recommended: nomic-embed-text)
  3. Go to Manage Knowledge
  4. Click Add New Knowledge and upload your files
  5. In the chat input, click the knowledge base icon to select which documents to reference
Supported formats: PDF, DOCX, TXT, CSV, MD
Knowledge base processing happens entirely in your browser. Large documents may impact performance.
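The embedding model has to be available in your provider before it can be selected in RAG Settings. With Ollama, a minimal sketch (guarded in case the ollama CLI is not on your PATH):

```shell
# Pull the recommended embedding model so it appears in RAG Settings.
# The guard skips the pull with a note when the ollama CLI is missing.
if command -v ollama > /dev/null; then
  ollama pull nomic-embed-text
  embed_ready="yes"
else
  embed_ready="no (install Ollama first)"
fi
echo "Embedding model ready: $embed_ready"
```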

Troubleshooting

Can’t connect to Ollama?

If Page Assist can’t connect to Ollama:
  1. Make sure Ollama is running (check your system tray/menu bar)
  2. Verify Ollama is accessible by visiting http://localhost:11434 in your browser (it should respond with “Ollama is running”)
  3. Check that at least one model is installed: ollama list
  4. See the connection issues guide for more help
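Steps 1–3 above can be condensed into a single terminal check, assuming Ollama's default port:

```shell
# Connection diagnostic mirroring the steps above.
if curl -s --max-time 2 http://localhost:11434 > /dev/null; then
  api_status="reachable"
  ollama list   # step 3: at least one model should be listed here
else
  api_status="not reachable - is the Ollama app running?"
fi
echo "Ollama API: $api_status"
```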

Model not appearing?

  1. Ensure the model is loaded in your AI provider
  2. Refresh the model list by reopening Page Assist
  3. For LM Studio, make sure a model is actively loaded before opening Page Assist

Next steps

Explore features

Learn about custom prompts, internet search, and more

Configure providers

Set up multiple AI providers and models

Browser support

Check browser-specific features and limitations

Keyboard shortcuts

Master all available keyboard shortcuts
