The NomicEmbeddings class provides integration with Nomic’s embedding models, supporting both text and image embeddings.

Installation

pip install langchain-nomic

Setup

Set your Nomic API key:
export NOMIC_API_KEY="your-api-key"
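The key can also be set from Python before the client is constructed (a minimal sketch; "your-api-key" is a placeholder):

```python
import os

# NomicEmbeddings reads NOMIC_API_KEY from the environment if no key is passed
os.environ.setdefault("NOMIC_API_KEY", "your-api-key")
```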

Usage

Basic usage

from langchain_nomic import NomicEmbeddings

embed = NomicEmbeddings(
    model="nomic-embed-text-v1.5"
)

Embed single text

text = "The meaning of life is 42"
vector = embed.embed_query(text)
print(len(vector))

Embed multiple texts

texts = ["hello world", "goodbye world"]
vectors = embed.embed_documents(texts)
print(len(vectors))
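The returned vectors can be compared directly, for example to rank documents against a query by cosine similarity. A minimal sketch using NumPy, with small placeholder vectors standing in for real embeddings:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two embedding vectors
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Placeholders for embed_query(...) / embed_documents(...) output
query_vec = [1.0, 0.0, 0.0]
doc_vecs = [[0.9, 0.1, 0.0], [0.0, 1.0, 0.0]]

# Index of the document most similar to the query
scores = [cosine_similarity(query_vec, d) for d in doc_vecs]
best = max(range(len(scores)), key=scores.__getitem__)
```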

Configuration

Supported models

Text models:
  • nomic-embed-text-v1.5 - Latest text embedding model (768 dimensions)
  • nomic-embed-text-v1 - Previous generation text model
Vision models:
  • nomic-embed-vision-v1.5 - Latest vision embedding model

Inference modes

Nomic supports three inference modes:

Remote inference (default)

embed = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    inference_mode="remote"
)

Local inference

Run embeddings locally using Embed4All:
embed = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    inference_mode="local",
    device="cpu"  # or "gpu"
)

Dynamic inference

Automatically choose between local and remote:
embed = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    inference_mode="dynamic"
)

Matryoshka dimensions

Reduce embedding dimensions for faster search:
embed = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    dimensionality=256  # Reduce from 768 to 256
)
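Matryoshka-trained models concentrate the most important information in the leading components, so a full-size vector can also be truncated client-side and re-normalized. A sketch of that idea in NumPy (the 768/256 sizes mirror the example above; the random vector is a stand-in for a real embedding):

```python
import numpy as np

def truncate_embedding(vec, dim):
    # Keep the first `dim` components and re-normalize to unit length
    v = np.asarray(vec, dtype=float)[:dim]
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

full = np.random.default_rng(0).normal(size=768)  # stand-in for a 768-dim embedding
small = truncate_embedding(full, 256)
```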

Image embeddings

from langchain_nomic import NomicEmbeddings

embed = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    vision_model="nomic-embed-vision-v1.5"
)

# Embed images by URI
image_uris = [
    "https://example.com/image1.jpg",
    "file:///path/to/image2.png"
]
vectors = embed.embed_image(image_uris)
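Nomic's v1.5 vision model is designed to share an embedding space with the v1.5 text model, so image vectors can be ranked against a text query. A minimal sketch with placeholder unit vectors standing in for embed_query / embed_image output:

```python
import numpy as np

def unit(v):
    # Normalize to unit length so the dot product equals cosine similarity
    v = np.asarray(v, dtype=float)
    return v / np.linalg.norm(v)

# Placeholders for real text / image embeddings
text_vec = unit([0.2, 0.9, 0.1])
image_vecs = [unit([0.1, 0.95, 0.05]), unit([0.9, 0.1, 0.2])]

# Rank images by similarity to the text query
ranked = sorted(range(len(image_vecs)),
                key=lambda i: float(text_vec @ image_vecs[i]),
                reverse=True)
```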

Device selection (local mode)

On macOS, omit the device parameter.
For local inference on Linux or Windows, a device can be selected explicitly:
embed = NomicEmbeddings(
    model="nomic-embed-text-v1.5",
    inference_mode="local",
    device="nvidia"  # or "cpu", "gpu", "amd"
)

Parameters

  • model (string, required) - Name of the Nomic text embedding model to use.
  • nomic_api_key (string) - Nomic API key. Inferred from the NOMIC_API_KEY environment variable if not provided.
  • dimensionality (integer) - Embedding dimension for Matryoshka-capable models. Defaults to full size (768).
  • inference_mode (string, default "remote") - How to generate embeddings. Options: remote, local, or dynamic.
  • device (string) - Device for local embeddings. Options: cpu, gpu, nvidia, amd, or a specific device name.
  • vision_model (string) - Vision model to use for image embeddings.
