Overview

Memori uses entity IDs and process IDs to segment memories across users and workflows. This ensures each user has isolated, personalized memories.

Attribution Basics

from memori import Memori
from openai import OpenAI

client = OpenAI()
mem = Memori().llm.register(client)

# Set attribution for a specific user and workflow
mem.attribution(
    entity_id="user-123",      # Unique user identifier
    process_id="support-chat"  # Workflow/application identifier
)

# All subsequent LLM calls will use this attribution
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I prefer email notifications"}],
)
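Conceptually, attribution acts as a composite key: every memory is scoped by the (entity_id, process_id) pair. A minimal in-memory model of that isolation (the MemoryStore class below is illustrative only, not Memori's actual storage):

```python
# Illustrative model of attribution-based isolation (not Memori's internals).
class MemoryStore:
    def __init__(self):
        # Memories are partitioned by the (entity_id, process_id) pair.
        self._memories = {}

    def add(self, entity_id, process_id, fact):
        self._memories.setdefault((entity_id, process_id), []).append(fact)

    def recall(self, entity_id, process_id):
        return self._memories.get((entity_id, process_id), [])

store = MemoryStore()
store.add("user-123", "support-chat", "prefers email notifications")
store.add("user-456", "support-chat", "prefers SMS notifications")

# Each user only sees their own memories.
print(store.recall("user-123", "support-chat"))  # ['prefers email notifications']
```

Because the process ID is part of the key, the same user can have entirely separate memories in, say, a support workflow and a sales workflow.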

Multi-User Web Application

Here’s a complete example of a web application handling multiple users.

from flask import Flask, request, jsonify
from openai import OpenAI
from memori import Memori

app = Flask(__name__)
client = OpenAI()

@app.route('/chat', methods=['POST'])
def chat():
    data = request.get_json()
    user_id = data.get('user_id')
    message = data.get('message')
    session_id = data.get('session_id')

    if not user_id or not message:
        return jsonify({'error': 'user_id and message are required'}), 400

    # Create a new Memori instance for this request
    mem = Memori().llm.register(client)
    mem.attribution(entity_id=user_id, process_id="web-chat")

    # Optionally resume an existing session
    if session_id:
        mem.set_session(session_id)

    # Make LLM call with user-specific memory
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": message}],
    )

    return jsonify({
        'response': response.choices[0].message.content,
        'session_id': str(mem.config.session_id)
    })

if __name__ == '__main__':
    app.run()

Session Management

Memori uses session IDs to group related conversations. Each session represents a distinct conversation thread.

from memori import Memori
from openai import OpenAI

client = OpenAI()
mem = Memori().llm.register(client)
mem.attribution(entity_id="user-123")

# Start a new conversation (automatic session ID)
response1 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I'm planning a trip to Tokyo"}],
)

# Save session ID for later
tokyo_session_id = mem.config.session_id
print(f"Session ID: {tokyo_session_id}")

# Start a completely new conversation
mem.new_session()
response2 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I'm planning a trip to Paris"}],
)

# Resume the Tokyo conversation
mem.set_session(tokyo_session_id)
response3 = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What was I planning?"}],
)
# AI will remember: "You were planning a trip to Tokyo"
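The resume behavior above relies on each session ID grouping its own conversation history. A simplified model of that threading (illustrative only, not Memori's internals):

```python
import uuid

# Illustrative model: each session ID groups a distinct conversation thread.
threads = {}

def new_session():
    session_id = str(uuid.uuid4())
    threads[session_id] = []
    return session_id

def record(session_id, message):
    threads[session_id].append(message)

tokyo = new_session()
record(tokyo, "I'm planning a trip to Tokyo")

paris = new_session()
record(paris, "I'm planning a trip to Paris")

# Resuming the Tokyo session appends to the Tokyo thread only.
record(tokyo, "What was I planning?")
print(threads[tokyo])
```

Switching sessions changes which thread subsequent messages belong to, which is why resuming `tokyo_session_id` brings back the Tokyo context and not the Paris one.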

Multi-Tenant SaaS Application

For SaaS applications, use both entity and process IDs to segment memories by customer and tenant.

from memori import Memori
from openai import OpenAI

def handle_customer_request(tenant_id: str, customer_id: str, message: str):
    client = OpenAI()
    mem = Memori().llm.register(client)

    # Combine tenant and customer for unique entity ID
    entity_id = f"{tenant_id}:{customer_id}"
    process_id = f"tenant-{tenant_id}"

    mem.attribution(entity_id=entity_id, process_id=process_id)

    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": message}],
    )

    return response.choices[0].message.content

# Example usage
result = handle_customer_request(
    tenant_id="acme-corp",
    customer_id="alice",
    message="What are my account preferences?"
)
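One caveat with the `:` separator above: if a raw tenant or customer ID could itself contain `:`, two different pairs might produce the same composed entity ID. A hypothetical guard (make_entity_id is not a Memori function):

```python
def make_entity_id(tenant_id: str, customer_id: str, sep: str = ":") -> str:
    # Reject IDs containing the separator so composed IDs stay unambiguous.
    for part in (tenant_id, customer_id):
        if sep in part:
            raise ValueError(f"ID {part!r} must not contain {sep!r}")
    return f"{tenant_id}{sep}{customer_id}"

print(make_entity_id("acme-corp", "alice"))  # acme-corp:alice
```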

Agent-Based Architecture

Use different process IDs for different agents or workflows.

from memori import Memori
from openai import OpenAI

class SupportAgent:
    def __init__(self, user_id: str):
        self.client = OpenAI()
        self.mem = Memori().llm.register(self.client)
        self.mem.attribution(
            entity_id=user_id,
            process_id="support-agent"
        )

    def respond(self, message: str) -> str:
        response = self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": message}],
        )
        return response.choices[0].message.content

class SalesAgent:
    def __init__(self, user_id: str):
        self.client = OpenAI()
        self.mem = Memori().llm.register(self.client)
        self.mem.attribution(
            entity_id=user_id,
            process_id="sales-agent"
        )

    def respond(self, message: str) -> str:
        response = self.client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{"role": "user", "content": message}],
        )
        return response.choices[0].message.content

# Each agent maintains separate memories for the same user
support = SupportAgent("user-123")
sales = SalesAgent("user-123")

Key Concepts

Entity ID

The unique identifier for an end-user. This is typically:
  • User UUID from your database
  • Email address (hashed for privacy)
  • Customer ID
Memories are isolated per entity_id.

Process ID

The identifier for a workflow, application, or agent. This allows you to:
  • Segment memories by use case (support vs sales)
  • Maintain separate contexts for different agents
  • Track memory usage by application

Session ID

Represents a single conversation thread. Sessions are automatically generated but can be:
  • Manually set to resume conversations
  • Reset to start new threads
  • Stored in your database for persistence
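For the hashed-email option, a short sketch of deriving a stable entity ID from an address (the helper name is illustrative):

```python
import hashlib

def entity_id_from_email(email: str) -> str:
    # Normalize first so "Alice@Example.com" and "alice@example.com" match,
    # then hash so the raw address never reaches the memory store.
    normalized = email.strip().lower()
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

print(entity_id_from_email("Alice@Example.com") == entity_id_from_email("alice@example.com"))  # True
```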

Best Practices

Always Set Entity ID

Never share entity IDs across users. Each user should have a unique identifier.

Use Meaningful Process IDs

Process IDs should describe the workflow (e.g., “onboarding”, “checkout-flow”).

Create Per-Request Instances

In web applications, create a new Memori instance for each request to avoid state leakage.

Store Session IDs

Save session IDs in your database to allow users to resume conversations.
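A minimal persistence sketch using SQLite (the table name and helper functions are illustrative, not part of Memori):

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # use a file path in production
conn.execute("""
    CREATE TABLE IF NOT EXISTS chat_sessions (
        user_id    TEXT NOT NULL,
        topic      TEXT NOT NULL,
        session_id TEXT NOT NULL,
        PRIMARY KEY (user_id, topic)
    )
""")

def save_session(user_id: str, topic: str, session_id: str) -> None:
    # Upsert so resuming a topic replaces the stale session ID.
    conn.execute(
        "INSERT INTO chat_sessions (user_id, topic, session_id) VALUES (?, ?, ?) "
        "ON CONFLICT (user_id, topic) DO UPDATE SET session_id = excluded.session_id",
        (user_id, topic, session_id),
    )
    conn.commit()

def load_session(user_id: str, topic: str):
    row = conn.execute(
        "SELECT session_id FROM chat_sessions WHERE user_id = ? AND topic = ?",
        (user_id, topic),
    ).fetchone()
    return row[0] if row else None

save_session("user-123", "tokyo-trip", "sess-abc")
print(load_session("user-123", "tokyo-trip"))  # sess-abc
```

On the next request, load the stored session ID and pass it to `mem.set_session()` to resume the thread.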

Next Steps

Streaming Responses

Learn how to use Memori with streaming

Async Operations

Handle async memory operations
