
Initialization Overview

Solace Agent Mesh provides multiple ways to initialize a new project:
  • GUI Mode (Recommended): Browser-based configuration interface
  • Interactive CLI Mode: Step-by-step command-line prompts
  • Non-Interactive Mode: Automated setup with command-line options

GUI Initialization

The easiest way to get started is using the web-based initialization interface:
1. Launch GUI Initialization

From your project directory, run:
sam init --gui
This will start a web server on port 5002 and open your browser automatically.
2. Configure Broker Settings

Choose one of three broker options:
  • Existing Solace Broker: Connect to a running Solace PubSub+ broker
  • New Local Container: Launch a new Solace broker using Docker/Podman
  • Dev Mode: Run all components in a single process (development only)
3. Configure LLM Settings

Provide your LLM configuration:
  • Endpoint URL: API endpoint (e.g., https://api.openai.com/v1)
  • API Key: Your provider’s API key
  • Planning Model: Model for complex reasoning (e.g., openai/gpt-4o)
  • General Model: Model for general tasks (e.g., openai/gpt-3.5-turbo)
4. Configure Orchestrator Agent

Set up your main orchestrator:
  • Agent Name: Name for your orchestrator (default: OrchestratorAgent)
  • Streaming Support: Enable real-time streaming responses
  • Artifact Service: Choose storage backend (filesystem, memory, S3, GCS)
  • Session Service: Configure session persistence (memory, SQL)
5. Optional: Web UI Gateway

Configure the chat interface:
  • Enable/Disable: Add a web UI for interacting with agents
  • Port: Default is 8000
  • Bot Name: Customize the chatbot name
  • Welcome Message: Set initial greeting
6. Review and Create

Review all settings and click “Initialize Project”. The system will:
  • Create directory structure
  • Generate configuration files
  • Create .env file with your settings
  • Set up database files (if using SQL)
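The generated .env collects the settings entered above. A representative sketch with placeholder values — the variable names come from the configuration files shown later on this page, and the SQLite URL is an assumption based on the default data/ layout:

```
# .env (sketch; values are placeholders — verify against your generated file)
NAMESPACE=my_project/
SOLACE_DEV_MODE=false
SOLACE_BROKER_URL=ws://localhost:8008
SOLACE_BROKER_VPN=default
SOLACE_BROKER_USERNAME=default
SOLACE_BROKER_PASSWORD=default
LLM_SERVICE_ENDPOINT=https://api.openai.com/v1
LLM_SERVICE_API_KEY=sk-...
LLM_SERVICE_PLANNING_MODEL_NAME=openai/gpt-4o
LLM_SERVICE_GENERAL_MODEL_NAME=openai/gpt-3.5-turbo
ORCHESTRATOR_DATABASE_URL=sqlite:///data/orchestrator.db
```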

Interactive CLI Initialization

For a traditional command-line experience:
sam init
You’ll be prompted for:
  1. Broker Configuration
    Which broker type do you want to use?
      1) Existing Solace Pub/Sub+ broker
      2) New local Solace broker container (requires Podman or Docker)
      3) Run in 'dev mode' (all-in-one process, not for production)
    
  2. LLM Configuration
    • LLM Service Endpoint URL
    • LLM Service API Key (hidden input)
    • Planning Model Name
    • General Model Name
  3. Project Settings
    • Namespace (default: solace_app/)
    • Agent name
    • Artifact and session service configuration
The interactive mode allows you to review and modify each setting before proceeding.

Non-Interactive (Skip) Mode

For automated deployments or CI/CD pipelines, use skip mode with command-line options:
sam init --skip \
  --broker-type 1 \
  --broker-url ws://your-broker:8008 \
  --broker-vpn your-vpn \
  --broker-username your-user \
  --broker-password your-password \
  --llm-service-endpoint https://api.openai.com/v1 \
  --llm-service-api-key sk-... \
  --llm-service-planning-model-name openai/gpt-4o \
  --llm-service-general-model-name openai/gpt-3.5-turbo \
  --namespace my_project/ \
  --agent-name MyOrchestrator
In skip mode, all required values must be provided via CLI options. Missing values will cause initialization to fail.
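In a CI/CD pipeline it is often cleaner to assemble the skip-mode command from environment variables with explicit fallbacks. A minimal sketch — the variable names (BROKER_TYPE, BROKER_URL, LLM_ENDPOINT, NAMESPACE) are this sketch's own, not sam conventions, and it only prints the command as a dry run; pipe the output to sh to execute:

```shell
#!/bin/sh
set -eu

# Assemble the skip-mode flags from environment variables,
# falling back to the defaults documented below.
build_init_cmd() {
  printf 'sam init --skip'
  printf ' --broker-type %s' "${BROKER_TYPE:-1}"
  printf ' --broker-url %s' "${BROKER_URL:-ws://localhost:8008}"
  printf ' --broker-vpn %s' "${BROKER_VPN:-default}"
  printf ' --llm-service-endpoint %s' "${LLM_ENDPOINT:-https://api.openai.com/v1}"
  printf ' --namespace %s' "${NAMESPACE:-my_project/}"
  printf '\n'
}

# Dry run: print the command rather than executing it.
build_init_cmd
```

Secrets such as --llm-service-api-key and --broker-password are best injected from your CI system's secret store rather than hard-coded in the script.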

Generated Project Structure

After initialization, your project will have the following structure:
my-sam-project/
├── .env                          # Environment variables
├── configs/
│   ├── shared_config.yaml        # Shared configuration (broker, models)
│   ├── logging_config.yaml       # Logging configuration
│   ├── agents/
│   │   └── main_orchestrator.yaml  # Main orchestrator config
│   ├── gateways/
│   │   └── webui.yaml            # Web UI gateway (if enabled)
│   └── services/
│       └── platform.yaml         # Platform service config
├── src/                          # Custom agent code
└── data/                         # SQLite databases (if using SQL)
    ├── orchestrator.db
    └── webui_gateway.db

Configuration Files

shared_config.yaml

Contains shared settings used across all components:
shared_config:
  - broker_connection: &broker_connection
      dev_mode: ${SOLACE_DEV_MODE, false}
      broker_url: ${SOLACE_BROKER_URL}
      broker_username: ${SOLACE_BROKER_USERNAME}
      broker_password: ${SOLACE_BROKER_PASSWORD}
      broker_vpn: ${SOLACE_BROKER_VPN}

  - models:
      planning: &planning_model
        model: ${LLM_SERVICE_PLANNING_MODEL_NAME}
        api_base: ${LLM_SERVICE_ENDPOINT}
        api_key: ${LLM_SERVICE_API_KEY}
        parallel_tool_calls: true

      general: &general_model
        model: ${LLM_SERVICE_GENERAL_MODEL_NAME}
        api_base: ${LLM_SERVICE_ENDPOINT}
        api_key: ${LLM_SERVICE_API_KEY}

main_orchestrator.yaml

Defines the orchestrator agent configuration:
apps:
  - name: orchestrator_app
    app_module: solace_agent_mesh.agent.sac.app
    broker:
      <<: *broker_connection

    app_config:
      namespace: ${NAMESPACE}
      agent_name: "OrchestratorAgent"
      model: *planning_model
      
      artifact_service:
        type: filesystem
        base_path: /tmp/samv2
        artifact_scope: namespace
      
      session_service:
        type: sql
        database_url: ${ORCHESTRATOR_DATABASE_URL}
        default_behavior: PERSISTENT
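For quick local iteration without a database, the same block can point at the in-memory session service instead. A sketch using the field names from the generated file above — note that in-memory sessions are lost when the process stops:

```yaml
      session_service:
        type: memory   # dev only: sessions are not persisted across restarts
```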

CLI Options Reference

Broker Options

| Option | Description | Default |
| --- | --- | --- |
| --broker-type | Broker type: 1/solace, 2/container, 3/dev | 1 |
| --broker-url | Broker WebSocket URL | ws://localhost:8008 |
| --broker-vpn | Message VPN name | default |
| --broker-username | Broker username | default |
| --broker-password | Broker password | default |
| --container-engine | Container engine: docker or podman | Auto-detect |
| --dev-mode | Enable dev mode (shortcut for --broker-type 3) | false |

LLM Options

| Option | Description |
| --- | --- |
| --llm-service-endpoint | LLM API endpoint URL |
| --llm-service-api-key | LLM API key |
| --llm-service-planning-model-name | Model for planning tasks |
| --llm-service-general-model-name | Model for general tasks |

Agent Options

| Option | Description | Default |
| --- | --- | --- |
| --agent-name | Orchestrator agent name | OrchestratorAgent |
| --supports-streaming | Enable streaming | true |
| --artifact-service-type | Storage: memory, filesystem, s3, gcs | filesystem |
| --artifact-service-base-path | Filesystem storage path | /tmp/samv2 |
| --artifact-service-scope | Scope: namespace, app, custom | namespace |
| --session-service-type | Session storage: memory, sql, vertex_rag | sql |
| --session-service-behavior | Behavior: PERSISTENT, RUN_BASED | PERSISTENT |

Web UI Options

| Option | Description | Default |
| --- | --- | --- |
| --add-webui-gateway | Add Web UI gateway | true |
| --webui-fastapi-host | Web UI host | 127.0.0.1 |
| --webui-fastapi-port | Web UI HTTP port | 8000 |
| --webui-fastapi-https-port | Web UI HTTPS port | 8443 |
| --webui-frontend-bot-name | Bot display name | Solace Agent Mesh |
| --webui-frontend-welcome-message | Welcome message | How can I assist you today? |

Advanced Configuration

S3 Artifact Storage

To use S3 for artifact storage:
sam init --gui
# Or via CLI:
sam init --skip \
  --artifact-service-type s3 \
  --artifact-service-bucket-name my-bucket \
  --artifact-service-region us-east-1 \
  --artifact-service-endpoint-url https://s3.amazonaws.com  # Optional for AWS
Then set environment variables:
AWS_ACCESS_KEY_ID=your-key
AWS_SECRET_ACCESS_KEY=your-secret
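The resulting artifact_service block in main_orchestrator.yaml should look roughly like the following. The key names here are inferred from the CLI flags above, not taken from the product's schema, so verify them against your generated file:

```yaml
      artifact_service:
        type: s3
        bucket_name: my-bucket          # from --artifact-service-bucket-name
        region: us-east-1               # from --artifact-service-region
        endpoint_url: https://s3.amazonaws.com  # optional for AWS
        artifact_scope: namespace
```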

PostgreSQL for Sessions

To use PostgreSQL instead of SQLite:
sam init
# When prompted for database:
# Choose "postgresql" and provide connection string
# Format: postgresql://user:password@host:5432/dbname
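In the generated .env this becomes the orchestrator's database URL. A sketch with placeholder credentials, using the ORCHESTRATOR_DATABASE_URL variable referenced by main_orchestrator.yaml above:

```
# .env — point the session store at PostgreSQL instead of SQLite
ORCHESTRATOR_DATABASE_URL=postgresql://user:password@host:5432/dbname
```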

OAuth Authentication for LLM

For LLM providers requiring OAuth 2.0:
# In shared_config.yaml, reference an OAuth-enabled model anchor:
model: *oauth_planning_model
Note that the oauth_planning_model anchor is not part of the default shared_config.yaml shown above; it must be defined there alongside the planning and general model anchors before it can be referenced.

Post-Initialization

After initialization completes:
1. Review Generated Files

Check the .env file and YAML configurations to ensure all settings are correct.
2. Test the Configuration

Run your agent mesh:
sam run
3. Access the Web UI

Open http://localhost:8000 in your browser to interact with your agents.
