
Overview

LLM Magic supports multiple AI providers including OpenAI, Anthropic, Google Gemini, Mistral, OpenRouter, and TogetherAI. All models are accessed through a unified interface.

Listing Available Models

Get all registered models:
```php
use Mateffy\Magic;

$models = Magic::models();
// Returns: Collection<string, string>
// Example: ['openai/gpt-4o' => 'OpenAI GPT-4o', ...]
```
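Because the result is a standard Laravel Collection keyed by model identifier, you can filter or transform it with the usual Collection methods. A small sketch, assuming the `['id' => 'Label']` shape shown above:

```php
use Mateffy\Magic;

// Keep only Google models by filtering on the identifier key
// (Collection::filter passes the value first, then the key)
$googleModels = Magic::models()
    ->filter(fn (string $label, string $id) => str_starts_with($id, 'google/'));

// Or collect just the model identifiers as a plain array
$ids = Magic::models()->keys()->all();
```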

Default Model

Get or set the default model:
```php
// Get default model name
$modelName = Magic::defaultModelName();
// Returns: 'google/gemini-2.0-flash-lite' (from config)

// Get default model label
$label = Magic::defaultModelLabel();
// Returns: 'Google Gemini 2.0 Flash Lite'

// Get default model instance
$model = Magic::defaultModel();
// Returns: LLM instance
```
Configure the default model in config/llm-magic.php:

```php
return [
    'llm' => [
        'default' => env('LLM_MAGIC_DEFAULT_MODEL', 'google/gemini-2.0-flash-lite'),
    ],
];
```
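Since the config reads from `env()`, the default can also be set per environment without touching the config file:

```shell
# .env — override the package-wide default model
LLM_MAGIC_DEFAULT_MODEL=openai/gpt-4o
```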

OpenAI Models

```php
use Mateffy\Magic;
use Mateffy\Magic\Models\OpenAI;

// Using model string
Magic::chat()
    ->model('openai/gpt-4o')
    ->prompt('Hello!')
    ->stream();

// Using static factory
Magic::chat()
    ->model(OpenAI::gpt_4o())
    ->prompt('Hello!')
    ->stream();
```

API Key: Set OPENAI_API_KEY in your .env file
Privacy: Data may be used for model training and abuse prevention

Anthropic Models

```php
use Mateffy\Magic\Models\Anthropic;

Magic::chat()
    ->model('anthropic/claude-3-5-sonnet-latest')
    ->prompt('Explain quantum computing')
    ->stream();

// Or use the static factory
Magic::chat()
    ->model(Anthropic::sonnet_3_5())
    ->stream();
```

API Key: Set ANTHROPIC_API_KEY in your .env file
Pricing: See getModelCost() method for per-token costs

Google Gemini Models

```php
use Mateffy\Magic\Models\Gemini;

Magic::chat()
    ->model('google/gemini-2.0-flash-lite')
    ->prompt('Analyze this image')
    ->stream();

// Factory methods
Magic::chat()
    ->model(Gemini::flash_2_lite())
    ->stream();
```

API Key: Set GOOGLE_API_KEY in your .env file
Free Tier: Available with rate limits for testing

Mistral Models

```php
use Mateffy\Magic\Models\Mistral;

Magic::chat()
    ->model('mistral/mistral-large-latest')
    ->stream();
```

Mistral models are limited to 8 images per call. See the Mistral Vision docs.

OpenRouter Models

Access multiple providers through OpenRouter:
```php
use Mateffy\Magic\Models\OpenRouter;

Magic::chat()
    ->model('openrouter/x-ai/grok-beta')
    ->stream();

// Direct model access
Magic::chat()
    ->model(OpenRouter::grok())
    ->stream();

// Use any OpenRouter model
Magic::chat()
    ->model(OpenRouter::model('anthropic/claude-3-opus'))
    ->stream();
```

API Key: Set OPENROUTER_API_KEY in your .env file
Flexibility: Access any model from OpenRouter's catalog

TogetherAI Models

Access open-source models through TogetherAI:
```php
use Mateffy\Magic\Models\TogetherAI;

// Using model string
Magic::chat()
    ->model('togetherai/meta-llama/Meta-Llama-3.1-8B-Instruct-Turbo')
    ->stream();

// Using constants
Magic::chat()
    ->model(new TogetherAI(TogetherAI::META_LLAMA_3_1_8B_INSTRUCT_TURBO))
    ->stream();
```

API Key: Set TOGETHERAI_API_KEY in your .env file
Features: Open-source models, vision support, competitive pricing

Model Options

Customize model behavior with options:
```php
use Mateffy\Magic\Models\OpenAI;
use Mateffy\Magic\Models\Options\ChatGptOptions;

$options = new ChatGptOptions(
    temperature: 0.7,
    maxTokens: 2000,
    topP: 0.9,
);

Magic::chat()
    ->model(new OpenAI('gpt-4o', $options))
    ->stream();
```

Streaming vs Send

Choose between streaming and batch responses:
```php
// Streaming - tokens arrive as they're generated; the complete
// MessageCollection is returned once the response finishes
$messages = Magic::chat()
    ->model('openai/gpt-4o')
    ->prompt('Write a story')
    ->stream();  // Returns MessageCollection

// Send - wait for the complete response
$messages = Magic::chat()
    ->model('openai/gpt-4o')
    ->prompt('Write a story')
    ->send();  // Returns MessageCollection
```

Model Costs

Check pricing for a model:
```php
use Mateffy\Magic\Models\OpenAI;

$model = OpenAI::gpt_4o();
$cost = $model->getModelCost();

// Returns ModelCost instance with:
// - inputCentsPer1K
// - outputCentsPer1K
```
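From those two per-1K rates you can sketch a rough per-request estimate. This assumes the ModelCost fields are readable as public properties, which the snippet above does not confirm:

```php
use Mateffy\Magic\Models\OpenAI;

$cost = OpenAI::gpt_4o()->getModelCost();

// Hypothetical token counts for one request
$inputTokens = 1_200;
$outputTokens = 350;

// Cost in cents = (tokens / 1000) * cents-per-1K, summed for both directions
// (assumes inputCentsPer1K / outputCentsPer1K are public properties)
$cents = ($inputTokens / 1000) * $cost->inputCentsPer1K
    + ($outputTokens / 1000) * $cost->outputCentsPer1K;
```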

Creating Model Instances

Multiple ways to instantiate models:
```php
// 1. String notation (recommended)
Magic::chat()->model('openai/gpt-4o');

// 2. Static factory method
Magic::chat()->model(OpenAI::gpt_4o());

// 3. Direct instantiation
Magic::chat()->model(new OpenAI('gpt-4o'));

// 4. From string (internal)
use Mateffy\Magic\Models\ElElEm;
$model = ElElEm::fromString('openai/gpt-4o');
```

Model Capabilities

Check what features a model supports:
```php
use Mateffy\Magic\Models\Mistral;

$model = new Mistral(Mistral::PIXTRAL_LARGE);

if ($model instanceof HasMaximumImageCount) {
    $maxImages = $model->getMaximumImageCount();
    // Returns: 8 for Mistral models
}
```
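One practical use of this check is splitting attachments into batches the model will accept. A sketch, where `$images` is an illustrative plain array of image inputs rather than a type taken from the library:

```php
// Guard against exceeding a model's per-call image limit (sketch;
// $images and the batching strategy are illustrative assumptions)
if ($model instanceof HasMaximumImageCount) {
    $max = $model->getMaximumImageCount();

    if (count($images) > $max) {
        // Send one request per chunk instead of a single oversized call
        $batches = array_chunk($images, $max);
    }
}
```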

Provider Comparison

OpenAI

Strengths: General purpose, reliable
Models: GPT-4o, o1, o3-mini
Best for: Production applications

Anthropic

Strengths: Long context, reasoning
Models: Claude 3.5 Sonnet, Opus
Best for: Analysis, coding

Google

Strengths: Multimodal, fast
Models: Gemini 2.0 Flash, 2.5 Pro
Best for: Vision, speed

Mistral

Strengths: Cost-effective, European
Models: Mistral Large, Codestral
Best for: Code, efficiency

TogetherAI

Strengths: Open-source models, vision
Models: Llama 3.1/3.2, Qwen, Mixtral
Best for: OSS models, research
