Overview

The OpenAIService provides a unified interface for all OpenAI API interactions including chat completions, embeddings generation, function calling with tools, and audio transcription via Whisper.

Class Structure

Constructor

public function __construct(
    $apiKey,
    $model,
    $embeddingModel,
    Logger $logger
)
  • apiKey (string, required): OpenAI API key for authentication
  • model (string, required): Chat completion model (e.g., gpt-4o, gpt-4o-mini)
  • embeddingModel (string, required): Embedding model (e.g., text-embedding-3-small, text-embedding-ada-002)
  • logger (Logger, required): Logger instance for tracking API calls and errors

Instantiation Example

webhook.php
if ($credentialService && $credentialService->hasOpenAICredentials()) {
    $oaiCreds = $credentialService->getOpenAICredentials();
    $openai = new OpenAIService(
        $oaiCreds['api_key'],
        $oaiCreds['model'],
        $oaiCreds['embedding_model'],
        $logger
    );
    $openaiTemperature = $oaiCreds['temperature'] ?? 0.7;
    $openaiMaxTokens = $oaiCreds['max_tokens'] ?? 500;
} else {
    $openai = new OpenAIService(
        Config::get('openai.api_key'),
        Config::get('openai.model'),
        Config::get('openai.embedding_model'),
        $logger
    );
}

Embeddings Methods

createEmbedding()

Generates a vector embedding for a single text input.
public function createEmbedding($text)
  • text (string, required): Text to convert into an embedding vector
Returns: Array of floats (typically 1536 dimensions for text-embedding-3-small)

Implementation

try {
    $response = $this->client->post('embeddings', [
        'json' => [
            'model' => $this->embeddingModel,
            'input' => $text
        ]
    ]);

    $data = json_decode($response->getBody()->getContents(), true);
    
    if (isset($data['data'][0]['embedding'])) {
        return $data['data'][0]['embedding'];
    }

    throw new \RuntimeException('Invalid embedding response');
} catch (\GuzzleHttp\Exception\ClientException $e) {
    $response = $e->getResponse();
    $body = json_decode($response->getBody()->getContents(), true);
    
    if ($response->getStatusCode() === 429 || 
        (isset($body['error']['code']) && $body['error']['code'] === 'insufficient_quota')) {
        throw new \RuntimeException('INSUFFICIENT_FUNDS');
    }
    
    throw $e;
}

Usage Example

$userQuery = "How do I reset my password?";
$embedding = $openai->createEmbedding($userQuery);
// Returns: [0.0234, -0.0156, 0.0891, ... ] (1536 values)
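Retrieved embeddings are typically scored against stored chunk vectors with cosine similarity. A minimal standalone helper (cosineSimilarity is illustrative, not a method of OpenAIService; the sample vectors are toy 3-dimensional values, not real embeddings):

```php
/**
 * Cosine similarity between two equal-length vectors (e.g. embeddings).
 * Higher values mean the underlying texts are more semantically similar.
 */
function cosineSimilarity(array $a, array $b): float
{
    $dot = 0.0;
    $normA = 0.0;
    $normB = 0.0;
    foreach ($a as $i => $v) {
        $dot   += $v * $b[$i];
        $normA += $v * $v;
        $normB += $b[$i] * $b[$i];
    }
    return $dot / (sqrt($normA) * sqrt($normB));
}

// Toy vectors; real text-embedding-3-small vectors have 1536 dimensions.
$score = cosineSimilarity([0.2, 0.1, 0.4], [0.2, 0.1, 0.4]);
// $score is 1.0: identical vectors are maximally similar.
```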

createBatchEmbeddings()

Generates embeddings for multiple texts, handling errors gracefully.
public function createBatchEmbeddings(array $texts)
  • texts (array, required): Array of text strings to embed
Returns: Array of embeddings (null for failed items)
public function createBatchEmbeddings(array $texts)
{
    $embeddings = [];
    
    foreach ($texts as $index => $text) {
        try {
            $embeddings[$index] = $this->createEmbedding($text);
        } catch (\Exception $e) {
            $this->logger->error('Batch embedding error for index ' . $index);
            $embeddings[$index] = null;
        }
    }

    return $embeddings;
}
This method processes texts sequentially, making one API call per item. For large batches, note that the embeddings endpoint also accepts an array of inputs in a single request.
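Because failed items come back as null, callers should filter the result before storing vectors. A small sketch with simulated output (the vectors are made up):

```php
// Simulated createBatchEmbeddings() result: the item at index 1 failed.
$embeddings = [
    0 => [0.02, -0.01, 0.09],
    1 => null,
    2 => [0.05, 0.03, -0.07],
];

// array_filter keeps the original indices, so each surviving
// embedding can still be matched back to its source text.
$valid = array_filter($embeddings, fn ($e) => $e !== null);
// $valid contains the entries at indices 0 and 2.
```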

Chat Completion Methods

generateResponse()

Generates a chat completion with optional context and conversation history.
public function generateResponse(
    $prompt,
    $context = '',
    $systemPrompt = null,
    $temperature = 0.7,
    $maxTokens = 500,
    $conversationHistory = [],
    $modelOverride = null
)
  • prompt (string, required): User's message or query
  • context (string, default ''): Additional context (e.g., from RAG retrieval)
  • systemPrompt (string, default null): Custom system prompt (defaults to the built-in Spanish assistant prompt)
  • temperature (float, default 0.7): Sampling temperature (0.0-2.0)
  • maxTokens (int, default 500): Maximum tokens in the response
  • conversationHistory (array, default []): Previous messages for context
  • modelOverride (string, default null): Override the default model for this request
Returns: String containing the AI’s response

Message Construction

$systemMessage = $systemPrompt ?? 
    'Eres un asistente virtual útil y amigable. ' .
    'Responde de manera clara y concisa basándote en el contexto proporcionado.';

$messages = [
    ['role' => 'system', 'content' => $systemMessage]
];

if (!empty($context)) {
    $messages[] = [
        'role' => 'system',
        'content' => "Contexto relevante:\n" . $context
    ];
}

if (!empty($conversationHistory)) {
    foreach ($conversationHistory as $historyMsg) {
        $role = $historyMsg['sender'] === 'bot' ? 'assistant' : 'user';
        $messages[] = [
            'role' => $role,
            'content' => $historyMsg['message_text']
        ];
    }
}

$messages[] = ['role' => 'user', 'content' => $prompt];
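The loop above assumes history rows with sender and message_text keys. A standalone sketch of the same role mapping (the sample rows are hypothetical):

```php
// Hypothetical conversation rows in the shape generateResponse() expects.
$conversationHistory = [
    ['sender' => 'user', 'message_text' => 'Hola, ¿tienen turnos mañana?'],
    ['sender' => 'bot',  'message_text' => 'Sí, hay lugar por la tarde.'],
];

// Same role mapping as the loop above: 'bot' becomes 'assistant',
// anything else becomes 'user'.
$messages = array_map(
    fn ($m) => [
        'role'    => $m['sender'] === 'bot' ? 'assistant' : 'user',
        'content' => $m['message_text'],
    ],
    $conversationHistory
);
// $messages[0]['role'] is 'user'; $messages[1]['role'] is 'assistant'.
```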

Usage Example

webhook.php
// With RAG context
$response = $openai->generateResponse(
    $userMessage,
    $context,
    $systemPrompt,
    $temperature,
    $maxTokens,
    $conversationHistory
);

// Fallback without context
$fallbackResponse = $openai->generateResponse(
    $messageData['text'],
    '',
    $systemPrompt,
    $openaiTemperature,
    $openaiMaxTokens,
    $conversationHistory
);

generateResponseWithTools()

Generates a chat completion with OpenAI function calling capabilities.
public function generateResponseWithTools(
    $prompt,
    $context = '',
    $systemPrompt = null,
    array $tools = [],
    $temperature = 0.7,
    $maxTokens = 500,
    $conversationHistory = []
)
  • tools (array, required): Array of function definitions in OpenAI tools format
Returns: Array containing the full message object (may include tool_calls)

Implementation

$requestBody = [
    'model' => $this->model,
    'messages' => $messages,
    'temperature' => $temperature,
    'max_tokens' => $maxTokens
];

if (!empty($tools)) {
    $requestBody['tools'] = $tools;
    $requestBody['tool_choice'] = 'auto';
}

$response = $this->client->post('chat/completions', [
    'json' => $requestBody
]);

$data = json_decode($response->getBody()->getContents(), true);

if (isset($data['choices'][0]['message'])) {
    return $data['choices'][0]['message'];
}

throw new \RuntimeException('Invalid chat completion response');

Usage with Calendar Tools

$tools = $openai->getCalendarTools();
$message = $openai->generateResponseWithTools(
    $userMessage,
    $context,
    $systemPrompt,
    $tools,
    0.7,
    500,
    $conversationHistory
);

if (isset($message['tool_calls'])) {
    foreach ($message['tool_calls'] as $toolCall) {
        $functionName = $toolCall['function']['name'];
        $arguments = json_decode($toolCall['function']['arguments'], true);
        
        // Handle: schedule_appointment, check_availability, etc.
    }
}
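Each entry in tool_calls carries its arguments as a JSON string, which must be decoded before dispatching to a handler. A standalone sketch (the call id and argument values are made up):

```php
// Shape of a single entry from $message['tool_calls'].
$toolCall = [
    'id' => 'call_abc123', // hypothetical id
    'type' => 'function',
    'function' => [
        'name' => 'schedule_appointment',
        'arguments' => '{"date_preference":"mañana","is_confirmed":false}',
    ],
];

// Arguments arrive as a JSON string, not a PHP array, so decode first.
$functionName = $toolCall['function']['name'];
$arguments = json_decode($toolCall['function']['arguments'], true);
// $arguments['date_preference'] is 'mañana'; $arguments['is_confirmed'] is false.
```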

Calendar Tools

getCalendarTools()

Returns predefined function definitions for calendar operations.
public function getCalendarTools()
Returns: Array of tool definitions. Encoded as JSON, the schedule_appointment definition looks like:
{
  "type": "function",
  "function": {
    "name": "schedule_appointment",
    "description": "El usuario quiere agendar, reservar, programar o sacar una cita...",
    "parameters": {
      "type": "object",
      "properties": {
        "date_preference": {
          "type": "string",
          "description": "Fecha o referencia temporal mencionada"
        },
        "time_preference": {
          "type": "string",
          "description": "Hora o rango preferido"
        },
        "service_type": {
          "type": "string",
          "description": "Tipo de servicio"
        },
        "is_confirmed": {
          "type": "boolean",
          "description": "true solo si confirmó explícitamente"
        }
      },
      "required": ["is_confirmed"]
    }
  }
}
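Since getCalendarTools() returns PHP arrays, the JSON above corresponds to a PHP structure like the following (a direct transliteration of the definition shown):

```php
// The schedule_appointment tool definition as a PHP array.
$scheduleTool = [
    'type' => 'function',
    'function' => [
        'name' => 'schedule_appointment',
        'description' => 'El usuario quiere agendar, reservar, programar o sacar una cita...',
        'parameters' => [
            'type' => 'object',
            'properties' => [
                'date_preference' => [
                    'type' => 'string',
                    'description' => 'Fecha o referencia temporal mencionada',
                ],
                'time_preference' => [
                    'type' => 'string',
                    'description' => 'Hora o rango preferido',
                ],
                'service_type' => [
                    'type' => 'string',
                    'description' => 'Tipo de servicio',
                ],
                'is_confirmed' => [
                    'type' => 'boolean',
                    'description' => 'true solo si confirmó explícitamente',
                ],
            ],
            'required' => ['is_confirmed'],
        ],
    ],
];
```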

Audio Transcription

transcribeAudio()

Transcribes audio files using OpenAI’s Whisper model.
public function transcribeAudio($audioContent, $filename = 'audio.ogg')
  • audioContent (string, required): Binary audio file content
  • filename (string, default 'audio.ogg'): Filename for the audio (affects format detection)
Returns: String containing transcribed text

Implementation

try {
    $tempFile = sys_get_temp_dir() . '/' . uniqid() . '_' . $filename;
    file_put_contents($tempFile, $audioContent);

    $response = $this->client->post('audio/transcriptions', [
        'headers' => [
            'Authorization' => 'Bearer ' . $this->apiKey
        ],
        'multipart' => [
            [
                'name' => 'file',
                'contents' => fopen($tempFile, 'r'),
                'filename' => $filename
            ],
            [
                'name' => 'model',
                'contents' => 'whisper-1'
            ],
            [
                'name' => 'language',
                'contents' => 'es'
            ]
        ]
    ]);

    $data = json_decode($response->getBody()->getContents(), true);

    if (isset($data['text'])) {
        $this->logger->info('Whisper: Audio transcribed', [
            'text_length' => strlen($data['text'])
        ]);
        return $data['text'];
    }

    throw new \RuntimeException('Invalid transcription response');
} catch (\Exception $e) {
    $this->logger->error('Whisper Transcription Error: ' . $e->getMessage());
    throw $e;
} finally {
    // Remove the temp file whether the request succeeded or failed.
    if (isset($tempFile) && file_exists($tempFile)) {
        unlink($tempFile);
    }
}

Usage Example

webhook.php
if ($messageData['type'] === 'audio' && isset($messageData['audio_id'])) {
    $audioContent = $whatsapp->downloadMedia($messageData['audio_id']);
    
    $transcription = $openai->transcribeAudio($audioContent, 'audio.ogg');
    
    $messageData['text'] = '[Audio] ' . $transcription;
    $logger->info('Audio transcribed', ['text' => $transcription]);
}
The service hard-codes Spanish ('es') as the transcription language. Modify the language field in the multipart request for other languages.

Error Handling

Quota Exceeded Detection

All methods detect and handle OpenAI quota/billing errors:
catch (\GuzzleHttp\Exception\ClientException $e) {
    $response = $e->getResponse();
    $body = json_decode($response->getBody()->getContents(), true);
    
    if ($response->getStatusCode() === 429 || 
        (isset($body['error']['code']) && 
         $body['error']['code'] === 'insufficient_quota')) {
        $this->logger->error('OpenAI Insufficient Funds');
        throw new \RuntimeException('INSUFFICIENT_FUNDS');
    }
    
    throw $e;
}

Webhook Handling

webhook.php
function handleInsufficientFunds($db, $e) {
    if (strpos($e->getMessage(), 'INSUFFICIENT_FUNDS') !== false) {
        $db->query(
            "INSERT INTO settings (setting_key, setting_value) 
             VALUES ('openai_status', 'insufficient_funds') 
             ON DUPLICATE KEY UPDATE setting_value = 'insufficient_funds'",
            []
        );
        $db->query(
            "INSERT INTO settings (setting_key, setting_value) 
             VALUES ('openai_error_timestamp', NOW()) 
             ON DUPLICATE KEY UPDATE setting_value = NOW()",
            []
        );
        return true;
    }
    return false;
}

try {
    $result = $rag->generateResponse(...);
} catch (\Exception $e) {
    handleInsufficientFunds($db, $e);
    $logger->error('RAG Error: ' . $e->getMessage());
}

HTTP Client Configuration

The service uses GuzzleHTTP with custom configuration:
$this->client = new Client([
    'base_uri' => 'https://api.openai.com/v1/',
    'headers' => [
        'Authorization' => 'Bearer ' . $this->apiKey,
        'Content-Type' => 'application/json'
    ],
    'timeout' => 30,
    'verify' => false  // SSL verification disabled
]);
SSL verification is disabled ('verify' => false), which leaves API traffic open to man-in-the-middle attacks. Enable it in production:
'verify' => true

Configuration

Configure OpenAI settings in config/config.php:
config/config.php
return [
    'openai' => [
        'api_key' => env('OPENAI_API_KEY'),
        'model' => env('OPENAI_MODEL', 'gpt-4o-mini'),
        'embedding_model' => env('OPENAI_EMBEDDING_MODEL', 'text-embedding-3-small'),
        'temperature' => 0.7,
        'max_tokens' => 500
    ]
];

Best Practices

Choose a chat model based on the quality/cost tradeoff:

  • gpt-4o: Best quality, slower, more expensive
  • gpt-4o-mini: Balanced performance and cost (recommended)
  • gpt-3.5-turbo: Fastest and cheapest, lower quality
// Adjust maxTokens based on response type
$shortAnswer = 150;  // For quick replies
$mediumAnswer = 500; // Default
$longAnswer = 1000;  // For detailed explanations
Implement exponential backoff for rate limit errors (429):
$retries = 0;
while ($retries < 3) {
    try {
        return $openai->createEmbedding($text);
    } catch (\Exception $e) {
        if (strpos($e->getMessage(), '429') !== false) {
            sleep(pow(2, $retries)); // wait 1s, 2s, 4s
            $retries++;
        } else {
            throw $e;
        }
    }
}
throw new \RuntimeException('Embedding failed after 3 retries');
Provide clear instructions in system prompts:
$systemPrompt = 
    "You are a helpful customer service assistant. " .
    "Always respond in Spanish. " .
    "Base your answers on the provided context. " .
    "If you don't know the answer, say so politely.";

RAG Service

Uses OpenAI for embeddings and response generation

Vector Search

Stores and searches embeddings created by OpenAI

Next Steps

Calendar Integration

Learn about function calling with calendar tools

Audio Messages

Configure Whisper transcription
