Agents in Iqra AI are highly configurable entities that combine AI intelligence with deterministic control. This guide covers all configuration options available when building your agent.

Agent structure

Each agent consists of several configuration sections:
BusinessAppAgent
├── General          # Name, emoji, description
├── Context          # Business data visibility
├── Personality      # AI character and behavior
├── Utterances       # Opening and closing messages
├── Interruptions    # Turn-taking behavior
├── KnowledgeBase    # RAG configuration
├── Integrations     # LLM, STT, TTS providers
├── Cache            # Performance optimization
└── Settings         # Background audio and misc

General settings

Basic agent metadata displayed in the UI.
Emoji
string
default:"🤖"
Visual identifier for the agent (single emoji)
Name
object
required
Multi-language agent name
{
  "en": "Customer Support Agent",
  "ar": "وكيل دعم العملاء"
}
Description
object
Multi-language description of agent purpose
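Together, these fields might appear in an agent definition like this (values are illustrative, not defaults):
```json
{
  "Emoji": "💬",
  "Name": {
    "en": "Sales Assistant",
    "ar": "مساعد المبيعات"
  },
  "Description": {
    "en": "Answers product questions and tracks orders"
  }
}
```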

Context settings

Control which business data is automatically injected into the agent’s system prompt.
UseBranding
boolean
default:"true"
Include company branding information
UseBranches
boolean
default:"true"
Include branch locations and details
UseServices
boolean
default:"true"
Include service catalog information
UseProducts
boolean
default:"true"
Include product catalog information
Disable context features that aren’t relevant to reduce token usage and improve response latency.
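For example, an agent that only needs branding and service information could disable the rest (a sketch using the flags above; exact nesting depends on your agent definition):
```json
{
  "UseBranding": true,
  "UseBranches": false,
  "UseServices": true,
  "UseProducts": false
}
```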

Personality

Define your agent’s character and behavioral guidelines. All fields support multi-language configuration.
Name
object
The agent’s persona name (e.g., “Sarah the Support Specialist”)
Role
object
What the agent does
{
  "en": "A helpful customer service representative for an e-commerce company"
}
Capabilities
object
List of what the agent can do
{
  "en": [
    "Answer questions about products and pricing",
    "Process returns and exchanges",
    "Schedule delivery appointments"
  ]
}
Ethics
object
Behavioral guidelines and restrictions
{
  "en": [
    "Never share customer data with third parties",
    "Always verify identity before processing refunds",
    "Escalate to human for complex complaints"
  ]
}
Tone
object
Communication style directives
{
  "en": [
    "Professional but friendly",
    "Patient with confused customers",
    "Concise responses under 3 sentences"
  ],
  "ar": [
    "محترم ومهذب",
    "صبور مع العملاء",
    "إجابات مختصرة"
  ]
}
Well-defined personality traits dramatically improve conversation quality. Be specific about tone, capabilities, and ethical boundaries.

Utterances

Configure opening and closing messages.
OpeningType
enum
How the conversation begins:
  • None - Wait for user to speak first
  • Static - Predefined greeting
  • Dynamic - AI-generated based on context
OpeningMessage
object
Multi-language static opening (if OpeningType is Static)
{
  "en": "Hello! Thank you for calling. How may I assist you today?"
}
ClosingMessage
object
Multi-language closing message
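A static-opening configuration combining these fields might look like this (message text is illustrative):
```json
{
  "OpeningType": "Static",
  "OpeningMessage": {
    "en": "Hello! Thank you for calling. How may I assist you today?",
    "ar": "مرحباً! شكراً لاتصالك. كيف يمكنني مساعدتك اليوم؟"
  },
  "ClosingMessage": {
    "en": "Thank you for calling. Have a great day!"
  }
}
```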

Interruptions

Configure turn-taking and barge-in behavior. See Interruptions for detailed configuration.
UseTurnByTurnMode
boolean
default:"false"
Enable strict turn-taking (no barge-in allowed)
TurnEnd
object
required
Configuration for detecting when user has finished speaking
PauseTrigger
object
Optional: Pause agent speech when user starts talking
Verification
object
Optional: Use LLM to verify if interruption is intentional

Knowledge base

Configure RAG (Retrieval-Augmented Generation) for your agent.
Enabled
boolean
default:"false"
Enable knowledge base integration
SearchStrategy
enum
When to retrieve knowledge:
  • OnEveryQuery - Search on every user message
  • OnDemand - Only when AI requests it via tool
  • Hybrid - Combination approach
TopK
integer
default:"5"
Number of relevant chunks to retrieve
ScoreThreshold
number
Minimum similarity score (0.0 - 1.0)
Refinement
object
Optional: Use LLM to refine and summarize retrieved chunks
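Putting these fields together, a basic knowledge base configuration might look like the following (the threshold value is illustrative; tune it for your data):
```json
{
  "Enabled": true,
  "SearchStrategy": "OnEveryQuery",
  "TopK": 5,
  "ScoreThreshold": 0.75
}
```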

Integrations

Connect your AI service providers. Iqra AI follows a “Bring Your Own Model” architecture.
LLM
object
required
Language model configuration
Supported providers:
  • OpenAI (GPT-4, GPT-3.5)
  • Azure OpenAI
  • Anthropic (Claude)
  • Google (Gemini)
  • Groq
  • Custom endpoints
STT
object
required
Speech-to-Text configuration
Supported providers:
  • Deepgram
  • Azure Speech
  • Google Speech
  • AssemblyAI
TTS
object
required
Text-to-Speech configuration
Supported providers:
  • ElevenLabs
  • Azure Speech
  • Google TTS
  • OpenAI TTS
  • PlayHT
Different languages may require different STT/TTS providers for optimal quality. Configure language-specific integrations for best results.

Cache

Optimize performance with intelligent caching.

Audio caching

AutoCacheAudio.Enabled
boolean
default:"false"
Automatically cache repeated TTS outputs
AutoCacheAudio.MinRepetitions
integer
default:"3"
Cache after N identical generations

Embeddings caching

AutoCacheEmbeddings.Enabled
boolean
default:"false"
Cache vector embeddings for knowledge base
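With both caches enabled, using the field names above, the configuration might look like this (a sketch; your agent schema may nest these differently):
```json
{
  "AutoCacheAudio": {
    "Enabled": true,
    "MinRepetitions": 3
  },
  "AutoCacheEmbeddings": {
    "Enabled": true
  }
}
```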
Enable audio caching for static responses like greetings and confirmations to reduce latency and API costs.

Settings

Background audio

S3 link to background music/ambience file
BackgroundAudioVolume
integer
Volume level (0-100)

References

Agents can be deployed to multiple channels:
  • InboundRoutingReferences - Phone numbers for SIP inbound calls
  • TelephonyCampaignReferences - Outbound calling campaigns
  • WebCampaignReferences - WebRTC/WebSocket deployments
These are managed automatically when you assign agents to routes and campaigns.

Configuration best practices

1. Start minimal
Begin with basic personality and required integrations. Add complexity as needed.
2. Test personality changes
Small changes in tone or ethics can dramatically affect behavior. Test thoroughly.
3. Match integrations to language
Use native-language models when possible (e.g., Azure for Arabic, Deepgram for English).
4. Monitor costs
Knowledge base and LLM verification add API calls. Balance quality with budget.
5. Version control descriptions
Keep agent descriptions updated as capabilities evolve.

Next steps

  • Visual IDE - Build conversation scripts visually
  • Interruptions - Configure advanced turn-taking
  • Integrations - Connect AI service providers
