BAML Functions

BAML functions are the fundamental building blocks of BAML. They define AI-powered functions that call LLMs and return structured, type-safe outputs.

What BAML Functions Do

Think of BAML functions as type-safe wrappers around LLM API calls. When you write a BAML function, the BAML compiler generates real code (Python, TypeScript, Ruby, Go, etc.) that:
  1. Takes your typed input parameters
  2. Renders the prompt using Jinja templating
  3. Sends a request to the specified LLM provider
  4. Parses the response into your defined output type
  5. Returns a validated, type-safe object
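The generated code is conceptually similar to this heavily simplified Python sketch. The real baml_client handles provider APIs, Jinja rendering, retries, and schema-aware parsing; here `render_prompt` does naive substitution and `call_llm` is a stand-in stub:

```python
import json
from dataclasses import dataclass

@dataclass
class Resume:
    name: str
    skills: list[str]

def render_prompt(template: str, **params: str) -> str:
    # Step 2: substitute parameters into the prompt (real BAML uses Jinja)
    for key, value in params.items():
        template = template.replace("{{ " + key + " }}", value)
    return template

def call_llm(prompt: str) -> str:
    # Step 3: stand-in for the real request to the LLM provider
    return '{"name": "John Doe", "skills": ["Python", "Rust"]}'

def extract_resume(resume_text: str) -> Resume:
    prompt = render_prompt("Parse this resume:\n{{ resume_text }}",
                           resume_text=resume_text)
    raw = call_llm(prompt)        # step 3: send the request
    data = json.loads(raw)        # step 4: parse the response
    return Resume(**data)         # step 5: validated, typed object

resume = extract_resume("John Doe\nPython, Rust")
print(resume.name)  # John Doe
```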

Anatomy of a BAML Function

Here’s a complete example showing all the key parts:
class Resume {
  name string
  education Education[] @description("Extract in the same order listed")
  skills string[] @description("Only include programming languages")
}

class Education {
  school string
  degree string
  year int
}

function ExtractResume(resume_text: string) -> Resume {
  client GPT4o
  
  prompt #"
    Parse the following resume and return a structured representation.

    Resume:
    ---
    {{ resume_text }}
    ---

    {# special macro to print the output schema + instructions #}
    {{ ctx.output_format }}

    JSON:
  "#
}
A BAML function consists of:
  1. Function signature: Name, input parameters, and return type
  2. Client declaration: Which LLM provider and model to use
  3. Prompt template: Using Jinja syntax to construct the prompt

Function Signature

function ExtractResume(resume_text: string) -> Resume
  • Name: ExtractResume - becomes a callable function in your code
  • Parameters: Typed inputs like resume_text: string
  • Return type: The structured output type (Resume class)
Parameters can be:
  • Primitive types: string, int, float, bool
  • Custom classes: user: User
  • Arrays: messages: Message[]
  • Multimodal types: image, audio, pdf, video
  • Enums: category: Category
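A sketch combining several of these parameter kinds in one signature (the Ticket class and Priority enum are illustrative, not part of BAML itself):

```baml
class Ticket {
  subject string
  body string
}

enum Priority {
  Low
  High
}

function TriageTicket(ticket: Ticket, tags: string[], screenshot: image?) -> Priority {
  client GPT4o
  prompt #"
    Triage this ticket: {{ ticket.subject }}
    {{ ctx.output_format }}
  "#
}
```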

Client Declaration

client GPT4o
Specifies which LLM to use. Can be:
  • A named client defined elsewhere: client GPT4o
  • A shorthand provider/model: client "openai/gpt-4o"
  • Multiple clients with strategies (fallback, retry, etc.)
See Clients for more details.
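A named client like GPT4o is defined once and reused across functions. A minimal sketch (see the Clients page for the full set of options):

```baml
client<llm> GPT4o {
  provider openai
  options {
    model "gpt-4o"
    api_key env.OPENAI_API_KEY
  }
}
```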

Prompt Template

prompt #"
  Parse the following resume...
  {{ resume_text }}
  {{ ctx.output_format }}
"#
The prompt uses Jinja templating syntax:
  • {{ variable }} - Insert variables
  • {% for ... %} - Loops
  • {% if ... %} - Conditionals
  • {{ ctx.output_format }} - Special macro that injects the output schema
See Prompts for the complete syntax.
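For example, a prompt that loops over a list and branches on an input flag (a sketch; `items` and `verbose` are illustrative parameters):

```baml
function SummarizeItems(items: string[], verbose: bool) -> string {
  client GPT4o
  prompt #"
    Summarize the following items:
    {% for item in items %}
    - {{ item }}
    {% endfor %}
    {% if verbose %}
    Include one sentence of detail per item.
    {% endif %}
  "#
}
```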

Using Generated Functions

BAML generates a baml_client in your target language. Here’s how to call the function from Python:
from baml_client import b
from baml_client.types import Resume

def main():
    resume_text = """John Doe
    Python, Rust, C++
    University of California, Berkeley
    B.S. Computer Science, 2020
    """

    # Call the function - returns a validated Resume object
    resume = b.ExtractResume(resume_text)
    
    # Fully type-checked!
    assert isinstance(resume, Resume)
    print(f"Name: {resume.name}")
    print(f"Skills: {', '.join(resume.skills)}")
Never modify code inside baml_client - it’s auto-generated. Changes will be overwritten.

Viewing the Rendered Prompt

BAML’s VSCode extension provides full transparency. You can see:
  1. Prompt Preview: The fully rendered prompt after Jinja processing
  2. Raw cURL Request: The exact API request sent to the LLM
This makes debugging and optimization much easier than working with hidden prompts.

Advanced Function Features

Streaming

Every BAML function automatically gets a streaming variant:
stream = b.stream.ExtractResume(resume_text)

# Get partial results as they arrive
for partial in stream:
    print(f"Parsed {len(partial.skills or [])} skills so far...")

# Get the final validated result
final = stream.get_final_response()
See the Streaming guide for details.

Multiple Return Types

Functions can return unions for tool/function calling patterns:
class ReplyTool {
  response string
}

class StopTool {
  action "stop" @description("End the conversation")
}

function ChatAgent(message: Message[], tone: "happy" | "sad") -> StopTool | ReplyTool {
  client "openai/gpt-4o-mini"
  prompt #"
    Be a {{ tone }} bot.
    {{ ctx.output_format }}
    
    {% for m in message %}
    {{ _.role(m.role) }}
    {{ m.content }}
    {% endfor %}
  "#
}

Complex Inputs

Functions can accept rich input types:
class Message {
  role string
  content string
}

// Category values are illustrative; define the enum the function returns
enum Category {
  Refund
  Support
  Feedback
}

function ClassifyConversation(messages: Message[]) -> Category {
  client GPT4o
  prompt #"
    Classify this conversation:
    
    {% for msg in messages %}
    {{ _.role(msg.role) }}
    {{ msg.content }}
    {% endfor %}
    
    {{ ctx.output_format }}
  "#
}

Design Philosophy

BAML functions follow these principles:
  1. Separation of concerns: AI logic (BAML files) is separate from application logic
  2. Type safety: All inputs and outputs are validated at compile time
  3. Transparency: You always see the full prompt and API request
  4. Language agnostic: Write once in BAML, use in any supported language

Next Steps

Types and Schemas

Learn about BAML’s type system

Prompts

Master Jinja templating in prompts

Clients

Configure LLM providers

Testing

Test your functions
