
Overview

Agentic Hydration is an architectural pattern developed at Quail to address common challenges in LLM-based applications. This two-step approach separates AI decision-making from data processing, resulting in faster, more accurate, and cost-effective AI component generation.
This pattern is central to how Quail generates charts, visualizations, and UI components from user queries and data.

The Problem

While developing Quail’s AI-powered data visualization tools, we encountered several challenges common to LLM-based applications:

High Token Usage

Sending large datasets to LLMs consumes many tokens, leading to increased costs

Slow Response Times

Processing large datasets takes more time, affecting user experience

Data Hallucinations

LLMs may misinterpret or misrepresent data, leading to incorrect outputs

What is Agentic Hydration?

Agentic Hydration is a two-step pattern for component generation:
1. Generate Scaffold

Leverage the structured properties of your data to reduce the information the model must directly process. Create a scaffold that describes the component’s structure and behavior using only minimal metadata.

2. Hydrate with Data

Populate the generated scaffold with the actual data after AI generation is complete.
The intuition behind this approach is to allow the LLM to remain efficient, focused, and less prone to errors by minimizing its exposure to raw data. This significantly improves both the speed and reliability of the final output.

Example: Chart Generation in Quail

Let’s see how Quail uses this pattern to generate charts based on user queries and database results.

Traditional Approach

Typically, you’d prompt an LLM to generate the entire chart component at once:
Traditional approach (not used in Quail)
import { generateText } from "ai";
import { openai } from "@ai-sdk/openai";

export async function generateChart(results: any[], userQuery: string) {
  // The entire dataset is serialized into the prompt
  const { text: chart } = await generateText({
    model: openai("gpt-4o"),
    system: "You are a data visualization expert.",
    prompt: `Generate a chart for User Query: "${userQuery}" 
    with data points: ${JSON.stringify(results)}. 
    Return JSX code for visualization.`,
  });

  return chart;
}
Problems with this approach:
  • High token usage: Sending large datasets to the LLM consumes many tokens
  • Increased likelihood of hallucinations: The LLM may misinterpret or misrepresent the data
  • Increased latency and cost: Processing large datasets takes more time and resources

Agentic Hydration Approach (Used in Quail)

Instead, Quail splits the process into two distinct steps:

Step 1: Chart Configuration Generation

First, generate only the metadata needed for visualization decisions:
Step 1: Generate configuration with minimal data
import { generateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { z } from "zod";

// Define a schema to constrain and structure the LLM's output
export const configSchema = z.object({
  type: z.enum(["bar", "line", "area", "pie"]).describe("Type of chart"),
  xKey: z.string().describe("Key for x-axis"),
  yKeys: z.array(z.string()).describe("Keys for y-axis"),
  description: z.string().describe("Brief description of the chart"),
  title: z.string().describe("Title of the chart"),
});

export async function generateChartConfig(results: any[], userQuery: string) {
  // Note: We only send a SAMPLE of the data, not the entire dataset
  const { object: config } = await generateObject({
    model: openai("gpt-4o"),
    system: "You are a data visualization expert.",
    prompt: `Generate minimal chart config for "${userQuery}" 
      with columns: ${JSON.stringify(results[0])}`,
    schema: configSchema,
  });

  return config;
}
Notice how we only send results[0] (a single row) to the LLM, not the entire dataset. This dramatically reduces token usage.
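You can shrink the prompt even further by sending derived column metadata instead of a raw sample row. The helper below is a sketch, not part of Quail’s code (summarizeColumns is a hypothetical name): it maps each column to its name and a coarse JavaScript type.

```typescript
// Hypothetical helper: describe columns by name and coarse type so the prompt
// carries schema information rather than raw cell values.
type ColumnSummary = { name: string; type: string };

export function summarizeColumns(
  sampleRow: Record<string, unknown>
): ColumnSummary[] {
  return Object.entries(sampleRow).map(([name, value]) => ({
    name,
    // typeof covers the common cases; null and arrays are special-cased
    type:
      value === null ? "null" : Array.isArray(value) ? "array" : typeof value,
  }));
}
```

For example, summarizeColumns({ month: "Jan", revenue: 1200 }) yields the names and types of both columns but none of their values, and its output can stand in for JSON.stringify(results[0]) in the prompt above.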

Step 2: Chart Rendering with Data Hydration

Then, use the configuration to render the chart with the complete dataset:
Step 2: Hydrate the scaffold with full data
import { LineChart, Line, XAxis, YAxis, Tooltip } from "recharts";
import { generateChartConfig } from "./actions";

// For brevity, this example always renders a LineChart; a full implementation
// would branch on config.type ("bar", "line", "area", "pie")
export default async function DynamicChart({ results, userQuery }) {
  // Get the chart configuration with minimal token usage
  const config = await generateChartConfig(results, userQuery);

  // Hydrate the chart by injecting the full dataset
  return (
    <div className="chart-container">
      <h2>{config.title}</h2>
      <h3>{config.description}</h3>
      <LineChart width={500} height={300} data={results}>
        <XAxis dataKey={config.xKey} />
        <YAxis />
        <Tooltip />
        {config.yKeys.map((key, index) => (
          <Line
            key={key}
            dataKey={key}
            stroke={`hsl(var(--chart-${index + 1}))`}
          />
        ))}
      </LineChart>
    </div>
  );
}
The full dataset is injected directly into the React component, bypassing the LLM entirely. This ensures data accuracy and eliminates hallucinations.

Generalized Workflow

Here’s how to implement the Agentic Hydration pattern in your own applications:
1. Identify Minimal Metadata

Determine the smallest subset of metadata required for the LLM to accurately generate a UI scaffold. For charts, this might be column names and data types. For forms, field names and validation rules.

2. Generate Scaffold

Use the metadata in a prompt to instruct the LLM to create the initial structure of your component. Use structured output (like Zod schemas) to constrain the LLM’s response.

3. Retrieve Data

Independently query or retrieve the specific data needed for your component. This happens outside of the LLM context.

4. Hydrate and Render UI

Merge the scaffold with your data and render the component. The LLM never sees the full dataset.
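The four steps above can be sketched as a single generic function. This is an illustrative skeleton with hypothetical names, not Quail’s implementation: generateScaffold stands in for the LLM call, and retrieveData runs entirely outside the LLM context.

```typescript
// Generic sketch of the Agentic Hydration workflow (hypothetical names).
export interface Hydrated<C, D> {
  config: C; // the LLM-generated scaffold
  data: D[]; // the full dataset, never seen by the LLM
}

export async function agenticHydration<C, D>(
  metadata: unknown, // step 1: minimal metadata only
  generateScaffold: (meta: unknown) => Promise<C>, // step 2: LLM-backed in practice
  retrieveData: () => Promise<D[]> // step 3: independent data retrieval
): Promise<Hydrated<C, D>> {
  // Scaffold generation and data retrieval are independent, so they can run in parallel
  const [config, data] = await Promise.all([
    generateScaffold(metadata),
    retrieveData(),
  ]);
  return { config, data }; // step 4: merge and hand off to rendering
}
```

A rendering layer then consumes the merged result, much as DynamicChart does in the chart example.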

Benefits & Trade-offs

Benefits

We found that implementing the Agentic Hydration pattern provided several key advantages:

Reduced Token Usage

Token usage no longer scales linearly with dataset size. It remains relatively constant regardless of data volume.

Hallucination-Free Data

Injecting data after LLM generation ensures data accuracy and eliminates data-related hallucinations.

Decreased Latency

Faster generation times improve responsiveness and user experience.

Considerations

Despite these benefits, Agentic Hydration isn’t a catch-all solution:
Context Requirements: Some scenarios inherently require the model to have full data context upfront, limiting the effectiveness of this approach.
Increased Complexity: Separating generation and hydration adds extra steps to the development workflow, introducing a trade-off between implementation complexity and efficiency gains.
For scenarios that deal with relatively small token inputs or outputs, this pattern may not be worth the added complexity.

Use Cases in Quail

The Agentic Hydration pattern is used throughout Quail:

Chart Generation

Generate chart configurations from query results without sending full datasets to the LLM

Dashboard Layouts

Create dashboard layouts based on widget metadata, then hydrate with real data

SQL Query Assistance

Generate query structures based on schema metadata, not table contents

Natural Language Responses

Structure responses based on query intent, then inject actual results

Implementation Tips

Always use structured output formats (like Zod schemas with Vercel AI SDK) to constrain the LLM’s response. This makes it easier to validate and use the generated scaffold.
import { z } from "zod";
import { generateObject } from "ai";

const schema = z.object({
  field: z.string(),
  // ... other fields
});

const result = await generateObject({
  model: yourModel,
  schema,
  prompt: yourPrompt,
});
Only send the absolute minimum data needed for the LLM to make decisions. For database results, send column names and types, not row data. For schemas, send table names and relationships, not full table contents.
Always validate the generated scaffold before hydration. Use TypeScript types and runtime validation to ensure the scaffold is valid.
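One possible validation step, assuming the chart config shape from the example above (validateScaffold is a hypothetical helper), is to check that every key the scaffold references actually exists in the data before hydrating:

```typescript
// Hypothetical pre-hydration check: reject scaffolds that reference
// columns the dataset does not contain.
interface ChartConfig {
  type: "bar" | "line" | "area" | "pie";
  xKey: string;
  yKeys: string[];
  title: string;
  description: string;
}

export function validateScaffold(
  config: ChartConfig,
  sampleRow: Record<string, unknown>
): string[] {
  const errors: string[] = [];
  if (!(config.xKey in sampleRow)) errors.push(`unknown xKey: ${config.xKey}`);
  for (const key of config.yKeys) {
    if (!(key in sampleRow)) errors.push(`unknown yKey: ${key}`);
  }
  return errors; // an empty array means the scaffold is safe to hydrate
}
```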
If the same query pattern is used repeatedly, consider caching the generated scaffolds to avoid redundant LLM calls.
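A minimal sketch of such a cache, assuming scaffolds are keyed by a stable query pattern (getScaffold and the cache are hypothetical names):

```typescript
// Hypothetical in-memory scaffold cache keyed by query pattern.
// A production version might add TTLs or a persistent store.
const scaffoldCache = new Map<string, unknown>();

export async function getScaffold<T>(
  cacheKey: string,
  generate: () => Promise<T> // the LLM-backed generation call
): Promise<T> {
  if (scaffoldCache.has(cacheKey)) {
    return scaffoldCache.get(cacheKey) as T; // cache hit: no LLM call
  }
  const scaffold = await generate();
  scaffoldCache.set(cacheKey, scaffold);
  return scaffold;
}
```

Note that keying on the raw user query only helps when queries repeat verbatim; a normalized query pattern makes a better cache key.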

Performance Metrics

In Quail’s production environment, the Agentic Hydration pattern achieved:
Metric             | Traditional Approach | Agentic Hydration | Improvement
Avg. Token Usage   | ~3,500 tokens        | ~150 tokens       | 95% reduction
Avg. Response Time | 4.2 seconds          | 0.8 seconds       | 81% faster
Data Accuracy      | 87%                  | 100%              | Perfect accuracy
Cost per Query     | $0.012               | $0.0005           | 96% savings
These metrics are based on real production data from Quail with average dataset sizes of 1,000 rows.

Conclusion

Agentic Hydration significantly improved how we build AI-generated UI components at Quail, enhancing our speed, accuracy, and efficiency. We encourage other developers to adapt and experiment with this pattern in their own LLM-based applications.

Learn More

Explore how Quail integrates AI throughout the platform
