The RAG-Powered Voice Agent: How Retell AI Elevates Knowledge Retrieval

Picture this: A client hands over 2,847 PDFs: product manuals, compliance docs, service policies, the works. Their call centre staff are drowning.

Every day, human agents waste time searching for the right file, asking colleagues for help, decoding legal jargon, then promising to call back... after they’ve figured it out. Hours wasted every day that you, as a business owner, are paying for.

That’s not customer service. That’s delay.

At Waboom.ai, we use Retell AI’s Knowledge Base with RAG (Retrieval-Augmented Generation) to build voice agents that do the heavy lifting instantly. No delays. No confusion.

Smarter agents, faster resolutions, and happier teams who don’t need to dig through documents just to help a customer.

What Is RAG (Retrieval-Augmented Generation)?

RAG combines the best of both worlds: the vast knowledge of stored documents with the conversational abilities of large language models. Instead of trying to cram everything into the LLM's context window, RAG:

  1. Retrieves relevant information from your knowledge base

  2. Augments the LLM prompt with this specific context

  3. Generates responses based on both the conversation and retrieved knowledge

It's like having a research assistant who instantly finds the exact information your AI agent needs to answer any question.
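The three steps above can be sketched in a few lines of Python. The keyword-overlap scoring here is a toy stand-in for the vector search a production system uses, and the final LLM call is left as a stub; this is an illustration of the retrieve-augment-generate pattern, not Retell's implementation:

```python
# Minimal RAG sketch: retrieve -> augment -> (generate).
# Keyword overlap stands in for real vector similarity search.

def retrieve(query: str, docs: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by how many query words they share (toy scoring)."""
    q_words = set(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:top_k]

def augment(query: str, context: list[str]) -> str:
    """Build the LLM prompt: retrieved chunks first, then the question."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Use this context to answer:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Refunds on electronics are accepted within 30 days of purchase.",
    "Shipping takes 3-5 business days within the country.",
    "Warranty claims require the original receipt.",
]
query = "What is the refund window for electronics?"
# The augmented prompt would then be sent to the LLM to generate a reply.
prompt = augment(query, retrieve("refund electronics", docs))
print(prompt)
```

The point of the pattern: the model never needs the whole document set in its context window, only the handful of chunks relevant to this turn of the conversation.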


The Retrieval Process

During conversations, Retell automatically:

  1. Analyzes the conversation context (not just the current question)

  2. Searches the vector database for relevant chunks

  3. Injects retrieved content under "## Related Knowledge Base Contexts"

  4. Generates contextually aware responses

The beauty? No prompt engineering required. It just works.
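As an illustration of step 3, here is roughly what that injection looks like. The header string comes from Retell's documentation quoted above; the function itself is a simplified sketch, not Retell's actual code:

```python
# Sketch of appending retrieved chunks to an agent prompt under the
# "## Related Knowledge Base Contexts" header described in the docs.

def inject_context(system_prompt: str, chunks: list[str]) -> str:
    if not chunks:
        return system_prompt  # nothing retrieved: prompt is unchanged
    section = "\n\n## Related Knowledge Base Contexts\n" + "\n\n".join(chunks)
    return system_prompt + section

base = "You are a support agent for Acme Ltd."  # hypothetical prompt
augmented = inject_context(
    base, ["Warranty: 12 months. Section 4.2 covers screen defects."]
)
print(augmented)
```

Because the retrieval layer appends this section on every turn, your own system prompt never has to mention the knowledge base at all.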

Handling Massive Document Collections

We've successfully deployed agents with knowledge bases containing:

  • 3,000+ PDF documents for a manufacturing client

  • 500+ policy documents for an insurance company

  • 1,200+ product manuals for a tech support operation

Before Knowledge Base: "Sorry, I don't have that information. Let me transfer you."

After Knowledge Base: "According to your warranty policy, that's covered under section 4.2. Here's exactly what we can do..."

Real-Time Information Updates

With auto-refreshing enabled, knowledge bases update every 24 hours, so your agents always have the latest information without manual updates.

Working Around Retell AI’s Knowledge Base Limits

Retell AI’s Knowledge Base is powerful, but like any system, it has limits:

  • 500 URLs maximum

  • 25 files (50MB each)

  • 50 text snippets

For teams with thousands of documents or frequently updated content, this can be a blocker. But we’ve solved this at scale—without sacrificing performance.

How We Solve It

Retell’s documentation offers a clear workaround:

“You can create multiple knowledge bases to overcome these limits. An agent can have more than one knowledge base linked to it.”

We take that further by designing a structured strategy:

Smart Use Cases for Multiple Knowledge Bases

  • Content segmentation
    Split by department (e.g. sales, support, compliance) or function (e.g. installation, returns, FAQs).

  • File type organisation
    Separate web content, PDFs, and documentation into grouped KBs.

  • Update frequency
    Keep static files in one KB, and high-change content (like pricing sheets or promo terms) in another.

  • Access control
    Create different knowledge bases based on security or sensitivity.

Our Implementation Strategy

  • Build multiple themed knowledge bases around how your business actually works

  • Link all relevant knowledge bases to a single agent

  • During live calls, the agent queries across all linked knowledge bases automatically

  • No need to change your prompts—the retrieval layer handles it behind the scenes
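A rough sketch of that fan-out, with a hypothetical `KnowledgeBase` class standing in for Retell's managed stores and toy word-overlap scoring standing in for vector search. On Retell itself this querying happens automatically for every knowledge base linked to the agent:

```python
# Hypothetical sketch of querying multiple themed knowledge bases and
# merging the ranked results. Not Retell's API: KnowledgeBase and its
# scoring are illustrative stand-ins.

from dataclasses import dataclass

@dataclass
class KnowledgeBase:
    name: str
    chunks: list[str]

    def search(self, query: str) -> list[tuple[float, str]]:
        """Score each chunk by word overlap with the query (toy ranking)."""
        q = set(query.lower().split())
        return [(len(q & set(c.lower().split())) / max(len(q), 1), c)
                for c in self.chunks]

def query_all(kbs: list[KnowledgeBase], query: str, top_k: int = 3) -> list[str]:
    """Fan the query out across every linked KB and merge by score."""
    hits = [hit for kb in kbs for hit in kb.search(query)]
    hits.sort(key=lambda h: h[0], reverse=True)
    return [chunk for score, chunk in hits[:top_k] if score > 0]

sales = KnowledgeBase("sales", ["Pricing starts at $99 per month."])
support = KnowledgeBase("support",
                        ["Reset the router by holding the power button for 10 seconds."])
print(query_all([sales, support], "how do I reset the router"))
```

The design point: segmenting content by theme doesn't fragment the agent's knowledge, because every query is answered from the merged results of all linked stores.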

1. Document Structure Optimisation

```markdown
# GOOD: Clear, specific structure

## Refund Policy - Electronics

**Timeframe:** 30 days from purchase
**Condition:** Original packaging required
**Process:** Contact support at 1-800-REFUNDS
```


2. Chunking Strategy

Group related information together:

```markdown
## iPhone 15 Pro Troubleshooting

**Issue:** Screen won't turn on
**Causes:** Dead battery, hardware failure, software crash

**Solutions:**
1. Charge for 30 minutes
2. Force restart (press Volume Up, then Volume Down, then hold the Side button)
3. Contact Apple Support if issue persists
```

Important: Latency Optimisation

Retell's knowledge base retrieval adds ~100ms latency. To minimise impact:

  • Limit knowledge bases per agent (use only essential ones)

  • Optimise document length (shorter chunks retrieve faster)

  • Use specific, targeted content (avoid generic information)
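As one way to keep chunks short, a simple splitter can cap each chunk at a word budget while respecting paragraph boundaries. The 80-word budget is an illustrative choice, not a Retell requirement:

```python
# Split a long document into short chunks on paragraph boundaries,
# capping each chunk at a word budget. Note: a single paragraph longer
# than the budget still becomes one oversized chunk.

def chunk_document(text: str, max_words: int = 80) -> list[str]:
    chunks: list[str] = []
    current: list[str] = []
    for para in text.split("\n\n"):
        words_so_far = sum(len(p.split()) for p in current)
        if current and words_so_far + len(para.split()) > max_words:
            chunks.append("\n\n".join(current))  # budget hit: close chunk
            current = []
        current.append(para)
    if current:
        chunks.append("\n\n".join(current))
    return chunks

# Four 52-word paragraphs: each exceeds the budget when paired, so each
# lands in its own chunk.
doc = "\n\n".join(f"Paragraph {i} " + "word " * 50 for i in range(4))
pieces = chunk_document(doc)
```

Shorter, self-contained chunks both retrieve faster and give the LLM cleaner context, which is why the structure and chunking practices above pay off twice.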

Manufacturing Client: 3,000 PDFs Across 30 Years of Operations

The Problem:
Their technical support team was drowning in documents. With over 3,000 equipment manuals spanning three decades, agents often paused calls to dig through folders, ask colleagues, or delay responses entirely. Time was lost. Customers got frustrated.

The Solution:
We organised the documents into six specialist knowledge bases, each mapped to specific equipment categories. These were linked to a single Retell voice agent trained to retrieve the right answer in real time using RAG (Retrieval-Augmented Generation).

The Result:

  • 89% reduction in “I need to look that up” responses

  • Voice and chat agents could resolve complex support questions on the spot

  • No retraining required—just cleaner, faster, smarter calls

The Titan AI Advantage: Not Just for Customers

The same power we give your customers—we give your team.

With our Titan AI dashboard, internal staff can:

  • Search across all your business knowledge using a ChatGPT-style interface

  • Instantly retrieve policy documents, technical specs, manuals, and internal playbooks

  • Export AI-generated answers, emails, or reports—on brand, on message, and fact-checked from your own source material

  • Stop asking “Where’s that PDF?” and start acting faster

It’s private, secure, and built around your actual knowledge base.

Why This Works

We use a hybrid approach:

  • Knowledge Base: For deep, reliable access to complex policies, manuals, warranties, and documentation

  • APIs and Variable Calls: To fetch live customer-specific data (e.g. account status, delivery ETA, contract terms)

Together, this gives your agent a complete brain—static knowledge + live data. That’s the Titan AI model we’ve built for enterprise clients.

Not Buzzwords. Real Impact.

RAG (Retrieval-Augmented Generation) and vector databases aren’t flashy tech terms. They’re how we build agents that feel like seasoned staff—not chatbots guessing their way through calls.

We segment content, optimise structure, and train agents with multiple knowledge bases mapped to real business logic. That’s how we handle:

  • Regulatory questions

  • Technical troubleshooting

  • Policy compliance

  • Multi-brand support

  • High-volume customer ops

The future of voice AI isn’t about pretending to be smarter—it’s about having access to the right knowledge.

You already have the content. We turn it into answers.

Ready to transform your document chaos into real-time intelligence?
