From Documents to AI Agents in Minutes

Build powerful RAG agents without writing code. Connect your documents, configure your workflow, deploy in minutes.

Built for production stacks

Next.js 16
LangGraph
Qdrant
PostgreSQL
OpenAI
Clerk
P95 ~12s workflow execution
SSE streaming for real-time responses
Rate limiting with pooled connections

Features

Everything you need to build AI agents

Ingest, orchestrate, and connect to your tools. Built for teams shipping real systems.

One-Click Ingestion

Drop any document - PDF, DOCX, images - and watch it become searchable instantly

No-Code Workflows

Build complex AI workflows with drag-and-drop. No coding required.

MCP Integration

First MCP-native RAG platform. Connect Claude and other AI tools seamlessly.

BYOK Model

Bring your own API keys. Control costs, keep your data private.

How It Works

1. Upload

Drop your documents - PDFs, DOCX, images, spreadsheets

2. Configure

Build your workflow with drag-and-drop. Add AI steps, filters, actions.

3. Deploy

Your AI agent is ready. Chat with it, embed it, or call via API.

Workflows

Pre-built templates to get started fast

Customize or build from scratch. Every workflow runs on LangGraph with full parallel execution support.

FAQ Bot

Auto-answer customer questions from your knowledge base

Webhook → RAG Search → LLM Chat → Response

Support Agent

Triage tickets with context from docs and past conversations

Trigger → Query Expand → Hybrid Search → Agent → Webhook

Webhook Triage

Route incoming requests based on content classification

Webhook → LLM Structured → Switch → HTTP Actions

Content QA

Validate content against your brand guidelines automatically

Schedule → RAG Retrieve → LLM Judge → Store → Notify
Trigger → (RAG ∥ Web) → LLM

Parallel execution built-in

Fan-out to multiple nodes simultaneously, then merge results. RAG search and web fetch run in parallel, reducing latency. Powered by LangGraph's Send API.
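The fan-out/fan-in shape is easy to sketch in plain Python. This stand-in uses threads rather than LangGraph's Send API, and both node functions are placeholders, not the platform's actual nodes:

```python
from concurrent.futures import ThreadPoolExecutor

def rag_search(query):
    # Placeholder for the RAG node: would query the vector store.
    return [f"doc about {query}"]

def web_fetch(query):
    # Placeholder for the Web node: would fetch live pages.
    return [f"page about {query}"]

def fan_out_merge(query):
    # Run both branches concurrently, then merge their results:
    # the same fan-out/fan-in pattern LangGraph expresses with Send.
    with ThreadPoolExecutor() as pool:
        rag = pool.submit(rag_search, query)
        web = pool.submit(web_fetch, query)
    return rag.result() + web.result()
```

Because both branches run concurrently, total latency is roughly the slower branch rather than the sum of both.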

RAG Pipeline

RAG you can trust

Production-grade retrieval with the controls you need and the observability you deserve.

Quality

Hybrid + Rerank + Evaluation

  • Vector + BM25 hybrid search
  • Jina or LLM-based reranking
  • Configurable score thresholds
  • Built for evaluation workflows

Control

topK, threshold, configs

  • Tune topK and score threshold
  • Enable/disable query techniques
  • Per-collection pipeline config
  • A/B test retrieval strategies
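As an illustration of those controls (the parameter names here are generic, not the platform's exact config schema), applying topK and a score threshold to a list of scored hits looks like:

```python
def apply_retrieval_controls(hits, top_k=5, score_threshold=0.3):
    # hits: list of (doc_id, score) pairs, higher score = more relevant.
    # Drop anything under the threshold, then keep the top_k best.
    kept = [(doc, score) for doc, score in hits if score >= score_threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept[:top_k]
```

The threshold trades recall for precision; topK bounds how much context reaches the LLM.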

Observability

Events, logs, health

  • Real-time SSE execution events
  • Per-node timing and outputs
  • Execution history and replay
  • Health checks and metrics
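SSE execution events arrive as plain-text blocks separated by blank lines. A minimal client-side parser (the event names in the test are illustrative, not the platform's actual event schema):

```python
def parse_sse(raw):
    # Split a raw SSE payload into (event, data) pairs.
    # Blocks are separated by blank lines; events without an
    # explicit "event:" field default to "message" per the SSE spec.
    events = []
    for block in raw.strip().split("\n\n"):
        name, data = "message", []
        for line in block.split("\n"):
            if line.startswith("event:"):
                name = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data.append(line[len("data:"):].strip())
        events.append((name, "\n".join(data)))
    return events
```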

Pricing

Simple, transparent pricing

Start free. Scale as you grow. No hidden fees.

Free

For exploration and small projects

$0/month
  • 1 Collection
  • 50 Documents
  • 1 Workflow
  • 100 Executions/month

Starter

For individuals and small teams

$29/month
  • 3 Collections
  • 500 Documents
  • 5 Workflows
  • 1,000 Executions/month
  • 1 MCP Server
  • Webhook Triggers
Pro (Most Popular)

For growing teams and production use

$79/month
  • 10 Collections
  • 2,000 Documents
  • 20 Workflows
  • 5,000 Executions/month
  • 5 MCP Servers
  • All Triggers
  • API Access
  • 3 Team Members

Team

For teams with advanced needs

$199/month
  • 25 Collections
  • 10,000 Documents
  • 50 Workflows
  • 25,000 Executions/month
  • 10 MCP Servers
  • All Triggers
  • API Access
  • 10 Team Members
  • Priority Support

Enterprise

For large organizations with custom needs

$499/month
  • Unlimited Collections
  • Unlimited Documents
  • Unlimited Workflows
  • Unlimited Executions
  • Unlimited MCP Servers
  • Custom Integrations
  • Dedicated Support
  • SLA Guarantee

All plans include a 14-day free trial. No credit card required to start.

Built with Trust & Transparency

We prioritize your security, control, and confidence in our platform

Packt Publishing AI Author

Built by an author with proven AI expertise

Your Data Stays Yours

BYOK model - bring your own keys for full control

MCP-Native RAG Platform

First RAG platform built for Model Context Protocol

Built on Open Standards

Leveraging LangGraph, OpenAI, and modern tooling

FAQ

Frequently asked questions

Everything you need to know about IgnitionAI.

What file formats does IgnitionAI support?

IgnitionAI supports PDF, DOCX, PPTX, CSV, Excel (XLSX), JSON, JSONL, Parquet files, HuggingFace datasets, individual web pages, and full website crawls. We handle parsing, chunking, and embedding automatically.
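Chunking happens automatically, but the basic idea is simple. Here is a minimal fixed-size chunker with overlap; the sizes are illustrative, not the platform's defaults:

```python
def chunk_text(text, size=200, overlap=40):
    # Split text into fixed-size character windows that overlap,
    # so content cut at one boundary still appears whole in a
    # neighboring chunk before each piece is embedded.
    step = size - overlap
    return [text[i:i + size] for i in range(0, len(text), step)]
```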

How does hybrid search work?

Hybrid search combines vector embeddings (semantic similarity) with BM25 keyword matching. You can tune the alpha parameter to blend both approaches, or use Reciprocal Rank Fusion (RRF) for result merging. This gives you the best of both worlds: semantic understanding and exact keyword matching.
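Reciprocal Rank Fusion itself is only a few lines: each document scores 1/(k + rank) in every ranked list it appears in, and k = 60 is the conventional constant. A sketch:

```python
def rrf_merge(vector_ranked, bm25_ranked, k=60):
    # Score each doc id by summing 1 / (k + rank) over both
    # ranked lists, then return ids ordered by fused score.
    scores = {}
    for ranked in (vector_ranked, bm25_ranked):
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)
```

Documents that rank well in both lists rise to the top, without needing the two score scales to be comparable.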

Can I choose which AI model to use?

Yes! We currently support OpenAI (GPT-4, GPT-4o, GPT-4o-mini) and Anthropic (Claude 3.5 Sonnet, Claude 3 Opus). Enterprise plans can request custom model integrations.

Is my data secure?

Absolutely. All data is encrypted at rest and in transit. We use organization-scoped multi-tenancy, meaning your data is completely isolated from other customers. Authentication is handled by Clerk with JWT verification on every request.

How can workflows be triggered?

Workflows can be triggered manually, via webhook (HTTP POST), or on a schedule (cron expressions with timezone support). Starter plans include manual and webhook triggers; Pro and above add scheduled triggers.
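Triggering a workflow over HTTP is an ordinary POST. The endpoint path and payload shape below are assumptions for illustration, not the documented API:

```python
import json

def build_trigger_request(workflow_id, inputs):
    # Assemble the URL and JSON body for a webhook trigger.
    # The /api/workflows/{id}/trigger path is hypothetical.
    url = f"https://app.example.com/api/workflows/{workflow_id}/trigger"
    body = json.dumps({"inputs": inputs})
    return url, body
```

Any system that can issue an HTTP POST (a form backend, a CI job, another workflow) can then start the run.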

Can workflows run steps in parallel?

Our workflow engine (built on LangGraph) supports fan-out/fan-in patterns. You can execute multiple branches in parallel (e.g., RAG search + web search simultaneously), then merge results before continuing. This significantly reduces latency for complex pipelines.

Can I self-host IgnitionAI?

Not currently. We're a managed SaaS platform. However, Enterprise customers can discuss dedicated infrastructure options. Contact sales for more information.

What support is included?

Free plans get community support via GitHub. Starter plans include email support with 48h response time. Pro plans get priority email support with 24h response. Enterprise includes dedicated support with SLA guarantees.

Ready to build your AI agent?

Start free. No credit card required.

Join developers building the next generation of AI agents
