VERIFICATION-AS-A-SERVICE

The Distiller

Catch AI hallucinations before your users do. One API call. Sub-10-second verification. 90%+ F1 score.

88.7% Catch Rate
92.2% Precision
90.4% F1 Score
7.4s Avg Latency
Starter
$49/mo
$0.049 per verification
  • 1,000 verifications/month
  • REST API access
  • Claim-level verdicts (TRUE / FALSE / UNVERIFIABLE)
  • Source citations included
  • 7.4s average response time
  • Community support
Start Free Trial
Enterprise
Custom
Volume pricing from $0.008/call
  • 50,000+ verifications/month
  • Everything in Professional
  • Dedicated infrastructure
  • Custom model routing
  • Primary source priority (.gov, .edu)
  • SOC 2 compliance + BAA available
  • Dedicated account manager
Contact Sales
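A verification call might look like the sketch below. The endpoint path, request fields, and response shape are illustrative assumptions, not documented API surface; the response here is canned rather than fetched, so the snippet runs offline.

```python
import json

# Hypothetical request body -- "text" and "max_claims" are assumed
# field names, not the documented API.
request_body = {
    "text": "The Infinity disco burned down in 1982.",
    "max_claims": 10,
}

# Canned response in the shape this page describes: per-claim verdicts
# (TRUE / FALSE / UNVERIFIABLE), corrections, citations, and a trust score.
response = json.loads("""
{
  "trust_score": 0.33,
  "claims": [
    {"claim": "The Infinity disco burned down in 1982.",
     "verdict": "FALSE",
     "correction": "The fire was in 1979.",
     "citations": ["https://example.com/infinity-fire"]}
  ]
}
""")

# Claim-level verdicts let you gate output: block or annotate anything FALSE.
flagged = [c for c in response["claims"] if c["verdict"] == "FALSE"]
print(len(flagged), response["trust_score"])
```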

Real Unit Economics

We believe in transparent pricing. Here's what each verification actually costs us.

Cost Per Verification
$0.014
GPT-4o audit ($0.008) + Brave search ($0.006) + GPT-4o-mini triage ($0.0001)
Margin at Professional Tier
30%
$0.020 price − $0.014 cost = $0.006 margin per call
Break-Even
714 calls/mo
At Starter pricing ($0.049/call), break-even on infrastructure at 714 monthly verifications
Enterprise at Scale
43% margin
At $0.008/call with Gemini stack ($0.0096 → optimized to $0.0046), 50K calls = $170/mo profit
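The Professional-tier arithmetic above checks out in a few lines (figures taken directly from the cards on this page):

```python
# Per-call cost components quoted above (GPT-4o stack)
audit  = 0.008    # GPT-4o audit
search = 0.006    # Brave search
triage = 0.0001   # GPT-4o-mini triage
cost = round(audit + search + triage, 3)    # $0.0141, quoted as $0.014

price = 0.020                               # Professional-tier price per call
margin = round(price - cost, 3)             # $0.006 margin per call
print(cost, margin, f"{margin / price:.0%}")
```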

Built For Teams That Can't Afford Hallucinations

⚖️

Legal Tech

Verify AI-generated case summaries, contract clauses, and regulatory citations before they reach a courtroom.

🏥

Healthcare AI

Catch hallucinated drug interactions, dosage claims, and clinical guidelines in patient-facing AI tools.

📰

Journalism & Media

Fact-check AI-assisted articles against live web sources. Flag unverifiable claims before publication.

🏛️

Government & Policy

Verify AI-generated briefings, data citations, and policy summaries against authoritative .gov sources.

🎓

Education

Audit AI tutor responses and generated study materials for factual accuracy before student delivery.

💰

Financial Services

Verify AI-generated market analysis, earnings data, and regulatory filings against real-time sources.

Benchmarked, Not Hypothetical

100 questions from OpenAI's SimpleQA dataset. Real API calls. Real results.

📊 Confusion Matrix

              Pipeline: FLAGGED     Pipeline: PASSED
AI Wrong      🎯 47 caught          💀 6 missed
AI Correct    🚨 4 false alarms     ✅ 43 clean
V1 Results (with Negative Evidence + Numerical Hard-Match fixes)
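The headline metrics follow directly from this matrix:

```python
tp, fn = 47, 6    # AI wrong:   caught vs. missed
fp, tn = 4, 43    # AI correct: false alarms vs. clean passes

recall = tp / (tp + fn)        # catch rate: share of wrong answers flagged
precision = tp / (tp + fp)     # share of flags that were real errors
f1 = 2 * precision * recall / (precision + recall)
print(f"{recall:.1%} {precision:.1%} {f1:.1%}")
```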

🔬 Example Catch

GPT-4o said:

"The Infinity disco burned down in 1982, founded by Steve Rubell at 254 W 54th St."

Distiller found:

Fire was 1979. Founded by Maurice Brahms. Located at 683 Broadway.

Trust Score: 33% — 3 claims false, 3 true, 3 unverifiable
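One plausible reading of that score, assuming trust = TRUE claims ÷ total claims (the aggregation rule is an assumption here, inferred from the numbers shown):

```python
# 3 false, 3 true, 3 unverifiable -- the verdicts from the example above
verdicts = ["FALSE"] * 3 + ["TRUE"] * 3 + ["UNVERIFIABLE"] * 3
trust = verdicts.count("TRUE") / len(verdicts)
print(f"{trust:.0%}")   # 3 of 9 claims verified true
```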

How It Works

One API call. Four stages. Verifiable citations for every claim.

1. Triage
Extract Atomic Claims
AI decomposes any text into independently verifiable factual statements
2. Evidence
Search Grounding
Each claim is verified against live web evidence via Brave's LLM Context API (parallelized)
3. Audit
Claim-by-Claim Verdict
AI auditor compares each claim against evidence: TRUE, FALSE, or UNVERIFIABLE with citations
4. Report
Trust Score + Corrections
Aggregate trust score, per-claim verdicts, corrections for false claims, source citations
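The four stages above can be sketched as a minimal pipeline. The real stages call LLM and search APIs (GPT-4o-mini triage, Brave's LLM Context API, GPT-4o audit); here each is stubbed with a placeholder so the control flow is visible and runnable:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_claims(text):           # 1. Triage: decompose into atomic claims
    return [s.strip() for s in text.split(".") if s.strip()]

def fetch_evidence(claim):          # 2. Evidence: live web search (stubbed)
    return f"evidence for: {claim}"

def audit_claim(claim, evidence):   # 3. Audit: verdict per claim (stubbed)
    return {"claim": claim, "verdict": "UNVERIFIABLE", "evidence": evidence}

def report(audited):                # 4. Report: aggregate trust score
    true = sum(a["verdict"] == "TRUE" for a in audited)
    return {"trust_score": true / len(audited), "claims": audited}

text = "The fire was in 1979. Founded by Maurice Brahms."
claims = extract_claims(text)
# Evidence fetches are independent per claim, so they parallelize cleanly.
with ThreadPoolExecutor() as pool:
    evidence = list(pool.map(fetch_evidence, claims))
result = report([audit_claim(c, e) for c, e in zip(claims, evidence)])
print(len(result["claims"]), result["trust_score"])
```

The stage boundaries matter for latency: stage 2 fans out one search per claim, which is why the per-claim work runs in parallel rather than sequentially.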