Ship AI safely or don't ship it at all.
Zorelan is the verification layer that decides whether AI output is safe to execute — before it reaches users or systems.
In one API call, Zorelan compares multiple model outputs, scores trust, assesses risk, and returns a hard decision: allow, review, or block. Your application gates on result.decision — not on raw model output.
Trust → Risk → Decision → Execution
Show answer
High trust, acceptable risk, and clean alignment across providers.
Show with warning
Moderate trust or elevated uncertainty where the answer is useful but should not be presented as hard certainty.
Block or escalate
Low trust, high risk, or material disagreement where your product should fall back, ask for review, or avoid acting automatically.
Zorelan runs after model generation and before execution. It takes model outputs, evaluates agreement, risk, and context, and returns a deterministic decision your system can act on.
Use it as the final checkpoint before your system acts on AI output.
Why not just use one model?
A single model can generate an answer, but it does not tell you whether that answer deserves confidence. Zorelan compares multiple model outputs and returns a structured verification signal you can use to show, warn, block, or escalate responses in production.
Single model
One answer, no cross-check, no disagreement signal, and no reliable way to decide whether the output should drive product behaviour.
Zorelan
Multiple model outputs compared through semantic agreement analysis, with arbitration when disagreement matters.
Result
A trust-aware output your application can actually use: answer, score, risk, disagreement, and recommended action.
Use this in production
Zorelan is built for AI products that need more than a raw model answer. Instead of trusting a single output, you get a decision signal your application can act on.
result.decision — allow, review, or block

```typescript
import { Zorelan } from "@zorelan/sdk";

const zorelan = new Zorelan(process.env.ZORELAN_API_KEY!);
const result = await zorelan.verify(userInput);

// Gate execution on result.decision — the authoritative field
if (result.decision === "allow") {
  showAnswer(result.verified_answer);
} else if (result.decision === "review") {
  showWithWarning(result.verified_answer, result.decision_reason);
} else {
  // "block" — high risk, material conflict, or unresolved conditions
  requireHumanReview(result.decision_reason);
}
```

Make your first call
The fastest path is a single HTTP call. Send one prompt, get back a verified answer plus the confidence signals needed to decide how your product should use it.
```shell
curl -X POST https://zorelan.com/v1/decision \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Should I use HTTPS for my web application?"}'
```

```shell
curl -X POST https://zorelan.com/v1/decision \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Determine whether HTTPS should be used for a production web application. Include security, SEO, and compliance considerations.",
    "raw_prompt": "Should I use HTTPS for my web application?",
    "cache_bypass": true
  }'
```

```shell
npm install @zorelan/sdk
```

```typescript
import { Zorelan } from "@zorelan/sdk";

const zorelan = new Zorelan(process.env.ZORELAN_API_KEY!);

const result = await zorelan.verify(
  "Should I use HTTPS for my web application?"
);

console.log(result.verified_answer);
console.log(result.trust_score.score);
console.log(result.risk_level);
console.log(result.recommended_action);
```

```python
import requests
import os

response = requests.post(
    "https://zorelan.com/v1/decision",
    headers={
        "Authorization": f"Bearer {os.environ['ZORELAN_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "prompt": "Should I use HTTPS for my web application?",
    },
)

data = response.json()
print(data["verified_answer"])
print(data["trust_score"]["score"])
print(data["consensus"]["level"])
print(data["cached"])  # True if result was cached
```

What you get back
Verified answer
A synthesized final answer based on the strongest aligned provider outputs.
Trust score
A calibrated 0–100 signal that reflects agreement strength, disagreement severity, and domain risk.
Decision metadata
Risk level, consensus, disagreement type, recommended action, arbitration usage, provider diagnostics, and usage metadata.
The core fields most apps use
Most products do not need the full payload to get started. In many cases, these fields are enough to drive UI and routing decisions.
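As a sketch, the core fields can be modeled as a TypeScript type with a small guard for the allow path. The field names are taken from the example payloads on this page; the type and helper are illustrative, not shipped by @zorelan/sdk.

```typescript
// Minimal shape of the core decision fields, as seen in the example
// payloads on this page. Hypothetical type — not an official SDK export.
type Decision = "allow" | "review" | "block";

interface CoreResult {
  ok: boolean;
  verified_answer: string;
  decision: Decision;
  decision_reason: string;
  trust_score: { score: number; label: "high" | "moderate" | "low"; reason: string };
  risk_level: "low" | "moderate" | "high";
  recommended_action: string;
}

// True only when the gate says the answer is safe to act on automatically.
function isSafeToExecute(result: CoreResult): boolean {
  return result.ok && result.decision === "allow";
}
```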
```json
{
  "ok": true,
  "verified_answer": "Yes — you should use HTTPS for your web application.",
  "decision": "allow",
  "decision_reason": "Low risk, high trust score, and consistent model agreement. Output is safe to act on.",
  "trust_score": {
    "score": 94,
    "label": "high",
    "reason": "The providers strongly agree on a low-risk best-practice conclusion."
  },
  "risk_level": "low",
  "consensus": {
    "level": "high",
    "models_aligned": 2
  },
  "recommended_action": "Use the shared conclusion as the answer."
}
```

Where to use Zorelan
Validate AI before showing users
Verify responses before displaying them in your UI. Use trust score and risk level to decide whether to present an answer directly or expose uncertainty.
Gate actions on the decision field
Only trigger workflows, automations, notifications, or downstream decisions when result.decision === "allow". The decision field already encodes risk, disagreement, and trust.
Reduce hallucinations in production
Add a verification layer between your app and LLMs to reduce fabricated or weak answers in higher-risk contexts.
Compare model behaviour
Inspect agreement, disagreement type, and arbitration results to understand how providers respond to the same prompt.
Add explainability to AI features
Return confidence and disagreement metadata alongside the answer so your product can communicate uncertainty clearly.
Build trust-aware product logic
Use trust score, risk level, and cached status as inputs into your application state, routing, or review flows.
Authentication
All API requests must include your API key as a Bearer token in the Authorization header.
```
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
```
POST /v1/decision
Submit a prompt for multi-model verification. Zorelan queries multiple AI providers, compares their responses, and returns a trust-calibrated result your application can act on.
Simple mode and advanced mode
Most developers should start with simple mode. Advanced mode is useful when you want to optimize the provider-facing prompt without distorting trust calibration.
Simple mode
Send one prompt. Zorelan uses it for both provider execution and calibration. This is the fastest way to get started and matches the original API contract.
Advanced mode
Send both prompt and raw_prompt. Use prompt as the execution prompt sent to providers, and raw_prompt as the original human question for task detection, risk classification, and trust scoring.
Use prompt as the execution prompt and raw_prompt as the original question to keep confidence honest.

Request body
| Field | Type | Description |
|---|---|---|
| prompt (required) | string | The execution prompt sent to AI providers. Plain natural language or a structured prompt. Max 10,000 characters. |
| raw_prompt | string | Optional. The original human question used for task detection, risk classification, and trust calibration. When omitted, Zorelan uses prompt for both execution and calibration. |
| cache_bypass | boolean | Optional. Set to true to force a fresh live verification, bypassing any cached result. Defaults to false. |
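The table above maps directly onto a small request-builder. This helper is hypothetical (not part of the SDK) and simply assembles a simple-mode or advanced-mode body under the documented 10,000-character limit:

```typescript
// Build a /v1/decision request body. Hypothetical helper for illustration.
interface DecisionRequest {
  prompt: string;
  raw_prompt?: string;
  cache_bypass?: boolean;
}

function buildDecisionRequest(
  executionPrompt: string,
  rawPrompt?: string,
  cacheBypass = false
): DecisionRequest {
  if (executionPrompt.length > 10_000) {
    // Mirrors the documented 400 prompt_too_large error.
    throw new Error("prompt exceeds 10,000 characters");
  }
  const body: DecisionRequest = { prompt: executionPrompt };
  // Advanced mode: include the original human question for calibration.
  if (rawPrompt !== undefined) body.raw_prompt = rawPrompt;
  if (cacheBypass) body.cache_bypass = true;
  return body;
}
```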
```json
{
  "prompt": "Should I use HTTPS for my web application?"
}
```

```json
{
  "prompt": "Determine whether HTTPS should be used for a production web application. Include security, SEO, and compliance considerations.",
  "raw_prompt": "Should I use HTTPS for my web application?",
  "cache_bypass": true
}
```

Full response
All responses are JSON. A successful call returns ok: true with the full verification payload.
```json
{
  "ok": true,
  "decision": "allow",
  "decision_reason": "Low risk, high trust score, and consistent model agreement. Output is safe to act on.",
  "verified_answer": "Yes — you should use HTTPS for your web application. The providers agree that HTTPS is standard practice for protecting user data, securing sessions, and establishing trust.",
  "verdict": "Models are aligned on the main conclusion",
  "consensus": {
    "level": "high",
    "models_aligned": 2
  },
  "trust_score": {
    "score": 94,
    "label": "high",
    "reason": "The answers support the same main conclusion, the models strongly agree, and provider output quality is strong, with no meaningful disagreement; overall risk is low."
  },
  "risk_level": "low",
  "confidence": "high",
  "confidence_reason": "Both models reached the same core conclusion with no meaningful disagreement.",
  "key_disagreement": "No meaningful disagreement",
  "recommended_action": "Use the shared conclusion as the answer.",
  "cached": false,
  "providers_used": ["anthropic", "perplexity"],
  "verification": {
    "final_conclusion_aligned": true,
    "disagreement_type": "none",
    "semantic_label": "HIGH_AGREEMENT",
    "semantic_rationale": "Both answers strongly recommend HTTPS as standard practice for security and trust.",
    "semantic_judge_model": "openai/gpt-4o-mini",
    "semantic_used_fallback": false
  },
  "arbitration": {
    "used": false,
    "provider": null,
    "winning_pair": ["anthropic", "perplexity"],
    "pair_strengths": null
  },
  "model_diagnostics": {
    "anthropic": { "quality_score": 9, "duration_ms": 4571, "timed_out": false, "used_fallback": false },
    "perplexity": { "quality_score": 8, "duration_ms": 5158, "timed_out": false, "used_fallback": false }
  },
  "meta": {
    "task_type": "general",
    "overlap_ratio": 0.42,
    "agreement_summary": "The two model outputs support the same main conclusion.",
    "prompt_chars": 42,
    "execution_prompt_chars": 118,
    "likely_conflict": false,
    "disagreement_type": "none",
    "initial_pair": ["anthropic", "perplexity"]
  },
  "usage": {
    "plan": "pro",
    "callsLimit": 1000,
    "callsUsed": 42,
    "callsRemaining": 958,
    "status": "active"
  }
}
```

Response fields
| Field | Type | Description |
|---|---|---|
| decision | string | "allow" · "review" · "block" — the authoritative execution gate. Derived from risk level, disagreement type, model alignment, and trust score. Use this field to drive product logic; do not re-implement the gate from trust_score alone. |
| decision_reason | string | Plain English explanation of why this decision was reached. |
| verified_answer | string | The synthesized final answer combining the best insights from the active provider pair. |
| verdict | string | A concise decision verdict describing the overall result. |
| consensus.level | string | "high" · "medium" · "low" — how strongly the models agreed. |
| consensus.models_aligned | number | Number of models that reached the same conclusion. |
| trust_score.score | number | Overall reliability score from 0–100. Calibrated from consensus, disagreement severity, and risk. |
| trust_score.label | string | "high" (≥75) · "moderate" (≥55) · "low" (<55) |
| trust_score.reason | string | Plain English explanation of why the score is what it is. |
| risk_level | string | "low" · "moderate" · "high" — assessed risk of acting on this answer. |
| key_disagreement | string | The main tension, tradeoff, or difference between the model responses. |
| recommended_action | string | Practical guidance on how to use this answer. |
| cached | boolean | false on a fresh live verification. true when the result was served from cache — meaning this calibrated prompt path was verified within the last 6 hours and the stored result is being returned. Use cache_bypass: true to force a fresh verification. |
| providers_used | string[] | The AI providers queried for this request. |
| verification.disagreement_type | string | Structured classification of how models differed. See disagreement types below. |
| verification.semantic_judge_model | string | Which model performed the neutral semantic judgment. |
| arbitration.used | boolean | Whether a third model was invoked to resolve disagreement. |
| model_diagnostics | object | Per-provider quality scores, latency, and timeout status. |
| meta.task_type | string | "technical" · "strategy" · "creative" · "general" — detected category of the calibrated prompt. |
| meta.prompt_chars | number | Character count of the calibrated prompt path. When raw_prompt is provided, this reflects raw_prompt. |
| meta.execution_prompt_chars | number | Character count of the execution prompt sent to providers. Present when execution and calibration prompts differ. |
| usage | object | Your current plan, call limits, and remaining calls for the billing period. |
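The trust_score.label thresholds documented above (high at 75 and above, moderate at 55 and above, low below that) can be sketched as a pure function. This is illustrative only; the API already returns the label, and you should not recompute it in place of the decision field:

```typescript
// Derive the documented trust_score.label from a 0–100 score.
// Illustrative only — the API returns the label alongside the score.
function trustLabel(score: number): "high" | "moderate" | "low" {
  if (score >= 75) return "high";
  if (score >= 55) return "moderate";
  return "low";
}
```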
Gate execution on result.decision
Every response includes a decision field that encodes the execution gate. Zorelan derives it from risk level, disagreement type, model alignment, and trust score — you do not need to re-implement this logic yourself. Branch on decision directly.
"allow"
Low risk, no material conflict, and trust score above threshold. Safe to act on automatically.
"review"
Moderate risk, conditional alignment, or trust score below threshold. Route to human review before acting.
"block"
High risk, material conflict, or a security-domain prompt. Do not act on this output without resolution.
How trust scoring works
Zorelan does not just measure whether models agree. It measures whether that agreement deserves confidence.
Consensus
How closely the providers align in conclusion and reasoning. High consensus means the models broadly support the same answer. Low consensus means they materially diverge.
Risk level
Whether the prompt belongs to a domain where certainty is naturally limited. Factual questions tend to be lower risk. Strategic, comparative, and speculative prompts are often inherently more uncertain.
Trust score
The final calibrated confidence signal. It combines agreement strength, disagreement severity, and risk level to produce a score from 0–100.
| Prompt | Consensus | Risk | Trust score | Interpretation |
|---|---|---|---|---|
| Is water made of hydrogen and oxygen? | High | Low | 94–95 | Objective fact with strong provider alignment. |
| Should I use TypeScript or JavaScript for a new project? | High | Moderate | ~85–88 | Strong aligned reasoning, but still a context-dependent tradeoff. |
| Is cryptocurrency a good long-term investment? | Mixed / bounded | Moderate to high | Lower / capped | Even aligned answers should not be presented as hard certainty. |
| Score range | Interpretation | How to use it |
|---|---|---|
| 90+ | High-confidence factual or near-factual verification | Usually safe to rely on directly in product logic. |
| ~85 | Strong aligned reasoning in an uncertain or tradeoff-heavy domain | Useful, but should still be treated as judgment rather than ground truth. |
| Below 85 | Material disagreement, ambiguity, or elevated uncertainty | Review before acting or expose uncertainty in the UI. |
Why this matters. Most systems treat agreement as confidence. Zorelan separates the two. That makes the trust score more useful in production, especially for verification, decision support, and trust-aware downstream logic.
Two models can strongly agree and still receive a bounded score if the prompt itself is inherently uncertain. This is intentional: Zorelan is designed to avoid presenting aligned speculation as hard certainty.
When raw_prompt is provided, trust scoring is calibrated against the original human question, not just the optimized execution prompt sent to providers. This helps preserve honest confidence even when you use prompt engineering to improve answer quality.
Disagreement types
Zorelan classifies the relationship between model responses into five types. This gives structured signal beyond a simple agree/disagree binary.
| Type | Trust impact | Description |
|---|---|---|
| none | No penalty | Models reached the same conclusion with no meaningful difference. |
| additive_nuance | No penalty | One model added correct detail without changing the core conclusion. |
| explanation_variation | −4 pts | Same conclusion, different framing, emphasis, or supporting reasoning. |
| conditional_alignment | −12 pts | A usable answer exists only by adding context or conditions. Models did not cleanly agree. |
| material_conflict | −20 pts | Models gave materially opposite recommendations or conclusions. |
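The point deductions in the table above can be applied to a base score as follows. This is an illustration of the documented penalties only; the real scoring pipeline combines more signals than a single subtraction:

```typescript
type DisagreementType =
  | "none"
  | "additive_nuance"
  | "explanation_variation"
  | "conditional_alignment"
  | "material_conflict";

// Point deductions from the disagreement-types table above.
const DISAGREEMENT_PENALTY: Record<DisagreementType, number> = {
  none: 0,
  additive_nuance: 0,
  explanation_variation: 4,
  conditional_alignment: 12,
  material_conflict: 20,
};

// Apply a penalty, clamping to the 0–100 trust-score range.
function applyDisagreementPenalty(base: number, type: DisagreementType): number {
  return Math.max(0, Math.min(100, base - DISAGREEMENT_PENALTY[type]));
}
```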
Arbitration
When the initial two models disagree, Zorelan automatically invokes a third model to find the strongest pair. The arbitration field in the response tells you whether it was used, which provider was the tiebreaker, and the pair strength scores.
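As a sketch of the pair-selection step, choosing the winning pair is a max over pair strengths. The helper and data shapes here are illustrative; the actual arbitration logic is internal to Zorelan:

```typescript
interface PairStrength {
  pair: [string, string];
  strength: number;
}

// Pick the provider pair with the highest agreement strength.
// Assumes at least one pair was evaluated.
function winningPair(pairs: PairStrength[]): PairStrength {
  return pairs.reduce((best, p) => (p.strength > best.strength ? p : best));
}
```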
Initial pair: Claude + Perplexity → LOW agreement
↓
Arbitration triggered
↓
Third model (GPT) queried
↓
Three pairs evaluated:
Claude + Perplexity → strength: 0
Claude + GPT → strength: 3 ← winner
Perplexity + GPT → strength: 2
↓
Active pair: Claude + GPT
Trust score recalculated on winning pair

How it works
Zorelan sits between your application and AI providers. When you call the API, Zorelan routes your prompt to multiple models simultaneously, compares their outputs using a semantic agreement engine, and returns a verified answer alongside a structured analysis of how the models agreed or disagreed.
Your prompt
↓
Adaptive provider selection
↓
Parallel model queries (Claude · Perplexity · GPT)
↓
Semantic agreement judge (neutral cross-model)
↓
Arbitration if disagreement detected
↓
Trust score + verified answer

The semantic judge is always a different model family from the providers being compared — Claude judges OpenAI outputs, OpenAI judges Claude outputs. This eliminates self-scoring bias from the verification layer.
Prompt optimization without distorting trust
Zorelan supports both a provider-facing execution prompt and an original raw prompt for calibration. This allows you to optimize model performance without inflating confidence on inherently uncertain questions.
prompt
The execution prompt sent to providers. Use this when you want to structure or optimize how the models answer.
raw_prompt
The original human question used for task detection, risk classification, and trust scoring. Use this when prompt engineering would otherwise distort confidence.
raw_prompt
↓
Task detection + risk classification + trust calibration
prompt
↓
Provider execution + synthesis
Result
↓
Better answers, honest trust scoring

When raw_prompt is omitted, Zorelan falls back to using prompt for both execution and calibration. This preserves backward compatibility with the original API contract.

Verified result caching
Zorelan caches verified results for 6 hours. The first request for a given calibrated prompt path runs the full verification pipeline — querying multiple AI providers, running the semantic agreement judge, and producing a trust score. Subsequent identical requests within the cache window return the stored verified result instantly.
| Request | Latency | cached field |
|---|---|---|
| First request (live verification) | ~12–20s | false |
| Repeat request within 6 hours (cached) | ~1–2s | true |
Every response includes a cached field so your application always knows whether it received a fresh live verification or a recently verified cached result. Cache keys are scoped to the calibrated prompt path and provider pair. When raw_prompt is provided, caching is anchored to that trust-calibration input.
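Zorelan's actual cache-key format is not documented. As a purely conceptual sketch of the scoping described above — keyed on the calibration prompt and the provider pair, independent of provider order — a key could be derived like this:

```typescript
import { createHash } from "node:crypto";

// Conceptual cache key: calibration prompt + sorted provider pair.
// The real key format is internal to Zorelan; this only illustrates scoping.
function cacheKey(calibrationPrompt: string, providers: string[]): string {
  const scope = [calibrationPrompt, ...[...providers].sort()].join("|");
  return createHash("sha256").update(scope).digest("hex");
}
```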
Bypassing the cache
To force a fresh live verification regardless of cache state, pass cache_bypass: true in the request body. This is useful when you need the most current provider outputs — for example, on time-sensitive prompts or after a known change in underlying facts.
```json
{
  "prompt": "Should I use HTTPS for my web application?",
  "cache_bypass": true
}
```

Error codes
Zorelan uses standard HTTP status codes. All error responses include ok: false and an error code string.
| Status | Error code | Description |
|---|---|---|
| 400 | missing_prompt | The request body is missing the required "prompt" field. |
| 400 | invalid_raw_prompt | The optional "raw_prompt" field was provided but is not a string. |
| 400 | prompt_too_large | The prompt exceeds 10,000 characters. |
| 401 | unauthorized | Missing or invalid API key. |
| 403 | subscription_inactive | Your subscription is inactive. Check your billing at zorelan.com. |
| 429 | rate_limit_exceeded | You have used all calls for this billing period. |
| 429 | too_many_requests | Too many requests in a short window. Includes a "retry_after" field in seconds. |
| 500 | internal_error | An unexpected server error. Retry with exponential backoff. |
```json
{
  "ok": false,
  "error": "rate_limit_exceeded",
  "plan": "starter",
  "calls_limit": 200,
  "calls_used": 200,
  "calls_remaining": 0
}
```

Rate limits
| Scope | Limit | Window |
|---|---|---|
| Per API key | 10 requests | 10 seconds |
| Per IP address | 30 requests | 10 seconds |
| Monthly quota | Plan limit | Billing period |
When rate limited, the API returns HTTP 429 with a retry_after field indicating seconds to wait before retrying.
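One way to honor 429 responses is to wait for the server-provided retry_after when present and fall back to capped exponential backoff otherwise. The helper name and the 30-second cap are illustrative; retry_after is the documented field:

```typescript
// Compute a wait time in ms: prefer the server's retry_after (seconds),
// otherwise use capped exponential backoff. Illustrative helper.
function retryDelayMs(attempt: number, retryAfterSeconds?: number): number {
  if (retryAfterSeconds !== undefined) return retryAfterSeconds * 1000;
  return Math.min(30_000, 1000 * 2 ** attempt);
}
```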
Submit feedback
If Zorelan returns an incorrect verdict, you can submit feedback programmatically. Feedback is stored and reviewed to improve the verification engine.
Accepts any valid API key or master key. Requires the original prompt, the verdict Zorelan returned, the issue type, and your correct answer.
Request body
| Field | Type | Required | Description |
|---|---|---|---|
| prompt | string | Yes | The original prompt you submitted to /v1/decision. |
| verdict | string | Yes | The verdict Zorelan returned. |
| issue | string | Yes | One of: incorrect_verdict · wrong_agreement_level · missing_nuance · other |
| correct_answer | string | Yes | What the correct answer should have been. |
| request_id | string | No | The request ID from the original /v1/decision response, if available. |
| notes | string | No | Any additional context about why the verdict was wrong. |
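Because issue must be one of four documented values, a minimal client-side check before submitting can avoid a round trip. This guard is a hypothetical helper, not part of the SDK:

```typescript
// The four documented issue types for the feedback endpoint.
const FEEDBACK_ISSUES = [
  "incorrect_verdict",
  "wrong_agreement_level",
  "missing_nuance",
  "other",
] as const;

type FeedbackIssue = (typeof FEEDBACK_ISSUES)[number];

// Narrow an arbitrary string to a documented issue type.
function isFeedbackIssue(value: string): value is FeedbackIssue {
  return (FEEDBACK_ISSUES as readonly string[]).includes(value);
}
```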
```shell
curl -X POST https://zorelan.com/api/feedback \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Should I use HTTPS for my web application?",
    "verdict": "Models are aligned on the main conclusion",
    "issue": "incorrect_verdict",
    "correct_answer": "HTTPS should be used by default for production web applications.",
    "request_id": "req_abc123",
    "notes": "This should be treated as a low-risk best-practice question."
  }'
```

```json
{
  "ok": true,
  "id": "42d9ba4d-cab3-4721-83cb-06ae40c74562",
  "message": "Feedback received. Thank you."
}
```

Retrieve feedback
Returns all feedback records. Requires the master key — not available to regular API keys.
```shell
curl https://zorelan.com/api/feedback \
  -H "Authorization: Bearer YOUR_MASTER_KEY"
```
Get your API key
Subscribe below to receive your API key instantly. All plans include full API access with the same response schema, trust scoring, and arbitration.
Starter
A$9/mo
200 calls / month
Pro
A$29/mo
1,000 calls / month
Scale
A$99/mo
5,000 calls / month
Subscribe to get your Zorelan API key and start using the live developer API.