Developer API

Ship AI safely or don't ship it at all.

Zorelan is the verification layer that decides whether AI output is safe to execute — before it reaches users or systems.

In one API call, Zorelan compares multiple model outputs, scores trust, assesses risk, and returns a hard decision: allow, review, or block. Your application gates on result.decision — not on raw model output.

Trust → Risk → Decision → Execution

Show answer

High trust, acceptable risk, and clean alignment across providers.

Show with warning

Moderate trust or elevated uncertainty where the answer is useful but should not be presented as hard certainty.

Block or escalate

Low trust, high risk, or material disagreement where your product should fall back, ask for review, or avoid acting automatically.

Where Zorelan sits

Zorelan runs after model generation and before execution. It takes model outputs, evaluates agreement, risk, and context, and returns a deterministic decision your system can act on.

User Input → Models → Zorelan → Decision → Execution

Use it as the final checkpoint before your system acts on AI output.

Why this exists

Why not just use one model?

A single model can generate an answer, but it does not tell you whether that answer deserves confidence. Zorelan compares multiple model outputs and returns a structured verification signal you can use to show, warn, block, or escalate responses in production.

Single model

One answer, no cross-check, no disagreement signal, and no reliable way to decide whether the output should drive product behaviour.

Zorelan

Multiple model outputs compared through semantic agreement analysis, with arbitration when disagreement matters.

Result

A trust-aware output your application can actually use: answer, score, risk, disagreement, and recommended action.

Use this in production

Zorelan is built for AI products that need more than a raw model answer. Instead of trusting a single output, you get a decision signal your application can act on.

• Verify answers before showing them in your UI
• Gate workflows on result.decision — allow, review, or block
• Surface uncertainty instead of hiding it
• Reduce single-model failure risk in production
node.js · gate behaviour
import { Zorelan } from "@zorelan/sdk";

const zorelan = new Zorelan(process.env.ZORELAN_API_KEY!);

const result = await zorelan.verify(userInput);

// Gate execution on result.decision — the authoritative field
if (result.decision === "allow") {
  showAnswer(result.verified_answer);
} else if (result.decision === "review") {
  showWithWarning(result.verified_answer, result.decision_reason);
} else {
  // "block" — high risk, material conflict, or unresolved conditions
  requireHumanReview(result.decision_reason);
}

Quickstart

Make your first call

The fastest path is a single HTTP call. Send one prompt, get back a verified answer plus the confidence signals needed to decide how your product should use it.

curl
curl -X POST https://zorelan.com/v1/decision \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Should I use HTTPS for my web application?"}'
curl · advanced dual-prompt
curl -X POST https://zorelan.com/v1/decision \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Determine whether HTTPS should be used for a production web application. Include security, SEO, and compliance considerations.",
    "raw_prompt": "Should I use HTTPS for my web application?",
    "cache_bypass": true
  }'
bash · sdk install
npm install @zorelan/sdk
node.js / typescript sdk
import { Zorelan } from "@zorelan/sdk";

const zorelan = new Zorelan(process.env.ZORELAN_API_KEY!);

const result = await zorelan.verify(
  "Should I use HTTPS for my web application?"
);

console.log(result.verified_answer);
console.log(result.trust_score.score);
console.log(result.risk_level);
console.log(result.recommended_action);
python
import requests
import os

response = requests.post(
    "https://zorelan.com/v1/decision",
    headers={
        "Authorization": f"Bearer {os.environ['ZORELAN_API_KEY']}",
        "Content-Type": "application/json",
    },
    json={
        "prompt": "Should I use HTTPS for my web application?",
    }
)

data = response.json()
print(data["verified_answer"])
print(data["trust_score"]["score"])
print(data["consensus"]["level"])
print(data["cached"])  # True if result was cached
Returned signals

What you get back

Verified answer

A synthesized final answer based on the strongest aligned provider outputs.

Trust score

A calibrated 0–100 signal that reflects agreement strength, disagreement severity, and domain risk.

Decision metadata

Risk level, consensus, disagreement type, recommended action, arbitration usage, provider diagnostics, and usage metadata.

Minimal response

The core fields most apps use

Most products do not need the full payload to get started. In many cases, these fields are enough to drive UI and routing decisions.

json · minimal useful response
{
  "ok": true,
  "verified_answer": "Yes — you should use HTTPS for your web application.",
  "decision": "allow",
  "decision_reason": "Low risk, high trust score, and consistent model agreement. Output is safe to act on.",
  "trust_score": {
    "score": 94,
    "label": "high",
    "reason": "The providers strongly agree on a low-risk best-practice conclusion."
  },
  "risk_level": "low",
  "consensus": {
    "level": "high",
    "models_aligned": 2
  },
  "recommended_action": "Use the shared conclusion as the answer."
}
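For TypeScript consumers, the minimal payload above can be modelled with a small interface and a runtime guard before branching on it. The field names are transcribed from the documented response; this is an illustrative type, not an official export of @zorelan/sdk:

```typescript
// Minimal response shape, transcribed from the documented payload.
interface MinimalDecision {
  ok: boolean;
  verified_answer: string;
  decision: "allow" | "review" | "block";
  decision_reason: string;
  trust_score: { score: number; label: "high" | "moderate" | "low"; reason: string };
  risk_level: "low" | "moderate" | "high";
  consensus: { level: "high" | "medium" | "low"; models_aligned: number };
  recommended_action: string;
}

// Narrow an unknown JSON body by checking the core fields an app
// actually gates on. Deliberately partial: it validates enough to
// branch safely, not the whole schema.
function isMinimalDecision(body: unknown): body is MinimalDecision {
  const b = body as MinimalDecision;
  return (
    typeof b === "object" &&
    b !== null &&
    typeof b.ok === "boolean" &&
    typeof b.verified_answer === "string" &&
    ["allow", "review", "block"].includes(b.decision)
  );
}
```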
Use cases

Where to use Zorelan

Validate AI before showing users

Verify responses before displaying them in your UI. Use trust score and risk level to decide whether to present an answer directly or expose uncertainty.

Gate actions on the decision field

Only trigger workflows, automations, notifications, or downstream decisions when result.decision === "allow". The decision field already encodes risk, disagreement, and trust.

Reduce hallucinations in production

Add a verification layer between your app and LLMs to reduce fabricated or weak answers in higher-risk contexts.

Compare model behaviour

Inspect agreement, disagreement type, and arbitration results to understand how providers respond to the same prompt.

Add explainability to AI features

Return confidence and disagreement metadata alongside the answer so your product can communicate uncertainty clearly.

Build trust-aware product logic

Use trust score, risk level, and cached status as inputs into your application state, routing, or review flows.


Authentication

All API requests must include your API key as a Bearer token in the Authorization header.

http
Authorization: Bearer YOUR_API_KEY
Content-Type: application/json
⚠ Keep your API key secret. Do not expose it in client-side code or public repositories. If compromised, contact us to rotate your key.

API Reference

POST /v1/decision

POST https://zorelan.com/v1/decision

Submit a prompt for multi-model verification. Zorelan queries multiple AI providers, compares their responses, and returns a trust-calibrated result your application can act on.

Request modes

Simple mode and advanced mode

Most developers should start with simple mode. Advanced mode is useful when you want to optimize the provider-facing prompt without distorting trust calibration.

Simple mode

Send one prompt. Zorelan uses it for both provider execution and calibration. This is the fastest way to get started and matches the original API contract.

Advanced mode

Send both prompt and raw_prompt. Use prompt as the execution prompt sent to providers, and raw_prompt as the original human question for task detection, risk classification, and trust scoring.

Think of prompt as the execution prompt and raw_prompt as the original question used to keep confidence honest.

Request body

• prompt (string, required): The execution prompt sent to AI providers. Plain natural language or a structured prompt. Max 10,000 characters.
• raw_prompt (string, optional): The original human question used for task detection, risk classification, and trust calibration. When omitted, Zorelan uses prompt for both execution and calibration.
• cache_bypass (boolean, optional): Set to true to force a fresh live verification, bypassing any cached result. Defaults to false.
json · simple request
{
  "prompt": "Should I use HTTPS for my web application?"
}
json · advanced request
{
  "prompt": "Determine whether HTTPS should be used for a production web application. Include security, SEO, and compliance considerations.",
  "raw_prompt": "Should I use HTTPS for my web application?",
  "cache_bypass": true
}
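Both request modes can be produced by one small builder that enforces the documented 10,000-character limit and only includes the optional fields when set. A sketch for illustration, not an SDK function:

```typescript
// Build a /v1/decision request body from the documented fields:
// prompt (required), raw_prompt and cache_bypass (optional).
function buildDecisionRequest(
  prompt: string,
  opts: { rawPrompt?: string; cacheBypass?: boolean } = {}
): Record<string, unknown> {
  if (prompt.length > 10_000) {
    // Mirrors the documented 400 prompt_too_large error.
    throw new Error("prompt exceeds the 10,000 character limit");
  }
  const body: Record<string, unknown> = { prompt };
  if (opts.rawPrompt !== undefined) body.raw_prompt = opts.rawPrompt;
  if (opts.cacheBypass) body.cache_bypass = true;
  return body;
}
```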

Full response

All responses are JSON. A successful call returns ok: true with the full verification payload.

json · full response
{
  "ok": true,
  "decision": "allow",
  "decision_reason": "Low risk, high trust score, and consistent model agreement. Output is safe to act on.",
  "verified_answer": "Yes — you should use HTTPS for your web application. The providers agree that HTTPS is standard practice for protecting user data, securing sessions, and establishing trust.",
  "verdict": "Models are aligned on the main conclusion",
  "consensus": {
    "level": "high",
    "models_aligned": 2
  },
  "trust_score": {
    "score": 94,
    "label": "high",
    "reason": "The original answers support the same main conclusion, Models strongly agree on the core conclusion, provider output quality is strong, with no meaningful disagreement; overall risk is low."
  },
  "risk_level": "low",
  "confidence": "high",
  "confidence_reason": "Both models reached the same core conclusion with no meaningful disagreement.",
  "key_disagreement": "No meaningful disagreement",
  "recommended_action": "Use the shared conclusion as the answer.",
  "cached": false,
  "providers_used": ["anthropic", "perplexity"],
  "verification": {
    "final_conclusion_aligned": true,
    "disagreement_type": "none",
    "semantic_label": "HIGH_AGREEMENT",
    "semantic_rationale": "Both answers strongly recommend HTTPS as standard practice for security and trust.",
    "semantic_judge_model": "openai/gpt-4o-mini",
    "semantic_used_fallback": false
  },
  "arbitration": {
    "used": false,
    "provider": null,
    "winning_pair": ["anthropic", "perplexity"],
    "pair_strengths": null
  },
  "model_diagnostics": {
    "anthropic": { "quality_score": 9, "duration_ms": 4571, "timed_out": false, "used_fallback": false },
    "perplexity": { "quality_score": 8, "duration_ms": 5158, "timed_out": false, "used_fallback": false }
  },
  "meta": {
    "task_type": "general",
    "overlap_ratio": 0.42,
    "agreement_summary": "The two model outputs support the same main conclusion.",
    "prompt_chars": 42,
    "execution_prompt_chars": 118,
    "likely_conflict": false,
    "disagreement_type": "none",
    "initial_pair": ["anthropic", "perplexity"]
  },
  "usage": {
    "plan": "pro",
    "callsLimit": 1000,
    "callsUsed": 42,
    "callsRemaining": 958,
    "status": "active"
  }
}

Response fields

• decision (string): "allow" · "review" · "block" — the authoritative execution gate. Derived from risk level, disagreement type, model alignment, and trust score. Use this field to drive product logic; do not re-implement the gate from trust_score alone.
• decision_reason (string): Plain English explanation of why this decision was reached.
• verified_answer (string): The synthesized final answer combining the best insights from the active provider pair.
• verdict (string): A concise decision verdict describing the overall result.
• consensus.level (string): "high" · "medium" · "low" — how strongly the models agreed.
• consensus.models_aligned (number): Number of models that reached the same conclusion.
• trust_score.score (number): Overall reliability score from 0–100. Calibrated from consensus, disagreement severity, and risk.
• trust_score.label (string): "high" (≥75) · "moderate" (≥55) · "low" (<55).
• trust_score.reason (string): Plain English explanation of why the score is what it is.
• risk_level (string): "low" · "moderate" · "high" — assessed risk of acting on this answer.
• key_disagreement (string): The main tension, tradeoff, or difference between the model responses.
• recommended_action (string): Practical guidance on how to use this answer.
• cached (boolean): false on a fresh live verification; true when the result was served from cache, meaning this calibrated prompt path was verified within the last 6 hours and the stored result is being returned. Use cache_bypass: true to force a fresh verification.
• providers_used (string[]): The AI providers queried for this request.
• verification.disagreement_type (string): Structured classification of how models differed. See disagreement types below.
• verification.semantic_judge_model (string): Which model performed the neutral semantic judgment.
• arbitration.used (boolean): Whether a third model was invoked to resolve disagreement.
• model_diagnostics (object): Per-provider quality scores, latency, and timeout status.
• meta.task_type (string): "technical" · "strategy" · "creative" · "general" — detected category of the calibrated prompt.
• meta.prompt_chars (number): Character count of the calibrated prompt path. When raw_prompt is provided, this reflects raw_prompt.
• meta.execution_prompt_chars (number): Character count of the execution prompt sent to providers. Present when execution and calibration prompts differ.
• usage (object): Your current plan, call limits, and remaining calls for the billing period.
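The trust_score.label thresholds above can be expressed as a small client-side helper. This is an illustrative sketch mirroring the documented cutoffs, not part of the official SDK — the API already returns trust_score.label for you.

```typescript
// Map a Zorelan trust score (0–100) to its documented label:
// "high" ≥ 75, "moderate" ≥ 55, "low" < 55.
type TrustLabel = "high" | "moderate" | "low";

function trustLabel(score: number): TrustLabel {
  if (score >= 75) return "high";
  if (score >= 55) return "moderate";
  return "low";
}
```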

Decision layer

Gate execution on result.decision

Every response includes a decision field that encodes the execution gate. Zorelan derives it from risk level, disagreement type, model alignment, and trust score — you do not need to re-implement this logic yourself. Branch on decision directly.

"allow"

Low risk, no material conflict, and trust score above threshold. Safe to act on automatically.

"review"

Moderate risk, conditional alignment, or trust score below threshold. Route to human review before acting.

"block"

High risk, material conflict, or a security-domain prompt. Do not act on this output without resolution.


How trust works

How trust scoring works

Zorelan does not just measure whether models agree. It measures whether that agreement deserves confidence.

Consensus

How closely the providers align in conclusion and reasoning. High consensus means the models broadly support the same answer. Low consensus means they materially diverge.

Risk level

Whether the prompt belongs to a domain where certainty is naturally limited. Factual questions tend to be lower risk. Strategic, comparative, and speculative prompts are often inherently more uncertain.

Trust score

The final calibrated confidence signal. It combines agreement strength, disagreement severity, and risk level to produce a score from 0–100.

High agreement in an uncertain domain is not treated as ground truth.

Example prompts:

• "Is water made of hydrogen and oxygen?": high consensus, low risk, trust score 94–95. Objective fact with strong provider alignment.
• "Should I use TypeScript or JavaScript for a new project?": high consensus, moderate risk, trust score ~85–88. Strong aligned reasoning, but still a context-dependent tradeoff.
• "Is cryptocurrency a good long-term investment?": mixed or bounded consensus, moderate-to-high risk, trust score lower or capped. Even aligned answers should not be presented as hard certainty.

Score ranges:

• 90+ (high-confidence factual or near-factual verification): usually safe to rely on directly in product logic.
• ~85 (strong aligned reasoning in an uncertain or tradeoff-heavy domain): useful, but should still be treated as judgment rather than ground truth.
• Below 85 (material disagreement, ambiguity, or elevated uncertainty): review before acting or expose uncertainty in the UI.
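The bounding idea can be sketched as a toy calculation: even perfect agreement is capped when the domain is inherently uncertain. The cap values below are invented for illustration only — Zorelan's actual calibration is internal and more nuanced.

```typescript
// Toy illustration of risk-aware capping. The cap numbers are
// hypothetical, chosen only to echo the example table above.
type Risk = "low" | "moderate" | "high";

function cappedTrust(agreementScore: number, risk: Risk): number {
  const caps: Record<Risk, number> = { low: 100, moderate: 88, high: 80 };
  return Math.min(agreementScore, caps[risk]);
}
```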

Why this matters. Most systems treat agreement as confidence. Zorelan separates the two. That makes the trust score more useful in production, especially for verification, decision support, and trust-aware downstream logic.

Two models can strongly agree and still receive a bounded score if the prompt itself is inherently uncertain. This is intentional: Zorelan is designed to avoid presenting aligned speculation as hard certainty.

When raw_prompt is provided, trust scoring is calibrated against the original human question, not just the optimized execution prompt sent to providers. This helps preserve honest confidence even when you use prompt engineering to improve answer quality.

Disagreement types

Zorelan classifies the relationship between model responses into five types. This gives structured signal beyond a simple agree/disagree binary.

• none (no penalty): Models reached the same conclusion with no meaningful difference.
• additive_nuance (no penalty): One model added correct detail without changing the core conclusion.
• explanation_variation (−4 pts): Same conclusion, different framing, emphasis, or supporting reasoning.
• conditional_alignment (−12 pts): A usable answer exists only by adding context or conditions. Models did not cleanly agree.
• material_conflict (−20 pts): Models gave materially opposite recommendations or conclusions.
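The penalties above can be applied as a simple lookup — a sketch of the documented point deductions, useful if you want to reason about scores locally:

```typescript
// Disagreement-type penalties, transcribed from the table above,
// applied to a base agreement score and floored at zero.
const DISAGREEMENT_PENALTY: Record<string, number> = {
  none: 0,
  additive_nuance: 0,
  explanation_variation: 4,
  conditional_alignment: 12,
  material_conflict: 20,
};

function applyDisagreementPenalty(baseScore: number, type: string): number {
  const penalty = DISAGREEMENT_PENALTY[type] ?? 0;
  return Math.max(0, baseScore - penalty);
}
```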

Arbitration

When the initial two models disagree, Zorelan automatically invokes a third model to find the strongest pair. The arbitration field in the response tells you whether it was used, which provider was the tiebreaker, and the pair strength scores.

arbitration logic
Initial pair: Claude + Perplexity → LOW agreement
    ↓
Arbitration triggered
    ↓
Third model (GPT) queried
    ↓
Three pairs evaluated:
  Claude + Perplexity  → strength: 0
  Claude + GPT         → strength: 3  ← winner
  Perplexity + GPT     → strength: 2
    ↓
Active pair: Claude + GPT
Trust score recalculated on winning pair
Arbitration calls an additional provider when needed. It does not consume extra calls from your monthly quota — each request counts as one call regardless of whether arbitration is triggered.
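The pair-selection step in the flow above reduces to picking the highest-strength pair. A minimal sketch (the PairScore shape here is illustrative, not the SDK's type):

```typescript
// Pick the strongest provider pair after arbitration: three pairs
// are scored and the highest strength wins.
interface PairScore {
  pair: string[];
  strength: number;
}

function winningPair(pairs: PairScore[]): PairScore {
  return pairs.reduce((best, p) => (p.strength > best.strength ? p : best));
}
```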

Overview

How it works

Zorelan sits between your application and AI providers. When you call the API, Zorelan routes your prompt to multiple models simultaneously, compares their outputs using a semantic agreement engine, and returns a verified answer alongside a structured analysis of how the models agreed or disagreed.

pipeline
Your prompt
    ↓
Adaptive provider selection
    ↓
Parallel model queries (Claude · Perplexity · GPT)
    ↓
Semantic agreement judge (neutral cross-model)
    ↓
Arbitration if disagreement detected
    ↓
Trust score + verified answer

The semantic judge is always a different model family from the providers being compared — Claude judges OpenAI outputs, and OpenAI judges Claude outputs. This eliminates self-scoring bias from the verification layer.
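The cross-family constraint can be sketched as a selection rule: the judge must not share a family with either compared provider. The model and family names below are illustrative, not a statement of Zorelan's actual routing table:

```typescript
// Choose a semantic judge from a different model family than either
// provider being compared, so no model scores its own family's output.
interface Candidate {
  model: string;
  family: string;
}

function pickJudge(comparedFamilies: string[], candidates: Candidate[]): string | null {
  const judge = candidates.find((c) => !comparedFamilies.includes(c.family));
  return judge ? judge.model : null;
}
```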

Execution vs Calibration

Prompt optimization without distorting trust

Zorelan supports both a provider-facing execution prompt and an original raw prompt for calibration. This allows you to optimize model performance without inflating confidence on inherently uncertain questions.

prompt

The execution prompt sent to providers. Use this when you want to structure or optimize how the models answer.

raw_prompt

The original human question used for task detection, risk classification, and trust scoring. Use this when prompt engineering would otherwise distort confidence.

dual-prompt model
raw_prompt
    ↓
Task detection + risk classification + trust calibration

prompt
    ↓
Provider execution + synthesis

Result
    ↓
Better answers, honest trust scoring
If raw_prompt is omitted, Zorelan falls back to using prompt for both execution and calibration. This preserves backward compatibility with the original API contract.

Caching

Verified result caching

Zorelan caches verified results for 6 hours. The first request for a given calibrated prompt path runs the full verification pipeline — querying multiple AI providers, running the semantic agreement judge, and producing a trust score. Subsequent identical requests within the cache window return the stored verified result instantly.

A cached response is not an unverified response. It is a previously verified result being replayed. The full verification pipeline ran on the first request — the cache stores that output, not a shortcut around it.
• First request (live verification): ~12–20s latency, cached: false
• Repeat request within 6 hours: ~1–2s latency, cached: true

Every response includes a cached field so your application always knows whether it received a fresh live verification or a recently verified cached result. Cache keys are scoped to the calibrated prompt path and provider pair. When raw_prompt is provided, caching is anchored to that trust-calibration input.

Bypassing the cache

To force a fresh live verification regardless of cache state, pass cache_bypass: true in the request body. This is useful when you need the most current provider outputs — for example, on time-sensitive prompts or after a known change in underlying facts.

json · cache bypass
{
  "prompt": "Should I use HTTPS for my web application?",
  "cache_bypass": true
}
⚠ Cache bypass requests count against your monthly quota and run the full pipeline — expect normal verification latency.

Errors

Error codes

Zorelan uses standard HTTP status codes. All error responses include ok: false and an error code string.

• 400 missing_prompt: The request body is missing the required "prompt" field.
• 400 invalid_raw_prompt: The optional "raw_prompt" field was provided but is not a string.
• 400 prompt_too_large: The prompt exceeds 10,000 characters.
• 401 unauthorized: Missing or invalid API key.
• 403 subscription_inactive: Your subscription is inactive. Check your billing at zorelan.com.
• 429 rate_limit_exceeded: You have used all calls for this billing period.
• 429 too_many_requests: Too many requests in a short window. Includes a "retry_after" field in seconds.
• 500 internal_error: An unexpected server error. Retry with exponential backoff.
json · error response
{
  "ok": false,
  "error": "rate_limit_exceeded",
  "plan": "starter",
  "calls_limit": 200,
  "calls_used": 200,
  "calls_remaining": 0
}

Account

Rate limits

• Per API key: 10 requests per 10 seconds
• Per IP address: 30 requests per 10 seconds
• Monthly quota: plan limit per billing period

When rate limited, the API returns HTTP 429 with a retry_after field indicating seconds to wait before retrying.
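The retry semantics above can be captured in one small helper: honour retry_after on a throttled 429, back off exponentially on 5xx, and never retry other client errors (including a quota-exhausted 429 with no retry_after, which retrying cannot fix). This is a sketch of a client-side policy, not SDK behaviour:

```typescript
// Decide how long to wait before retrying, or null for "do not retry".
// - 429 with retry_after: wait exactly that long (throttling).
// - 5xx: exponential backoff, capped at 30 seconds.
// - Any other case (4xx, or 429 without retry_after): no retry.
function retryDelayMs(
  status: number,
  retryAfterSeconds?: number,
  attempt = 0
): number | null {
  if (status === 429 && retryAfterSeconds !== undefined) {
    return retryAfterSeconds * 1000;
  }
  if (status >= 500) {
    return Math.min(30_000, 1000 * 2 ** attempt);
  }
  return null;
}
```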


Feedback API

Submit feedback

If Zorelan returns an incorrect verdict, you can submit feedback programmatically. Feedback is stored and reviewed to improve the verification engine.

POST https://zorelan.com/api/feedback

Accepts any valid API key or master key. Requires the original prompt, the verdict Zorelan returned, the issue type, and your correct answer.

Request body

• prompt (string, required): The original prompt you submitted to /v1/decision.
• verdict (string, required): The verdict Zorelan returned.
• issue (string, required): One of incorrect_verdict · wrong_agreement_level · missing_nuance · other.
• correct_answer (string, required): What the correct answer should have been.
• request_id (string, optional): The request ID from the original /v1/decision response, if available.
• notes (string, optional): Any additional context about why the verdict was wrong.
curl · post feedback
curl -X POST https://zorelan.com/api/feedback \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "prompt": "Should I use HTTPS for my web application?",
    "verdict": "Models are aligned on the main conclusion",
    "issue": "incorrect_verdict",
    "correct_answer": "HTTPS should be used by default for production web applications.",
    "request_id": "req_abc123",
    "notes": "This should be treated as a low-risk best-practice question."
  }'
json · response
{
  "ok": true,
  "id": "42d9ba4d-cab3-4721-83cb-06ae40c74562",
  "message": "Feedback received. Thank you."
}

Retrieve feedback

GET https://zorelan.com/api/feedback

Returns all feedback records. Requires the master key — not available to regular API keys.

curl · get feedback
curl https://zorelan.com/api/feedback \
  -H "Authorization: Bearer YOUR_MASTER_KEY"

Access

Get your API key

Subscribe below to receive your API key instantly. All plans include full API access with the same response schema, trust scoring, and arbitration.

Starter

A$9/mo

200 calls / month

Pro

A$29/mo

1,000 calls / month

Scale

A$99/mo

5,000 calls / month

Subscribe to get your Zorelan API key and start using the live developer API.