NEW: Evidence Bundle 2.0 is out

The Evidence Layer for AI Decisions.

Cryptographic proof of human oversight. Built for the auditors of the EU AI Act and LGPD.

12M+ decisions verified • $2.1B in protected transactions • 0 audit failures
TRUSTED BY REGULATED TEAMS

Hand auditors a ZIP, not database access.

The XASE Evidence Bundle is a self-contained proof package. Verify integrity without touching your systems.

evidence_txn_8a7f3b2c.zip
decision.json: What the model decided
explanation.json: Why (SHAP values)
intervention.json: Who reviewed, when
model_card.json: Model version, hash
signature.sig: RSA-SHA256 signature
verify.sh: Run this. Trust nothing.
verify.sh
$ ./verify.sh

Checking signature... ✓ Valid (signed by KMS key arn:aws:kms:...)
Checking hash chain... ✓ Intact (block 847 of 12,847)
Checking timestamps... ✓ Consistent (decision: 14:31:02, intervention: 14:32:47)
Checking model registry... ✓ Hash matches credit-scoring-v4.2.1

RESULT: Bundle is authentic and unmodified.
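
Curious what that run actually checks? Below is a minimal sketch of the same idea in Node. It is not the shipped verify.sh: it assumes the KMS public key has been exported as PEM next to the bundle and that signature.sig covers decision.json directly, so the real payload layout may differ.

verify-bundle.ts
// Offline bundle check (illustrative). Assumes kms_public_key.pem was exported
// beforehand and that signature.sig signs decision.json; adjust to the real layout.
import { createHash, createVerify } from "node:crypto";
import { readFileSync } from "node:fs";

const decision = readFileSync("decision.json");
const signature = readFileSync("signature.sig");
const publicKey = readFileSync("kms_public_key.pem", "utf8");

// Recompute the content hash so it can be compared against the ledger entry.
console.log("sha256:", createHash("sha256").update(decision).digest("hex"));

// Verify the RSA-SHA256 signature with no network access.
const verifier = createVerify("RSA-SHA256");
verifier.update(decision);
console.log("signature valid:", verifier.verify(publicKey, signature));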
"We gave the auditor a USB drive. They verified everything in their air-gapped lab. Audit closed in 4 hours." — Head of Compliance, Series C Fintech

Built for the regulations that matter.

XASE maps directly to regulatory requirements. No interpretation needed.

Requirement | Regulation | What Auditors Want | XASE Delivers
Human Oversight | EU AI Act Art. 14 | Proof that humans reviewed high-risk decisions | Signed HITL records with identity, timestamp, reason
Right to Explanation | LGPD Art. 20, GDPR Art. 22 | Why the algorithm decided | SHAP/LIME explanation in every bundle
Traceability | SOC 2, ISO 27001 | Immutable audit trail | WORM ledger + hash chain
Model Documentation | EU AI Act Art. 11 | Technical documentation of AI systems | Model cards with version, hash, performance metrics
Accountability | BACEN/BCB and other central banks | Who is responsible for each decision | Actor identity cryptographically linked
This is a technical mapping. Consult legal counsel for compliance certification.

Regulators don't accept "trust us."

The EU AI Act, LGPD, and Central Banks now require proof—not promises.

The Audit Gap

Screenshots and emails don't survive forensic examination.

The Explainability Void

"Why did the model reject?" → "It said so." → €35M fine.

The Timestamp Problem

Prove the human reviewed it. Cryptographically. Offline.

"If you can't prove the human was in the loop, the human wasn't in the loop."

One integration. Infinite audits.

XASE sits between your model and your regulator. We handle the proof.

app.ts
// One call. That's the integration.
const record = await xase.capture({
  model: "credit-scoring-v4",
  input: { income: 85000, debt_ratio: 0.32 },
  output: { decision: "APPROVED", confidence: 0.94 },
  explain: "shap"  // Auto-generates explanation
});

// record.id → "rec_8a7f3b2c..."
// record.hash → "sha256:9f86d08..."
Your API → XASE SDK → Immutable Ledger → Hash Chain → KMS Signature
[Auditor verifies offline]
hitl.ts
// When a human overrides or approves
await xase.intervene({
  recordId: "rec_8a7f3b2c",
  actor: "joao.silva@company.com",
  action: "OVERRIDE",
  newOutcome: "APPROVED",
  reason: "Collateral documentation verified manually",
  evidence: ["doc_upload_id_1", "doc_upload_id_2"]
});

// Immutable. Signed. Identity-linked. Court-ready.
Immutable by Design
WORM storage. Hash chain links every record (sketch below).
Explainable by Default
SHAP values auto-captured.
Verifiable Offline
Auditor runs verify.sh. No API calls.
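
The hash chain is the standard append-only construction: each entry's hash covers the previous entry's hash, so rewriting any record breaks every block after it. The sketch below shows the idea only; the field names are assumptions for illustration, not the actual XASE ledger schema.

hash-chain.ts
// Illustrative append-only hash chain. Field names are assumptions, not the
// real ledger schema; the point is that each hash covers the previous one.
import { createHash } from "node:crypto";

type Entry = { prevHash: string; payload: string; hash: string };

const sha256 = (s: string) => createHash("sha256").update(s).digest("hex");

function append(chain: Entry[], payload: string): Entry {
  const prevHash = chain.length ? chain[chain.length - 1].hash : "GENESIS";
  const entry = { prevHash, payload, hash: sha256(prevHash + payload) };
  chain.push(entry);
  return entry;
}

function isIntact(chain: Entry[]): boolean {
  return chain.every((e, i) => {
    const prev = i === 0 ? "GENESIS" : chain[i - 1].hash;
    return e.prevHash === prev && e.hash === sha256(prev + e.payload);
  });
}

// Tampering with any payload flips isIntact() to false for every later block.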

Built for High-Stakes Autonomy.

Fintech

Credit denials & Fraud detection.
→ FCRA Compliance proof generated instantly.

Healthcare

Diagnostic triage & Claims processing.
→ Liability shield against malpractice claims.

Enterprise

HR Screening & Dynamic Pricing.
→ Defense against bias lawsuits (NYC Local Law 144).

Designed for compliance with:

EU AI ACT
GDPR
SOC 2
ISO 42001
"We believe that as AI moves from 'Chat' to 'Action', the barrier to adoption won't be intelligence—it will be trust. We are building the infrastructure that allows companies to delegate authority to machines without carrying unlimited liability."

The math is simple.

95%
Reduction in audit prep time
From 3 weeks to 4 hours
$0
Additional compliance headcount
Infrastructure, not people
0
Audit failures
Across all customers
Regulatory audit: 3-6 weeks prep, $50-100K consultants → 4 hours, self-service export
Customer dispute: Manual evidence gathering → Bundle export, case closed
Model incident: "We think human X reviewed it" → Cryptographic proof, timestamp, reason
Due diligence: Scramble to document AI governance → Export full audit history instantly

APIs that don't suck.

Built by engineers who've integrated terrible compliance tools. We built the opposite.

SDKs: Python, Node.js, Go — Type-safe. Async-first.
Latency: <10ms p99 — Non-blocking. Won't slow your inference.
Idempotency: Built-in — Safe retries. No duplicate records.
Webhooks: Real-time — Alert on anomalies, high override rates (receiver sketch below).
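
A minimal webhook receiver sketch, assuming an HMAC-SHA256 signature header and a JSON event payload. The header name, secret variable, and event fields here are illustrative assumptions, not the documented webhook contract.

webhook-receiver.ts
// Illustrative webhook receiver. The signature header, secret, and event shape
// are assumptions for this sketch; check the docs for the real contract.
import { createHmac, timingSafeEqual } from "node:crypto";
import { createServer } from "node:http";

const SECRET = process.env.XASE_WEBHOOK_SECRET ?? "";

createServer((req, res) => {
  let body = "";
  req.on("data", (chunk) => (body += chunk));
  req.on("end", () => {
    // Reject payloads whose HMAC does not match before parsing anything.
    const expected = createHmac("sha256", SECRET).update(body).digest("hex");
    const received = String(req.headers["x-xase-signature"] ?? "");
    const valid =
      received.length === expected.length &&
      timingSafeEqual(Buffer.from(received), Buffer.from(expected));
    if (!valid) {
      res.writeHead(401).end();
      return;
    }
    const event = JSON.parse(body);
    if (event.type === "override_rate.high") {
      console.warn(`High override rate on ${event.model}: ${event.rate}`); // page on-call, open a ticket, etc.
    }
    res.writeHead(200).end("ok");
  });
}).listen(3000);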
full-flow.ts
import { Xase } from '@xase/sdk';

const xase = new Xase({ apiKey: process.env.XASE_API_KEY });

// 1. Register your model once
await xase.models.register({
  id: "credit-scoring-v4",
  hash: "sha256:9f86d081884c...",
  metrics: { accuracy: 0.94, auc_roc: 0.97 },
  intendedUse: "Consumer credit decisions under $100K"
});

// 2. Capture every decision (async, non-blocking)
const record = await xase.capture({
  model: "credit-scoring-v4",
  input: { customerId: "cust_123", income: 85000 },
  output: { decision: "APPROVED", limit: 25000 },
  explain: "shap",
  idempotencyKey: "req_abc123"
});

// 3. Record human intervention when it happens
await xase.intervene({
  recordId: record.id,
  actor: "analyst@company.com",
  action: "APPROVED",
  reason: "Manual document verification completed"
});

// 4. Export for audit (or let auditor do it)
const bundle = await xase.export({ recordId: record.id });
// Returns signed ZIP, verifiable offline

Transparent. Predictable. No "call us."

Developer

$0/month
  • 1,000 decisions/month
  • Basic explainability
  • Community support
  • Dashboard access

Scale

$499/month
  • 50,000 decisions/month
  • Full XAI suite (SHAP, LIME)
  • Priority support
  • Export API
  • Alerting & webhooks

Enterprise

Custom
  • Unlimited volume
  • On-prem / VPC deployment
  • SSO / SAML
  • Dedicated support + SLA
  • Custom integrations

Trust nothing. Verify everything.

Including us. That's why bundles are verifiable without calling our API.

Encryption at Rest: AES-256
Encryption in Transit: TLS 1.3
Hash Algorithm: SHA-256
Signature Algorithm: RSA-SHA256 via AWS KMS
Immutability: WORM + SQL triggers + hash chain
Access Control: RBAC + API key scopes
Audit Logging: Every action logged, immutable
Data Residency: AWS US, EU, or Brazil

Build AI that regulators respect.

Stop building internal audit tools. Use infrastructure designed for the age of AI accountability.

Questions? founders@xase.ai • We respond in <24h.