# Introduction

Evidence infrastructure for AI decisions. Cryptographic proof of human oversight for EU AI Act, LGPD, and SOC 2 compliance.
## What XASE Does
| Capability | Description |
|---|---|
| Decision Capture | Record every AI decision with input, output, and model metadata |
| Human-in-the-Loop | Immutable proof of human review, approval, or override |
| Explainability | Automatic SHAP/LIME explanations stored with each decision |
| Model Registry | Track which model version made each decision |
| Evidence Export | Generate offline-verifiable bundles for audits |
## What XASE is NOT

- ✗ A workflow tool (we record actions, not manage them)
- ✗ A model training platform (we don't touch your ML)
- ✗ A monitoring tool (we don't track model drift)
- ✗ A blockchain (we use hash chains, which are much simpler)
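A hash chain, as mentioned above, links each record to the hash of the record before it, so tampering with any earlier record invalidates every hash that follows. The sketch below is purely illustrative (it is not XASE's implementation; the record shape and field names are made up for the example):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first link


def append_record(chain, record):
    """Append a record, binding it to the previous entry's hash."""
    prev = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps({"record": record, "prev": prev}, sort_keys=True)
    entry = {
        "record": record,
        "prev": prev,
        "hash": hashlib.sha256(payload.encode()).hexdigest(),
    }
    chain.append(entry)
    return entry


def verify_chain(chain):
    """Recompute every hash; any tampered or reordered record breaks the chain."""
    prev = GENESIS
    for entry in chain:
        payload = json.dumps({"record": entry["record"], "prev": prev}, sort_keys=True)
        if entry["prev"] != prev:
            return False
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        prev = entry["hash"]
    return True
```

Because each hash covers the previous one, verification needs no central ledger or consensus protocol, which is why a hash chain is enough for tamper evidence.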
## How It Works

```
Your AI System               XASE                      Auditor
      │                        │                          │
      ├── Decision made ─────► │                          │
      │                   Record + Hash                   │
      │                      + Sign                       │
      │                        │                          │
      ├── Human reviews ─────► │                          │
      │                  HITL + Identity                  │
      │                    + Timestamp                    │
      │                        │                          │
      │                        │ ◄───── Request audit ────┤
      │                        │                          │
      │                        ├── Export Bundle ───────► │
      │                        │                          │
      │                        │      [Verify offline: ✓ Valid]
```
XASE bundles are verifiable offline. Auditors don't need to call our API or trust our infrastructure.
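Offline verification boils down to recomputing hashes locally and comparing them against what the bundle claims. A minimal sketch of that idea, assuming a bundle that carries a manifest of SHA-256 file hashes (the manifest shape here is an assumption for illustration, not XASE's actual bundle format):

```python
import hashlib


def verify_bundle(manifest, files):
    """Check every file's content against the hash recorded in the manifest.

    manifest: {filename: expected_sha256_hex}
    files:    {filename: raw bytes}
    Returns (ok, first_bad_filename_or_None).
    """
    for name, expected in manifest.items():
        actual = hashlib.sha256(files.get(name, b"")).hexdigest()
        if actual != expected:
            return False, name
    return True, None
```

Nothing in this check touches the network: an auditor can run it on an air-gapped machine, which is what makes the bundle trustworthy independent of the vendor's infrastructure.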
## Quick Example
```python
# example.py
import xase

client = xase.Client(api_key="xase_pk_...")

# 1. Record AI decision
record = client.records.create(
    model_id="credit-model-v1",
    input={"customer_id": "cust_123", "income": 85000},
    output={"decision": "APPROVED", "limit": 25000},
)

# 2. Record human intervention
intervention = client.records.intervene(
    record_id=record.id,
    actor_email="analyst@company.com",
    action="APPROVED",
    reason="Documentation verified",
)

# 3. Export for audit
bundle = client.exports.create(record_id=record.id)
bundle.download("./evidence.zip")
```

## Next Steps
- **Quickstart**: Get running in 5 minutes
- **Core Concepts**: Understand the architecture
- **Python SDK**: Full installation guide