Documentation Index
Fetch the complete documentation index at: https://parmanasystems.mintlify.app/llms.txt
Use this file to discover all available pages before exploring further.
The problem with existing approaches
Production systems that make consequential decisions — loan approvals, trade authorization, AI agent execution, release gating — eventually need more than logic. They need proof. Without it, an audit question like “what policy governed this $2M decision in March?” has no reliable answer.

Raw if/else
Decision logic inline with application code is deterministic and fast, but it:

- Leaves no trace of which version of the logic executed
- Cannot be independently verified after the fact
- Changes silently as code is deployed
- Provides no replay protection — the same inputs can be evaluated multiple times with different results if code changes between calls
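One way to see the gap: an inline check is deterministic, but nothing ties its output to the exact logic that produced it. Below is a minimal sketch (all names hypothetical, not Parmana Systems' API) of the smallest fix — pinning a content hash of the policy source into each decision record, so a later audit can at least tell which logic text ran:

```python
import hashlib
import time

# Stand-in for the real decision logic; in bare if/else style this text
# would live only in the deployed code, with no record of which version ran.
POLICY_SOURCE = "approve if amount <= 1_000_000"

def decide(amount: float) -> dict:
    decision = "approve" if amount <= 1_000_000 else "escalate"
    return {
        "decision": decision,
        "input": {"amount": amount},
        # Content hash pins exactly which policy text produced this result.
        "policy_sha256": hashlib.sha256(POLICY_SOURCE.encode()).hexdigest(),
        "evaluated_at": time.time(),
    }
```

Even this sketch only identifies the logic; it does nothing for tamper-evidence or replay protection, which need a signature and a per-evaluation nonce on top.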
Feature flags
Feature flag systems (LaunchDarkly, Unleash, etc.) control rollout, not governance. They:

- Have no concept of a signed decision record
- Are probabilistic by design (percentage rollouts, context-based targeting)
- Offer no policy version pinning
- Do not produce independently verifiable audit artifacts
OPA / Rego
Open Policy Agent is a serious policy evaluation engine, but:

- Policy evaluation produces a `true`/`false`/`allow`/`deny` result — not a signed attestation of what ran
- There is no built-in mechanism to prove which policy version evaluated a specific input
- No replay protection; the same execution can be re-evaluated
- Verification requires access to OPA itself — not portable
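To make the gap concrete, here is a minimal sketch of what a signed attestation with replay protection adds on top of a bare allow/deny result. This is illustrative only: HMAC stands in for a real asymmetric signature, and all field names are hypothetical, not Parmana Systems' format.

```python
import hashlib
import hmac
import json
import secrets

SIGNING_KEY = secrets.token_bytes(32)  # stand-in for a managed signing key

def attest(policy_version: str, inputs: dict, result: str) -> dict:
    """Wrap an evaluation result in a signed, replay-detectable record."""
    body = {
        "policy_version": policy_version,
        "inputs": inputs,
        "result": result,
        # Unique per evaluation: two runs of the same inputs produce
        # distinct records, so a replayed attestation is detectable.
        "nonce": secrets.token_hex(16),
    }
    payload = json.dumps(body, sort_keys=True).encode()
    body["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return body

def verify(attestation: dict) -> bool:
    """Recompute the signature over everything except the signature itself."""
    body = {k: v for k, v in attestation.items() if k != "signature"}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, attestation["signature"])
```

Note the trade-off baked into this sketch: HMAC verification needs the signing key, so it is not portable to outside auditors — that is what an asymmetric signature scheme with a published public key buys.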
Custom rule engines
Internal rule engines solve the policy-as-config problem but typically require significant bespoke investment to add audit trails, versioning, cryptographic proof, and independent verifiability.

Comparison table
| Capability | Raw if/else | Feature flags | OPA / Rego | Custom engine | Parmana Systems |
|---|---|---|---|---|---|
| Deterministic decisions | ✓ | ✗ | ✓ | ✓ | ✓ |
| Policy-as-config (JSON/YAML) | ✗ | Partial | ✓ | Varies | ✓ |
| Signed execution attestation | ✗ | ✗ | ✗ | Rarely | ✓ |
| Independent verifiability | ✗ | ✗ | ✗ | Rarely | ✓ |
| Replay protection | ✗ | ✗ | ✗ | Rarely | ✓ |
| Policy version pinning per decision | ✗ | Partial | Partial | Varies | ✓ |
| Immutable audit lineage | ✗ | ✗ | ✗ | Rarely | ✓ |
| Override / escalation governance | ✗ | ✗ | ✗ | Custom | ✓ |
| Zero-dependency portable verification | ✗ | ✗ | ✗ | ✗ | ✓ |
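Of the rows above, "immutable audit lineage" is the one most often hand-rolled badly. The idea is a hash chain: each record embeds the hash of its predecessor, so any retroactive edit breaks every later link. A minimal sketch (hypothetical structure, not Parmana Systems' storage format):

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder predecessor hash for the first record

def append(chain: list, record: dict) -> None:
    """Append a record whose hash covers both its content and its predecessor."""
    entry = {"record": record, "prev": chain[-1]["hash"] if chain else GENESIS}
    entry["hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    chain.append(entry)

def chain_is_intact(chain: list) -> bool:
    """Re-derive every hash; any edited record or broken link fails."""
    prev = GENESIS
    for entry in chain:
        body = {"record": entry["record"], "prev": entry["prev"]}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if entry["prev"] != prev or entry["hash"] != recomputed:
            return False
        prev = entry["hash"]
    return True
```

A hash chain alone proves internal consistency, not authenticity — an attacker who can rewrite the whole chain can re-hash it. Anchoring the chain head in a signed attestation closes that gap.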
Who is this for?
Fintech engineers
Building payment approval pipelines, credit decisioning systems, or trade authorization workflows where:

- Every decision must carry a provenance trail for regulatory audit
- The same policy version must govern both the live decision and any future re-evaluation
- Decisions must be tamper-evident — a modified attestation is detectable
AI platform teams
Deploying LLM agents, autonomous pipelines, or AI-assisted automation where:

- The model recommends an action — Parmana Systems decides whether the system is authorized to execute it
- Human-in-the-loop escalation must be enforced when risk thresholds are exceeded
- Every agent action must produce a cryptographic record for safety review
Compliance engineers
Responsible for demonstrating to regulators, auditors, or legal teams that:

- A specific policy version governed a specific decision on a specific date
- The decision cannot have been modified after the fact
- Verification is portable — auditors can verify without access to internal systems
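Portable verification means the auditor needs only the attestation, the published policy text, and a standard library — no vendor SDK, no network access. A hypothetical sketch of the offline checks involved (a production scheme would additionally verify an asymmetric signature against a published public key, which is omitted here; field names are illustrative):

```python
import hashlib
import json

def verify_offline(attestation: dict, policy_text: str) -> bool:
    """Auditor-side check using only stdlib: no vendor code, no network."""
    # 1. The policy hash in the attestation must match the published policy,
    #    proving which policy version governed the decision.
    published = hashlib.sha256(policy_text.encode()).hexdigest()
    if attestation["policy_sha256"] != published:
        return False
    # 2. The record hash must match the record body, so any post-hoc edit
    #    to the decision fields is detectable.
    body = {k: v for k, v in attestation.items() if k != "record_sha256"}
    payload = json.dumps(body, sort_keys=True).encode()
    return attestation["record_sha256"] == hashlib.sha256(payload).hexdigest()
```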
Parmana Systems is not a replacement for general-purpose policy engines used for access control (e.g., RBAC/ABAC via OPA). It is purpose-built for execution authority decisions that require a cryptographic audit trail.