The problem
LLM agents and AI-powered automation can recommend or request actions — deploy infrastructure, execute a financial transaction, delete data, send a communication. The model’s recommendation is probabilistic. The execution of that recommendation must be deterministic and governed.
Without an enforcement layer:
- There is no record of what authority permitted the action
- The same action can be triggered multiple times (no replay protection)
- A compromised or hallucinating model can escalate beyond its intended scope
- Auditors cannot prove after the fact that a human-defined policy permitted the action
Parmana Systems solves this by acting as the enforcement layer between “the model recommends X” and “the system executes X.”
Policy definition
The following policy governs AI agent tool execution. It corresponds to the ai-agent-tool-execution/v1 policy in the repository:
{
  "policyId": "ai-agent-tool-execution",
  "policyVersion": "v1",
  "schemaVersion": "1.0.0",
  "signalsSchema": {
    "risk_score": { "type": "integer" },
    "execution_amount": { "type": "integer" },
    "human_approved": { "type": "boolean" },
    "tool_category": { "type": "string" },
    "requested_action": { "type": "string" },
    "target_environment": { "type": "string" }
  },
  "rules": [
    {
      "id": "block_production_delete",
      "condition": {
        "all": [
          { "signal": "target_environment", "equals": "production" },
          { "signal": "requested_action", "equals": "delete" }
        ]
      },
      "outcome": {
        "action": "reject",
        "requires_override": true,
        "reason": "production_delete_blocked"
      }
    },
    {
      "id": "high_value_financial_action",
      "condition": {
        "all": [
          { "signal": "tool_category", "equals": "financial" },
          { "signal": "execution_amount", "greater_than": 10000 },
          { "signal": "human_approved", "equals": false }
        ]
      },
      "outcome": {
        "action": "reject",
        "requires_override": true,
        "reason": "human_approval_required"
      }
    },
    {
      "id": "low_risk_sandbox_execution",
      "condition": {
        "all": [
          { "signal": "risk_score", "less_than": 30 },
          { "signal": "target_environment", "equals": "sandbox" }
        ]
      },
      "outcome": {
        "action": "approve",
        "requires_override": false,
        "reason": "low_risk_sandbox"
      }
    },
    {
      "id": "catch_all",
      "condition": { "all": [] },
      "outcome": {
        "action": "reject",
        "requires_override": true,
        "reason": "catch_all"
      }
    }
  ]
}
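Rules are listed in evaluation order; the trailing catch_all rule rejects anything the earlier rules do not explicitly approve or block. The snippet below is an illustrative, simplified evaluator, not the governance runtime itself: it assumes first-match-wins semantics and treats an empty all condition as matching any signal set, which is what the catch_all rule relies on.

type Signals = Record<string, string | number | boolean>;

interface Condition {
  signal: string;
  equals?: string | number | boolean;
  greater_than?: number;
  less_than?: number;
}

interface Rule {
  id: string;
  condition: { all: Condition[] };
  outcome: {
    action: "approve" | "reject";
    requires_override: boolean;
    reason: string;
  };
}

// True when a single condition holds for the given signal values.
function conditionHolds(c: Condition, signals: Signals): boolean {
  const value = signals[c.signal];
  if (c.equals !== undefined) return value === c.equals;
  if (c.greater_than !== undefined) return typeof value === "number" && value > c.greater_than;
  if (c.less_than !== undefined) return typeof value === "number" && value < c.less_than;
  return false;
}

// First match wins: the first rule whose "all" conditions are all satisfied
// decides the outcome. An empty "all" array is vacuously true, which is why
// catch_all matches anything the earlier rules did not handle.
function evaluate(rules: Rule[], signals: Signals): Rule["outcome"] | undefined {
  for (const rule of rules) {
    if (rule.condition.all.every((c) => conditionHolds(c, signals))) {
      return rule.outcome;
    }
  }
  return undefined;
}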
Architecture
LLM / Agent Output
↓
Extract intent → { requested_action, target_environment, tool_category, ... }
↓
Compute risk_score (your scoring function)
↓
Parmana Systems executeFromSignals()
↓
execution_state:
  "completed"        → proceed with tool call
  "blocked"          → reject silently
  "pending_override" → route to human approval queue
↓
ExecutionAttestation (signed)
↓
Audit log / compliance record
The model never calls tools directly. Every tool invocation requires a completed attestation from the governance layer.
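The risk_score signal is not produced by Parmana Systems; your integration computes it before calling the governance layer. A minimal sketch of a hypothetical heuristic scorer follows (the weights, action names, and thresholds are illustrative assumptions, not part of the policy):

// Hypothetical risk scorer: combines action type, environment, and amount into
// a 0-100 score. Tune or replace this with your own scoring function.
function computeRiskScore(action: {
  requestedAction: string;
  targetEnvironment: string;
  executionAmount: number;
}): number {
  let score = 0;
  if (action.targetEnvironment === "production") score += 40;
  if (["delete", "transfer", "deploy"].includes(action.requestedAction)) score += 30;
  if (action.executionAmount > 10000) score += 20;
  else if (action.executionAmount > 1000) score += 10;
  return Math.min(score, 100);
}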
Implementation
import {
  executeFromSignals,
  LocalSigner,
  LocalVerifier,
  RedisReplayStore,
} from "@parmanasystems/core";

interface AgentAction {
  requestedAction: string;
  targetEnvironment: string;
  toolCategory: string;
  executionAmount: number;
  humanApproved: boolean;
  riskScore: number;
}

async function authorizeAgentAction(
  action: AgentAction,
  signer: LocalSigner,
  verifier: LocalVerifier,
  replayStore: RedisReplayStore
) {
  // Map the extracted agent intent onto the policy's signal schema and request
  // a signed attestation from the governance runtime.
  const attestation = await executeFromSignals(
    {
      policyId: "ai-agent-tool-execution",
      policyVersion: "v1",
      signals: {
        requested_action: action.requestedAction,
        target_environment: action.targetEnvironment,
        tool_category: action.toolCategory,
        execution_amount: action.executionAmount,
        human_approved: action.humanApproved,
        risk_score: action.riskScore,
      },
    },
    signer,
    verifier,
    replayStore
  );

  if (attestation.execution_state === "completed") {
    return { authorized: true, attestation };
  }

  if (attestation.execution_state === "pending_override") {
    // Route to the human review queue with the attestation ID.
    // enqueueForHumanReview is application-defined; a sketch follows this block.
    await enqueueForHumanReview(attestation);
    return { authorized: false, pendingReview: true, attestation };
  }

  // blocked
  return { authorized: false, attestation };
}
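enqueueForHumanReview above is application code, not part of @parmanasystems/core. A minimal sketch follows, using an in-memory queue as a stand-in for your real review workflow (ticketing system, approval bot, or message queue); the only attestation fields it relies on are executionId and decision.reason, which the scenarios below also reference.

// Application-defined: hand the pending attestation to your review workflow so
// a human can approve or reject against its executionId. This in-memory queue
// is only a placeholder for illustration.
const reviewQueue: Array<{
  executionId: string;
  reason: string;
  enqueuedAt: string;
}> = [];

async function enqueueForHumanReview(attestation: {
  executionId: string;
  decision: { reason: string };
}): Promise<void> {
  reviewQueue.push({
    executionId: attestation.executionId,
    reason: attestation.decision.reason,
    enqueuedAt: new Date().toISOString(),
  });
}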
Scenario walkthrough
Scenario 1: Safe sandbox action
const result = await authorizeAgentAction({
  requestedAction: "query",
  targetEnvironment: "sandbox",
  toolCategory: "analytics",
  executionAmount: 0,
  humanApproved: false,
  riskScore: 12,
}, signer, verifier, replayStore);
// result.authorized === true
// attestation.decision.reason === "low_risk_sandbox"
Scenario 2: Production delete blocked
const result = await authorizeAgentAction({
  requestedAction: "delete",
  targetEnvironment: "production", // matches block_production_delete
  toolCategory: "storage",
  executionAmount: 0,
  humanApproved: false,
  riskScore: 25,
}, signer, verifier, replayStore);
// result.authorized === false
// result.pendingReview === true
// attestation.decision.reason === "production_delete_blocked"
// Route to human escalation with attestation.executionId
Scenario 3: High-value financial action without approval
const result = await authorizeAgentAction({
  requestedAction: "transfer",
  targetEnvironment: "production",
  toolCategory: "financial",
  executionAmount: 50000, // > 10000 threshold
  humanApproved: false, // no approval
  riskScore: 45,
}, signer, verifier, replayStore);
// result.authorized === false
// result.pendingReview === true
// attestation.decision.reason === "human_approval_required"
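Putting the pieces together, the tool itself should only ever run behind a completed attestation. The sketch below shows that gate, reusing authorizeAgentAction from above; executeTool stands in for whatever dispatch mechanism your agent framework uses and is a hypothetical parameter, not a Parmana Systems API.

// Gate every tool invocation on a completed attestation. The model's output
// only describes the action; this function decides whether it actually runs.
async function runAgentAction(
  action: AgentAction,
  executeTool: (action: AgentAction) => Promise<unknown>,
  signer: LocalSigner,
  verifier: LocalVerifier,
  replayStore: RedisReplayStore
) {
  const result = await authorizeAgentAction(action, signer, verifier, replayStore);

  if (!result.authorized) {
    // Blocked or pending human review: the tool is never called.
    return { executed: false, attestation: result.attestation };
  }

  // Only a completed, signed attestation reaches this point.
  const output = await executeTool(action);
  return { executed: true, attestation: result.attestation, output };
}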
Key security properties
Replay protection: If the same (requestedAction, targetEnvironment, toolCategory, executionAmount, humanApproved, riskScore) tuple is submitted a second time, the governance runtime rejects it. A re-triggered LLM action cannot accidentally execute twice.
Tamper-evident escalation: When an action is escalated for human review, the attestation ID is the audit trail. The human reviewer approves or rejects against a specific executionId — not a vague description of an action.
Policy version audit: The policy version that governed each agent action is embedded in its attestation. If the policy changes (a new version is published), historical attestations still reference the version that was active at execution time.
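The sketch below shows how an attestation might be flattened into such a compliance record. Only executionId, execution_state, and decision.reason are taken from fields this page already shows; the policy identifiers are carried over from the request, and the overall record shape is an assumption to adapt to your audit store.

// Minimal audit record derived from one authorization call. Storing the policy
// ID and version with each record lets auditors reconstruct which human-defined
// policy permitted or blocked the action at execution time.
interface AuditRecord {
  executionId: string;
  policyId: string;
  policyVersion: string;
  executionState: string;
  reason: string;
  recordedAt: string;
}

function toAuditRecord(
  attestation: {
    executionId: string;
    execution_state: string;
    decision: { reason: string };
  },
  policyId: string,
  policyVersion: string
): AuditRecord {
  return {
    executionId: attestation.executionId,
    policyId,
    policyVersion,
    executionState: attestation.execution_state,
    reason: attestation.decision.reason,
    recordedAt: new Date().toISOString(),
  };
}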