VERONICA

VERONICA is the Execution OS for LLM Systems.

Modern LLM stacks are incomplete.

Prompting. Orchestration. Observability.

They lack runtime containment.

pip install veronica-core

Jump to Quickstart (5 minutes) or browse docs/cookbook.md.


1. The Missing Layer in LLM Stacks

Modern LLM stacks are built around three well-understood components:

  • Prompting — instruction construction, context management, few-shot formatting
  • Orchestration — agent routing, tool dispatch, workflow sequencing
  • Observability — tracing, logging, cost dashboards, latency metrics

What they lack is a fourth component: runtime containment.

Observability != Containment.

An observability stack tells you that an agent spent 12,000ドル over a weekend. It records the retry loops, the token volumes, the timestamp of each failed call. It produces a precise audit trail of a runaway execution.

What it does not do is stop it.

Runtime containment is the component that stops it. It operates before the damage occurs, not after. It enforces structural limits on what an LLM-integrated system is permitted to do at runtime — independent of prompt design, orchestration logic, or model behavior.


2. Why LLM Calls Are Not APIs

LLM calls are frequently treated as ordinary API calls: send a request, receive a response. This framing is incorrect, and the gap between the two creates reliability problems at scale.

Standard API calls exhibit predictable properties:

  • Deterministic behavior for identical inputs
  • Fixed or bounded response cost
  • Safe retry semantics (idempotent by construction)
  • No recursive invocation patterns

LLM calls exhibit none of these:

Stochastic behavior. The same prompt produces different outputs across invocations. There is no stable function to test against. Every call is a sample from a distribution, not a deterministic computation.

Variable token cost. Output length is model-determined, not caller-determined. A single call can consume 4 tokens or 4,000. Budget projections based on typical behavior fail under adversarial or unusual inputs.

Recursive invocation. Agents invoke tools; tools invoke agents; agents invoke agents. Recursion depth is not bounded by the model itself. A single top-level call can spawn hundreds of descendant calls with no inherent termination condition.

Retry amplification. When a component fails under load, exponential backoff retries compound across nested call chains. A failure rate of 5% per layer, across three layers, does not produce a 15% aggregate failure rate — it produces amplified retry storms that collapse throughput.

Non-idempotent retries. Retrying an LLM call is not guaranteed to be safe. Downstream state mutations, external tool calls, and partial execution all make naive retry semantics dangerous.

LLM calls are probabilistic, cost-generating components. They require structural bounding. They cannot be treated as deterministic, cost-stable services.
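
The amplification arithmetic is worth making concrete. The sketch below is plain Python with no VERONICA dependency: each layer of a three-deep call chain retries a failed downstream call up to three times, so the expected number of leaf-level LLM calls compounds multiplicatively with depth, and the worst case is (1 + retries)^depth regardless of the nominal failure rate.

# Illustrative arithmetic only -- not a VERONICA API.
def expected_attempts(p_fail: float, retries: int) -> float:
    """Expected attempts for one call when each failure triggers another attempt."""
    return sum(p_fail ** k for k in range(retries + 1))

def chain_amplification(p_fail: float, retries: int, depth: int) -> float:
    """Expected leaf-level calls produced by a single top-level request."""
    return expected_attempts(p_fail, retries) ** depth

for p in (0.05, 0.25, 0.50):
    print(f"p_fail={p:.2f}  expected leaf calls = {chain_amplification(p, retries=3, depth=3):.2f}  "
          f"worst case = {(1 + 3) ** 3}")
# p_fail=0.05  expected leaf calls = 1.17  worst case = 64
# p_fail=0.25  expected leaf calls = 2.34  worst case = 64
# p_fail=0.50  expected leaf calls = 6.59  worst case = 64

Under healthy conditions the expected amplification is modest; under load, as the per-layer failure rate climbs, it grows geometrically toward the worst case, which is exactly the retry-storm dynamic described above.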


3. What Runtime Containment Means

Runtime containment is a constraint layer that enforces bounded behavior on LLM-integrated systems.

It does not modify prompts. It does not filter content. It does not evaluate output quality. It enforces operational limits on the execution environment itself — evaluated at call time, before the model is invoked.

A runtime containment layer enforces:

  1. Bounded cost — maximum token spend and call volume per window, per entity, per system
  2. Bounded retries — rate limits and amplification controls that prevent retry storms from escalating
  3. Bounded recursion — per-entity circuit-breaking that terminates runaway loops regardless of orchestration logic
  4. Bounded wait states — isolation of stalled or degraded components from the rest of the system
  5. Failure domain isolation — structural separation between a failing component and adjacent components, with auditable evidence

VERONICA implements these five properties as composable, opt-in primitives.


4. Containment Layers in VERONICA

Layer 1 — Cost Bounding

In distributed systems, resource quotas enforce hard limits on consumption per tenant, per service, per time window. Without them, a single runaway process exhausts shared resources.

LLM systems face the same problem at the token and call level. Without cost bounding, a single agent session can consume unbounded token volume with no mechanism to stop it.

VERONICA components:

  • BudgetWindowHook — enforces a call-count ceiling within a sliding time window; emits DEGRADE before the ceiling is reached, then HALT at the ceiling
  • TokenBudgetHook — enforces a cumulative token ceiling (output tokens or total tokens) with a configurable DEGRADE zone approaching the limit
  • TimeAwarePolicy — applies time-based multipliers (off-hours, weekends) to reduce active ceilings during periods of lower oversight
  • AdaptiveBudgetHook — adjusts ceilings dynamically based on observed SafetyEvent history; stabilized with cooldown windows, per-step smoothing, hard floor and ceiling bounds, and direction lock
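
To make the DEGRADE-then-HALT behavior concrete, here is a minimal sliding-window call counter in plain Python. It is a conceptual sketch of the mechanism, not the BudgetWindowHook implementation, and the parameter names (max_calls, window_seconds, degrade_ratio) are illustrative assumptions.

import time
from collections import deque

# Conceptual sketch of a sliding-window call budget (not the BudgetWindowHook API).
class SlidingWindowBudget:
    def __init__(self, max_calls: int, window_seconds: float, degrade_ratio: float = 0.8):
        self.max_calls = max_calls
        self.window_seconds = window_seconds
        self.degrade_at = int(max_calls * degrade_ratio)  # soft threshold before the hard ceiling
        self._calls = deque()

    def check(self) -> str:
        """Return 'ALLOW', 'DEGRADE', or 'HALT' for the next prospective call."""
        now = time.monotonic()
        while self._calls and now - self._calls[0] > self.window_seconds:
            self._calls.popleft()             # drop timestamps that fell out of the window
        if len(self._calls) >= self.max_calls:
            return "HALT"                     # ceiling reached: block the call
        self._calls.append(now)               # record the call being allowed
        return "DEGRADE" if len(self._calls) >= self.degrade_at else "ALLOW"

budget = SlidingWindowBudget(max_calls=10, window_seconds=60.0)
print([budget.check() for _ in range(12)])    # ALLOW x7, DEGRADE x3, HALT x2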

Layer 2 — Amplification Control

In distributed systems, retry amplification is a well-documented failure mode: a component under pressure receives more retries than it can handle, which increases pressure, which triggers more retries. Circuit breakers and rate limiters exist to interrupt this dynamic.

LLM systems exhibit the same failure mode. A transient model error triggers orchestration retries. Each retry may invoke tools, which invoke the model again. The amplification is geometric.

VERONICA components:

  • BudgetWindowHook — the primary amplification control; a ceiling breach halts further calls regardless of upstream retry logic or backoff strategy
  • DEGRADE decision — signals fallback behavior before hard stop, allowing graceful degradation (e.g., model downgrade) rather than binary failure
  • Anomaly tightening (AdaptiveBudgetHook) — detects spike patterns in SafetyEvent history and temporarily reduces the effective ceiling during burst activity, with automatic recovery when the burst subsides
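
The anomaly-tightening idea can be sketched in a few lines: compare recent SafetyEvent volume against a longer-running baseline, and lower the effective ceiling only while a burst is in progress. This illustrates the concept rather than AdaptiveBudgetHook's actual detection and recovery logic, and the thresholds are example values.

# Conceptual spike detector (not AdaptiveBudgetHook's implementation).
def effective_ceiling(base_ceiling: int,
                      recent_events: int,
                      baseline_per_window: float,
                      spike_factor: float = 3.0,
                      tighten_to: float = 0.5) -> int:
    """Temporarily reduce the ceiling while event volume looks like a burst."""
    if baseline_per_window > 0 and recent_events >= spike_factor * baseline_per_window:
        return max(1, int(base_ceiling * tighten_to))  # burst detected: tighten
    return base_ceiling                                # normal load: full ceiling

print(effective_ceiling(100, recent_events=2, baseline_per_window=2.0))  # 100
print(effective_ceiling(100, recent_events=9, baseline_per_window=2.0))  # 50 while the burst lasts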

Layer 3 — Recursive Containment

In distributed systems, recursive or cyclic call graphs require depth bounds or visited-node tracking to prevent infinite traversal. Without them, any recursive structure is a potential infinite loop.

LLM agents are recursive by construction: tool calls invoke the model; the model invokes tools. The recursion is implicit in the orchestration design, not explicit in any single call.

VERONICA components:

  • VeronicaStateMachine — tracks per-entity fail counts; activates COOLDOWN state after a configurable number of consecutive failures; transitions to SAFE_MODE for system-wide halt
  • Per-entity cooldown isolation — an entity in COOLDOWN is blocked from further invocations for a configurable duration; this prevents tight loops on failing components without affecting other entities
  • ShieldPipeline — composable pre-dispatch hook chain; all registered hooks are evaluated in order before each LLM call; any hook may emit DEGRADE or HALT
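
A per-entity cooldown tracker reduces to a small amount of state: a consecutive-failure count and a cooldown deadline per entity. The sketch below illustrates that isolation property in plain Python; it is not VeronicaStateMachine itself, and the threshold and duration are example values.

import time

# Conceptual per-entity cooldown tracker (not VeronicaStateMachine itself).
class CooldownTracker:
    def __init__(self, fail_threshold: int = 3, cooldown_seconds: float = 30.0):
        self.fail_threshold = fail_threshold
        self.cooldown_seconds = cooldown_seconds
        self._fails = {}            # entity -> consecutive failure count
        self._cooldown_until = {}   # entity -> monotonic deadline

    def allowed(self, entity: str) -> bool:
        """An entity in cooldown is blocked; entities with clean histories are unaffected."""
        return time.monotonic() >= self._cooldown_until.get(entity, 0.0)

    def record_failure(self, entity: str) -> None:
        self._fails[entity] = self._fails.get(entity, 0) + 1
        if self._fails[entity] >= self.fail_threshold:
            # Consecutive failures exceeded: isolate this entity only.
            self._cooldown_until[entity] = time.monotonic() + self.cooldown_seconds
            self._fails[entity] = 0

    def record_success(self, entity: str) -> None:
        self._fails[entity] = 0     # success resets the consecutive-failure count

tracker = CooldownTracker()
for _ in range(3):
    tracker.record_failure("search_tool")
print(tracker.allowed("search_tool"))  # False -- in cooldown
print(tracker.allowed("summarizer"))   # True  -- unaffected entity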

Layer 4 — Stall Isolation

In distributed systems, a stalled downstream service causes upstream callers to block on connection pools, exhaust timeouts, and degrade responsiveness across unrelated request paths. Bulkhead patterns and timeouts exist to contain stall propagation.

LLM systems stall when a model enters a state of repeated low-quality, excessively verbose, or non-terminating responses. Without isolation, a stalled model session propagates degradation upstream.

VERONICA components:

  • VeronicaGuard — abstract interface for domain-specific stall detection; implementations inspect latency, error rate, response quality, or any domain signal to trigger immediate cooldown activation, bypassing the default fail-count threshold
  • Per-entity cooldown (VeronicaStateMachine) — stall isolation is per entity; a stalled tool or agent does not trigger cooldown for entities with clean histories
  • MinimalResponsePolicy — opt-in system-message injection that enforces output conciseness constraints, reducing the probability of runaway token generation from verbose model states
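
Because stall signals are domain-specific, VeronicaGuard is left abstract. Purely as an illustration of what a latency-based check might look like (the class, method name, and thresholds below are assumptions, not the actual guard interface):

# Illustrative latency-based stall check; the real VeronicaGuard interface may differ.
class LatencyStallCheck:
    def __init__(self, max_latency_seconds: float = 20.0, window: int = 3):
        self.max_latency_seconds = max_latency_seconds
        self.window = window
        self._recent = []

    def observe(self, latency_seconds: float) -> bool:
        """Return True when the entity looks stalled and should enter cooldown immediately."""
        self._recent.append(latency_seconds)
        self._recent = self._recent[-self.window:]
        # Trigger immediate isolation when every recent call exceeded the latency bound,
        # without waiting for the default fail-count threshold.
        return (len(self._recent) == self.window
                and all(t > self.max_latency_seconds for t in self._recent))

check = LatencyStallCheck()
results = [check.observe(latency) for latency in (25.0, 31.5, 28.2)]
print(results[-1])  # True -- three consecutive slow responses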

Layer 5 — Failure Domain Isolation

In distributed systems, failure domain isolation ensures that a fault in one component does not propagate to adjacent components. Structured error events, circuit-state export, and tiered shutdown protocols are standard mechanisms for this.

LLM systems require the same. A component failure should produce structured evidence, enable state inspection, and permit controlled shutdown without corrupting adjacent execution state.

VERONICA components:

  • SafetyEvent — structured evidence record for every non-ALLOW decision; contains event type, decision, hook identity, and SHA-256 hashed context; raw prompt content is never stored
  • Deterministic replay — control state (ceiling, multipliers, adjustment history) can be exported and re-imported; enables observability dashboard integration and post-incident reproduction
  • InputCompressionHook — gates oversized inputs before they reach the model; HALT on inputs exceeding the ceiling, DEGRADE with compression recommendation in the intermediate zone
  • VeronicaExit — three-tier shutdown protocol (GRACEFUL, EMERGENCY, FORCE) with SIGTERM and SIGINT signal handling and atexit fallback; state is preserved where possible at each tier
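
The evidence model is easy to demonstrate with the standard library: hash the context, keep the hash, discard the text. The record below is a sketch of the idea behind SHA-256 hashed context; the field names and event labels are illustrative, not the exact SafetyEvent schema.

import hashlib
import json
import time

# Illustrative evidence record; field names and event labels are not the SafetyEvent schema.
def evidence_record(event_type: str, decision: str, hook: str, prompt: str) -> dict:
    """Build an audit record that identifies the context involved without storing it."""
    return {
        "timestamp": time.time(),
        "event_type": event_type,
        "decision": decision,
        "hook": hook,
        # Only the hash of the prompt context is kept; the raw text is discarded.
        "context_sha256": hashlib.sha256(prompt.encode("utf-8")).hexdigest(),
    }

record = evidence_record("BUDGET_CEILING", "HALT", "BudgetWindowHook",
                         "summarize the quarterly report ...")
print(json.dumps(record, indent=2))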

5. Architecture Overview

VERONICA operates as a middleware constraint layer between the orchestration layer and the LLM provider. It does not modify orchestration logic. It enforces constraints on what the orchestration layer is permitted to dispatch downstream.

App
 |
 v
Orchestrator
 |
 v
Runtime Containment (VERONICA)
 |
 v
LLM Provider

Each call from the orchestrator passes through the ShieldPipeline before reaching the provider. The pipeline evaluates registered hooks in order. Any hook may emit DEGRADE or HALT. A HALT decision terminates the call and emits a SafetyEvent. The orchestrator receives the decision and handles it according to its own logic.

VERONICA does not prescribe how the orchestrator responds to DEGRADE or HALT. It enforces that the constraint evaluation occurs, that the decision is recorded as a structured event, and that the call does not proceed past a HALT decision.
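
In practice that means the orchestrator owns the branch on the decision. Here is a sketch of that call-site control flow, with hypothetical stand-ins (evaluate_hooks, call_provider) in place of a real pipeline and provider client:

# Orchestrator-side handling sketch; `evaluate_hooks` and `call_provider` are
# hypothetical stand-ins, not VERONICA or provider APIs.
def evaluate_hooks(prompt: str) -> str:
    # Stand-in policy: degrade long prompts, halt oversized ones.
    if len(prompt) > 10_000:
        return "HALT"
    if len(prompt) > 2_000:
        return "DEGRADE"
    return "ALLOW"

def call_provider(prompt: str, model: str) -> str:
    return f"[{model}] response to: {prompt[:30]}..."

def dispatch(prompt: str):
    decision = evaluate_hooks(prompt)      # constraints evaluated before any provider call
    if decision == "HALT":
        return None                        # the call never reaches the provider
    if decision == "DEGRADE":
        return call_provider(prompt, model="small-fallback-model")  # graceful fallback
    return call_provider(prompt, model="primary-model")

print(dispatch("short prompt"))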


6. OSS and Cloud Boundary

veronica-core is the local containment primitive library. It contains all enforcement logic: ShieldPipeline, BudgetWindowHook, TokenBudgetHook, AdaptiveBudgetHook, TimeAwarePolicy, InputCompressionHook, MinimalResponsePolicy, VeronicaStateMachine, SafetyEvent, VeronicaExit, and associated state management.

veronica-core operates without network connectivity, external services, or vendor dependencies. All containment decisions are local and synchronous.

veronica-cloud (forthcoming) provides coordination primitives for multi-agent and multi-tenant deployments: shared budget pools, distributed policy enforcement, and real-time dashboard integration for SafetyEvent streams.

The boundary is functional: cloud enhances visibility and coordination across distributed deployments. It does not enhance safety. Safety properties are enforced by veronica-core at the local layer. An agent running without cloud connectivity is still bounded. An agent running without veronica-core is not.


7. Design Philosophy

VERONICA is not:

  • Observability — it does not trace, log, or visualize execution after the fact
  • Content guardrails — it does not inspect, classify, or filter prompt or completion content
  • Evaluation tooling — it does not assess output quality, factual accuracy, or alignment properties

VERONICA is:

  • Runtime constraint enforcement — hard and soft limits on call volume, token spend, input size, and execution state, evaluated before each LLM call
  • Systems-level bounding layer — structural containment at the orchestration boundary, treating LLM calls as probabilistic, cost-generating components that require bounding

The design is deliberately narrow. A component that attempts to solve observability, guardrails, containment, and evaluation simultaneously solves none of them well. VERONICA solves containment.


Quickstart (5 minutes)

Install

pip install veronica-core

Minimal runtime containment example

from veronica_core import ExecutionContext, ExecutionConfig, WrapOptions

def simulated_llm_call(prompt: str) -> str:
    return f"response to: {prompt}"

config = ExecutionConfig(
    max_cost_usd=1.00,     # hard cost ceiling per chain
    max_steps=50,          # hard step ceiling
    max_retries_total=10,
    timeout_ms=0,
)

with ExecutionContext(config=config) as ctx:
    for i in range(3):
        decision = ctx.wrap_llm_call(
            fn=lambda: simulated_llm_call(f"prompt {i}"),
            options=WrapOptions(
                operation_name=f"generate_{i}",
                cost_estimate_hint=0.04,
            ),
        )
        if decision.name == "HALT":
            break

snap = ctx.get_graph_snapshot()
print(snap["aggregates"])

Expected output

{
 "total_cost_usd": 0.12,
 "total_llm_calls": 3,
 "total_tool_calls": 0,
 "total_retries": 0,
 "max_depth": 1,
 "llm_calls_per_root": 3.0,
 "tool_calls_per_root": 0.0,
 "retries_per_root": 0.0,
 "divergence_emitted_count": 0
}

This demonstrates runtime containment as a structural property: every call is recorded into an execution graph, amplification is measurable at the chain level, and HALT semantics are deterministic and auditable per node.

What each part does

  • ExecutionConfig — declares hard limits for the chain (cost, steps, retries, timeout)
  • ExecutionContext — scopes one agent run or request chain; enforces limits at dispatch time
  • wrap_llm_call() — records the call as a typed node; evaluates all containment conditions before dispatch
  • get_graph_snapshot() — returns an immutable, JSON-serializable view of the execution graph

Enforce a step ceiling

from veronica_core import ExecutionContext, ExecutionConfig, WrapOptions
from veronica_core.shield.types import Decision

config = ExecutionConfig(max_cost_usd=10.0, max_steps=5, max_retries_total=20, timeout_ms=0)

with ExecutionContext(config=config) as ctx:
    for i in range(10):
        decision = ctx.wrap_llm_call(
            fn=lambda: "result",
            options=WrapOptions(operation_name=f"step_{i}"),
        )
        if decision == Decision.HALT:
            print(f"Halted at step {i}")
            break



VERONICA records every LLM and tool call as a typed node in an execution graph. It never stores prompt contents; evidence uses SHA-256 hashes by default.


AIcontainer (v0.9.2)

AIcontainer is a declarative execution boundary that composes veronica-core primitives into a single container object. Use it when you want to declare all boundaries upfront instead of wiring primitives individually.

from veronica_core.container import AIcontainer
from veronica_core import BudgetEnforcer, CircuitBreaker, RetryContainer

container = AIcontainer(
    budget=BudgetEnforcer(limit_usd=10.0),
    circuit_breaker=CircuitBreaker(failure_threshold=3),
    retry=RetryContainer(max_retries=2),
)

decision = container.check(cost_usd=0.5)
if not decision.allowed:
    raise RuntimeError(f"Boundary violated: {decision.reason}")

print(container.active_policies)  # ['budget', 'circuit_breaker', 'retry_budget']

All arguments are optional. Pass only the boundaries you need. Existing imports (from veronica_core import BudgetEnforcer) are unchanged.
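
For instance, a container that declares only a budget boundary could look like the following; this reuses the constructor arguments from the example above and should be read as a sketch, not exhaustive API documentation.

from veronica_core.container import AIcontainer
from veronica_core import BudgetEnforcer

# Only the budget boundary is declared; circuit breaker and retry are omitted.
container = AIcontainer(budget=BudgetEnforcer(limit_usd=5.0))

decision = container.check(cost_usd=0.25)
print(decision.allowed)           # True while spend stays under the limit
print(container.active_policies)  # expected: only the budget policy is listed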


veronica_guard — Decorator Injection (v0.9.3)

veronica_guard wraps any callable in an AIcontainer boundary without changing the call site.

from veronica_core.inject import veronica_guard, VeronicaHalt

@veronica_guard(max_cost_usd=1.0, max_steps=20, max_retries_total=3)
def call_llm(prompt: str) -> str:
    return llm.complete(prompt)

try:
    result = call_llm("Hello")
except VeronicaHalt as e:
    print(f"Denied: {e.reason}")

To return the PolicyDecision instead of raising:

@veronica_guard(max_cost_usd=1.0, return_decision=True)
def call_llm(prompt: str):
    return llm.complete(prompt)

result = call_llm("Hello")
if isinstance(result, PolicyDecision):
    # policy denied — handle gracefully
    ...

Use is_guard_active() to detect an active boundary from inside a call:

from veronica_core.inject import is_guard_active

def my_tool():
    if is_guard_active():
        # running inside a veronica_guard boundary
        ...

patch_openai / patch_anthropic — Automatic SDK Injection (v0.9.4)

Opt-in SDK patching applies @veronica_guard policies automatically to every OpenAI or Anthropic API call made inside a guard boundary — no per-call changes required.

from veronica_core import veronica_guard
from veronica_core.patch import patch_openai

# Activate once at application startup.
# Safe to call if openai is not installed.
patch_openai()

@veronica_guard(max_cost_usd=1.0, max_steps=20)
def call_llm(prompt: str) -> str:
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Budget is checked before the OpenAI call.
# Token cost is recorded against the budget after each response.
result = call_llm("Hello!")

Guarantees:

  • Calls outside a @veronica_guard boundary pass through unchanged.
  • Neither openai nor anthropic is a required dependency.
  • unpatch_all() restores all originals (useful in tests).

VeronicaCallbackHandler — LangChain Integration (v0.9.5)

Enforce VERONICA policies in LangChain pipelines via the standard callback interface. No changes to existing call sites required.

from langchain_openai import ChatOpenAI
from veronica_core.adapters.langchain import VeronicaCallbackHandler
from veronica_core import GuardConfig

handler = VeronicaCallbackHandler(GuardConfig(max_cost_usd=1.0, max_steps=20))
llm = ChatOpenAI(callbacks=[handler])

# Budget is checked before each LLM call.
# Token cost is recorded and steps counted after each response.
response = llm.invoke("Hello!")

Also works with ExecutionConfig:

from veronica_core.containment import ExecutionConfig

handler = VeronicaCallbackHandler(
    ExecutionConfig(max_cost_usd=5.0, max_steps=50, max_retries_total=10)
)

Guarantees:

  • VeronicaHalt raised on policy denial, halting the LangChain chain.
  • Steps accumulate across the handler's lifetime (reset via handler.container.reset()).
  • langchain-core or langchain must be installed separately.
  • Importing veronica_core without langchain installed is safe.

SemanticLoopGuard — Semantic Loop Detection (v0.9.6)

Detect when an LLM produces semantically repetitive outputs using pure-Python word-level Jaccard similarity — no heavy ML dependencies required.

from veronica_core import SemanticLoopGuard, AIcontainer

guard = SemanticLoopGuard(
    window=3,                # rolling window size
    jaccard_threshold=0.92,  # similarity above this → deny
    min_chars=80,            # skip short outputs to avoid false positives
)

# Attach to AIcontainer
container = AIcontainer(semantic_guard=guard)

# Or use standalone
result = guard.feed("The answer is 42. " * 5)  # record + check
if not result.allowed:
    print(f"Loop detected: {result.reason}")

How it works:

  • Maintains a rolling buffer of recent outputs (up to window entries)
  • Normalizes text (lowercase, whitespace collapse) before comparison
  • Exact-match shortcut for O(1) identical output detection
  • Pairwise Jaccard similarity check on word frozensets
  • Outputs shorter than min_chars characters are skipped

# Manual record/check API
guard.record("first llm output here...")
guard.record("second llm output here...")
decision = guard.check()  # PolicyDecision(allowed=bool, ...)

# Reset the buffer
guard.reset()
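
The similarity measure itself needs only the standard library. The sketch below shows the word-level Jaccard comparison described above; the normalization details are illustrative and may differ from SemanticLoopGuard's internals.

def normalize(text: str) -> frozenset:
    # Lowercase and collapse whitespace, then compare word sets rather than raw strings.
    return frozenset(text.lower().split())

def jaccard(a: str, b: str) -> float:
    wa, wb = normalize(a), normalize(b)
    if not wa and not wb:
        return 1.0
    return len(wa & wb) / len(wa | wb)

print(jaccard("The answer is 42.", "the answer is 42"))          # 0.6 ("42." and "42" are different tokens)
print(jaccard("The answer is 42.", "Something else entirely."))  # 0.0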

Ship Readiness (v0.9.6)

  • BudgetWindow stops runaway execution (ceiling enforced)
  • SafetyEvent records structured evidence for non-ALLOW decisions
  • DEGRADE supported (fallback at threshold, HALT at ceiling)
  • TokenBudgetHook: cumulative output/total token ceiling with DEGRADE zone
  • MinimalResponsePolicy: opt-in conciseness constraints for system messages
  • InputCompressionHook: real compression with Compressor protocol + safety guarantees (v0.5.1)
  • AdaptiveBudgetHook: auto-adjusts ceiling based on SafetyEvent history (v0.6.0)
  • TimeAwarePolicy: weekend/off-hours budget multipliers (v0.6.0)
  • Adaptive stabilization: cooldown, smoothing, floor/ceiling, direction lock (v0.7.0)
  • Anomaly tightening: spike detection with temporary ceiling reduction (v0.7.0)
  • Deterministic replay: export/import control state for observability (v0.7.0)
  • ExecutionGraph: first-class runtime execution graph with typed node lifecycle (v0.9.0)
  • Amplification metrics: llm_calls_per_root, tool_calls_per_root, retries_per_root (v0.9.0)
  • Divergence heuristic: repeated-signature detection, warn-only, deduped (v0.9.0)
  • AIcontainer: declarative execution boundary composing all runtime primitives (v0.9.1)
  • PolicyEngine: declarative DENY/REQUIRE_APPROVAL/ALLOW rule set (v0.9.1)
  • AuditLog: append-only JSONL with SHA-256 hash chain + secret masking (v0.9.1)
  • Policy signing: HMAC-SHA256 + ed25519 tamper detection (v0.9.1)
  • CI: release workflow secrets guard fixed (v0.9.2)
  • veronica_guard: decorator-based injection with contextvars guard detection (v0.9.3)
  • patch_openai / patch_anthropic: opt-in SDK patching with guard-context awareness (v0.9.4)
  • VeronicaCallbackHandler: LangChain adapter with pre/post-call policy enforcement (v0.9.5)
  • SemanticLoopGuard: pure-Python word-level Jaccard loop detection, integrated into AIcontainer (v0.9.6)
  • PyPI auto-publish on GitHub Release
  • Everything is opt-in & non-breaking (default behavior unchanged)

1120 tests passing. Minimum production use case: runaway containment + graceful degrade + auditable events + token budgets + input compression + adaptive ceiling + time-aware scheduling + anomaly detection + execution graph + divergence detection + security containment layer + semantic loop detection.


Roadmap

v0.9.x

  • OpenTelemetry export (opt-in SafetyEvent export to standard spans)
  • Middleware mode (ASGI/WSGI integration)
  • Distributed budget coordination (Redis-backed shared pools)
  • Improved divergence heuristics (cost-rate, token-velocity)

v1.0

  • Stable ExecutionContext API with formal deprecation policy
  • Formal containment guarantee documentation
  • ExecutionGraph extensibility hooks for external integrations
  • Multi-agent containment primitives (shared budget, cross-chain circuit breaker)

Install

pip install -e .
# With dev tools
pip install -e ".[dev]"
pytest



Containment Mode

VERONICA's Security Containment Layer provides a fail-closed enforcement boundary that stops dangerous agent actions at the tool-dispatch and egress level — independently of any upper-layer system prompt or agent rules. Even if an adversarial prompt fully bypasses the agent's own guidelines, the containment layer intercepts the call and returns Decision.HALT or Decision.QUARANTINE before any action reaches the OS.

The layer blocks: uncontrolled shell execution, sensitive file reads (.env, SSH keys, cloud credentials), unauthenticated outbound POST/PUT/DELETE requests, CI workflow file modifications, and runaway agents that repeatedly probe blocked actions (risk accumulation → automatic SAFE_MODE transition).

Phase G/H additions:

  • Policy signing (G-1): policies/default.yaml is HMAC-SHA256 signed. Set VERONICA_POLICY_KEY (hex) for the production signing key. Any tampered policy file raises RuntimeError at load time (see the sketch after this list).
  • Supply chain guard (G-2): pip install, npm install, pnpm add, yarn add, uv add, cargo add, and lock file writes all route to REQUIRE_APPROVAL before execution. Run python tools/generate_sbom.py to produce an sbom.json inventory of installed packages.
  • Runner attestation (G-3): AttestationChecker captures username, platform, Python path, CWD, and UID at startup and detects mid-session anomalies (container escape, privilege escalation).
  • Approval fatigue mitigation (H): ApprovalBatcher deduplicates identical approval requests; ApprovalRateLimiter caps throughput at 10 approvals/60 s by default.
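
The HMAC half of policy signing (G-1) is simple to illustrate with the standard library. This is a conceptual sign-and-verify sketch over policy bytes, not PolicyEngine's actual file layout or key handling.

import hmac
import hashlib
import os

def sign_policy(policy_bytes: bytes, key_hex: str) -> str:
    """Return the hex HMAC-SHA256 signature of the policy contents."""
    return hmac.new(bytes.fromhex(key_hex), policy_bytes, hashlib.sha256).hexdigest()

def verify_policy(policy_bytes: bytes, key_hex: str, signature_hex: str) -> None:
    expected = sign_policy(policy_bytes, key_hex)
    if not hmac.compare_digest(expected, signature_hex):
        raise RuntimeError("policy signature mismatch: refusing to load tampered policy")

key = os.environ.get("VERONICA_POLICY_KEY", "00" * 32)  # hex key; insecure demo fallback
policy = b"example policy contents\n"
sig = sign_policy(policy, key)
verify_policy(policy, key, sig)                   # passes
# verify_policy(policy + b"tampered", key, sig)   # would raise RuntimeError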

Phase I additions:

  • Policy signing v2 (I-1): PolicySignerV2 upgrades from HMAC-SHA256 (symmetric) to ed25519 asymmetric signing via the cryptography package. Only the public key is committed; the private key stays in a secrets manager. PolicyEngine checks .sig.v2 first; falls back to v1 HMAC if cryptography is not installed. See docs/SIGNING_GUIDE.md for the full workflow.
  • Active sandbox probe (I-2): SandboxProbe actively attempts a protected filesystem read and an outbound network request to verify that sandbox restrictions are actually enforced — not just configured. Any probe that succeeds (sandbox failed to block) emits SANDBOX_PROBE_FAILURE and the caller must trigger SAFE_MODE.
  • SBOM diff gate (I-3): tools/sbom_diff.py compares two SBOM snapshots and exits non-zero on any package addition, removal, or version change. An HMAC-SHA256 approval token allows pre-approved diffs to pass the gate without manual intervention. Integrates into CI as a required check on lock file / pyproject.toml changes.

Phase J additions:

  • Security levels (J-1): Auto-detected DEV / CI / PROD tiers control how strictly cryptographic requirements are enforced. Set VERONICA_SECURITY_LEVEL to override; CI/PROD require the cryptography package and a valid signature. See src/veronica_core/security/security_level.py.
  • Key pinning (J-2): KeyPinChecker verifies that the loaded ed25519 public key matches the committed SHA-256 pin in policies/key_pin.txt (or VERONICA_KEY_PIN env var). A mismatch in CI/PROD raises RuntimeError. See docs/KEY_ROTATION.md for the rotation workflow.
  • Policy rollback protection (J-3): RollbackGuard scans the audit log backward to detect if an older policy_version has been submitted after a newer one (rollback attack). policy_checkpoint events enable fast startup by bounding the backward scan.
  • xfailed registry (J-4): All xfail-marked tests are documented in docs/XFAILED_REGISTRY.md with a safety risk assessment. No test with High safety risk may remain as a permanent xfail.

Quick start — wire PolicyHook into ShieldPipeline:

from veronica_core.security import PolicyEngine, PolicyHook, CapabilitySet
from veronica_core.shield.pipeline import ShieldPipeline
from veronica_core.shield.config import ShieldConfig

engine = PolicyEngine()
hook = PolicyHook(
    engine=engine,
    caps=CapabilitySet.ci(),  # restrict to CI capability profile
    working_dir="/repo",
    repo_root="/repo",
    env="ci",
)
pipeline = ShieldPipeline(
    config=ShieldConfig(),
    tool_dispatch=[hook],  # ToolDispatchHook: blocks dangerous tool calls
    egress=[hook],         # EgressBoundaryHook: blocks outbound HTTP
)

For full architecture details, audit findings coverage, capability profiles, and custom policy configuration, see docs/SECURITY_CONTAINMENT_PLAN.md.


Red Team Regression

VERONICA includes a permanent regression suite of 20 attack scenarios covering the most common techniques an adversarial agent or prompt-injected payload would attempt.

Every scenario is blocked by a specific containment rule — the test suite verifies this on every CI run.

uv run pytest tests/redteam/ -v

Coverage

Category           | Scenarios | Description
Exfiltration       | 5         | HTTP POST, base64/hex GET encoding, high-entropy query, long URL
Credential Hunt    | 5         | .env, .npmrc, id_rsa, .pem, git credential helper
Workflow Poisoning | 5         | CI file write, git push, npm token, pip config, exec() bypass
Persistence        | 5         | Shell destruction, token replay, expired token, scope mismatch, sandbox traversal

All 20 scenarios: blocked.

For the full scenario table, rule IDs, and architecture details, see docs/SECURITY_CONTAINMENT_PLAN.md#phase-f.


Security Guarantees

The following guarantees are verified by the VERONICA test suite on every CI run. The full verifiable claim set is documented in docs/SECURITY_CLAIMS.md.

Containment (20 red-team scenarios — all blocked)

Category           | Claims                                                        | Pytest coverage
Exfiltration       | HTTP POST, base64/hex encoding, high-entropy query, long URL  | tests/redteam/
Credential Hunt    | .env, SSH keys, .pem, npm/pip tokens                          | tests/redteam/
Workflow Poisoning | CI file write, git push, exec() bypass                        | tests/redteam/
Persistence        | Token replay, sandbox traversal, scope mismatch               | tests/redteam/

Cryptographic Integrity

Guarantee                      | Mechanism                                         | Pytest mapping
Policy files are signed        | Ed25519 (v2) + HMAC-SHA256 (v1 fallback)          | tests/security/test_policy_signing.py
Public key is pinned           | SHA-256 pin in policies/key_pin.txt               | tests/security/test_key_pin.py
Policy rollback is detected    | RollbackGuard checks policy_version monotonicity  | tests/security/test_policy_rollback.py
Release artifacts are verified | tools/verify_release.py exits 0                   | tests/tools/test_release_tools.py

Threat Model Coverage

Threat                     | Defence                                 | Phase
Prompt-injected tool calls | PolicyEngine DENY rules                 | G
Supply chain compromise    | SBOM diff gate + approval token         | I
Key substitution           | Key pinning (J-2) + CI enforcement      | J
Policy tampering           | Ed25519 sig verification at load        | I
Rollback attack            | RollbackGuard monotonic version check   | J
Privilege escalation       | AttestationChecker mid-session anomaly  | G

Full threat model: docs/THREAT_MODEL.md


Version History

See CHANGELOG.md for version history.


License

MIT


Runtime Containment is the missing layer in LLM infrastructure.
