Security Model
Threat model, preventive controls, and architectural guarantees.
Threat Model & Gating
Prompt Injection & Hijacking
Adversarial prompts designed to override system instructions or extract sensitive data.
Mitigation: Isolated Dreamer Sandbox. The Core Kernel treats LLM output as an unprivileged proposal, not a command, and the Proposal Risk Monitor flags high-variance shifts.
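A minimal sketch of this gating pattern, under stated assumptions: the proposal format, allowlist, and class names below are illustrative, not the actual AegisAI interfaces. The kernel executes only actions that an explicit policy rule permits, and a risk monitor rejects proposals whose risk score deviates sharply from the recent baseline.

```python
from dataclasses import dataclass
from statistics import mean, pstdev

@dataclass
class Proposal:
    action: str          # structured action name parsed from LLM output
    params: dict         # arguments extracted from the proposal
    risk_score: float    # scalar risk estimate assigned by the monitor

class ProposalRiskMonitor:
    """Flags proposals whose risk deviates sharply from the recent baseline."""
    def __init__(self, window: int = 50, threshold_sigma: float = 3.0):
        self.history: list[float] = []
        self.window = window
        self.threshold_sigma = threshold_sigma

    def is_high_variance(self, score: float) -> bool:
        recent = self.history[-self.window:]
        self.history.append(score)
        if len(recent) < 5:              # not enough history yet: be conservative
            return score > 0.5
        mu, sigma = mean(recent), pstdev(recent) or 1e-6
        return abs(score - mu) > self.threshold_sigma * sigma

ALLOWED_ACTIONS = {"summarize", "classify", "route"}   # stand-in policy allowlist

def kernel_gate(proposal: Proposal, monitor: ProposalRiskMonitor) -> bool:
    """The proposal is unprivileged: it runs only if an explicit policy rule
    allows the action AND the risk monitor does not flag an anomalous shift."""
    if proposal.action not in ALLOWED_ACTIONS:
        return False
    if monitor.is_high_variance(proposal.risk_score):
        return False
    return True
```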
Memory Injection Attacks
Attempts to 'poison' long-term memory with malicious instructions or fake evidence.
Mitigation: Policy-Gated MemoryVault. Memory is treated strictly as evidence/context, never as executable instruction. Retrieval is deterministic BM25, so the same query over the same vault always returns the same evidence.
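The following sketch illustrates the memory-as-evidence rule under simplifying assumptions (an in-process corpus, whitespace tokenization, and illustrative class names): retrieval is plain Okapi BM25 ranking with no model in the loop, and every hit is returned as an inert Evidence record rather than text that could be replayed as an instruction.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class Evidence:
    """Retrieved memory is wrapped as inert evidence: it carries provenance
    and a score, and is never interpreted as an instruction to execute."""
    doc_id: int
    text: str
    score: float

class MemoryVault:
    def __init__(self, docs: list[str], k1: float = 1.5, b: float = 0.75):
        self.docs = docs
        self.tokens = [d.lower().split() for d in docs]
        self.k1, self.b = k1, b
        self.avgdl = sum(len(t) for t in self.tokens) / len(self.tokens)
        self.df: dict[str, int] = {}          # document frequency per term
        for toks in self.tokens:
            for term in set(toks):
                self.df[term] = self.df.get(term, 0) + 1

    def _idf(self, term: str) -> float:
        n = self.df.get(term, 0)
        return math.log((len(self.docs) - n + 0.5) / (n + 0.5) + 1.0)

    def _score(self, query_toks: list[str], idx: int) -> float:
        toks, score = self.tokens[idx], 0.0
        for term in query_toks:
            f = toks.count(term)
            if f == 0:
                continue
            denom = f + self.k1 * (1 - self.b + self.b * len(toks) / self.avgdl)
            score += self._idf(term) * f * (self.k1 + 1) / denom
        return score

    def retrieve(self, query: str, top_k: int = 3) -> list[Evidence]:
        q = query.lower().split()
        scored = [(self._score(q, i), i) for i in range(len(self.docs))]
        # Deterministic ranking: ties break on document index, no model in the loop.
        scored.sort(key=lambda pair: (-pair[0], pair[1]))
        return [Evidence(i, self.docs[i], s) for s, i in scored[:top_k]]
```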
Audit Log Manipulation
Attempts to delete, modify, or forge audit records for forensic evasion.
Mitigation: Cryptographic Chain of Custody. SHA-256 hash chaining plus HMAC signatures, with a full verify_chain() utility for bit-level integrity reports.
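A minimal sketch of the chain-of-custody scheme as described (SHA-256 chaining plus HMAC signatures); the record layout, key handling, and report format here are assumptions, and the real verify_chain() may differ.

```python
import hashlib, hmac, json

def _digest(prev_hash: str, payload: dict) -> str:
    # Each entry's hash covers the previous hash and a canonical payload encoding,
    # so editing or deleting any earlier record breaks every later hash.
    body = prev_hash + json.dumps(payload, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(body.encode()).hexdigest()

def append_entry(ledger: list[dict], payload: dict, key: bytes) -> dict:
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    entry_hash = _digest(prev_hash, payload)
    entry = {
        "payload": payload,
        "prev_hash": prev_hash,
        "hash": entry_hash,
        # HMAC binds the hash to a secret key: a forger who can rewrite the file
        # but does not hold the key cannot produce valid signatures.
        "hmac": hmac.new(key, entry_hash.encode(), hashlib.sha256).hexdigest(),
    }
    ledger.append(entry)
    return entry

def verify_chain(ledger: list[dict], key: bytes) -> list[str]:
    """Walk the ledger and report every broken link or bad signature."""
    problems, prev_hash = [], "0" * 64
    for i, entry in enumerate(ledger):
        if entry["prev_hash"] != prev_hash:
            problems.append(f"entry {i}: chain break")
        if entry["hash"] != _digest(entry["prev_hash"], entry["payload"]):
            problems.append(f"entry {i}: payload or hash tampered")
        expected = hmac.new(key, entry["hash"].encode(), hashlib.sha256).hexdigest()
        if not hmac.compare_digest(expected, entry["hmac"]):
            problems.append(f"entry {i}: invalid HMAC signature")
        prev_hash = entry["hash"]
    return problems
```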
System Loss of Control
Edge cases in which system behavior becomes unpredictable or dangerous.
Mitigation: Atomic Panic Mode. A global kill-switch that blocks all operations except emergency allowlisted recovery probes.
Emergency Governance (Appliance-Grade)
Panic Mode (Hard Stop)
Immediate, system-wide block of all transitions. No loops, no reasoning. The Kernel freezes all authority until a signed recovery probe is provided.
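A sketch of the hard-stop behavior, assuming an in-process guard object (the names are illustrative): once the panic flag is set, every gated operation is refused except those on a small recovery allowlist, until the flag is cleared by a verified recovery probe.

```python
import threading

class PanicGuard:
    """Global kill-switch: once tripped, every transition is refused except
    explicitly allowlisted recovery probes."""
    RECOVERY_ALLOWLIST = {"health_probe", "recovery_probe"}

    def __init__(self):
        self._panic = threading.Event()
        self.reason: str | None = None

    def trip(self, reason: str) -> None:
        # Atomic: setting the event takes effect for all threads at once;
        # no in-flight loop or reasoning step continues past a gate.
        self.reason = reason
        self._panic.set()

    def clear(self, probe_verified: bool) -> None:
        # Authority stays frozen until a signed recovery probe has been
        # verified (signature check omitted here; see the break-glass sketch below).
        if probe_verified:
            self._panic.clear()

    def gate(self, operation: str) -> None:
        if self._panic.is_set() and operation not in self.RECOVERY_ALLOWLIST:
            raise PermissionError(f"panic mode: '{operation}' blocked")
```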
Break-Glass (Bypass)
One-time, strictly targeted administrative overrides. Each requires an Ed25519 signature and a specific correlation ID, with replay protection enforced.
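A sketch of the break-glass checks, using the Ed25519 primitives from the Python `cryptography` package; the request layout, freshness window, and in-memory correlation-ID cache are assumptions, not the actual AegisAI implementation.

```python
import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey
from cryptography.exceptions import InvalidSignature

_used_correlation_ids: set[str] = set()   # replay protection: each ID is single-use

def verify_break_glass(request: dict, signature: bytes,
                       admin_pubkey: Ed25519PublicKey,
                       max_age_s: int = 300) -> bool:
    """Accept a break-glass override only if it is signed by the admin key,
    carries a fresh correlation ID, and targets a specific operation."""
    message = json.dumps(request, sort_keys=True).encode()
    try:
        admin_pubkey.verify(signature, message)      # raises on bad signature
    except InvalidSignature:
        return False
    cid = request.get("correlation_id")
    if not cid or cid in _used_correlation_ids:
        return False                                  # replayed or missing ID
    if abs(time.time() - request.get("issued_at", 0)) > max_age_s:
        return False                                  # stale request
    if "target_operation" not in request:
        return False                                  # must be strictly targeted
    _used_correlation_ids.add(cid)                    # one-time use
    return True
```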
Core Invariants
- LLM Reasoning vs Core Decision isolation (Consultant vs Sovereign)
- Memory-as-Evidence (No instruction execution from storage)
- Zero-Leak of Chain-of-Thought in public Forensic Ledger
- Deterministic Replay & Hash Chain Integrity
- One-way entropy anchoring for all decision signatures
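As one reading of the last two invariants (the record layout and canonicalization below are invented for this sketch, not the project's actual scheme), a public ledger record can carry only the decision plus one-way commitments, so neither the chain-of-thought nor the raw entropy ever appears in the Forensic Ledger, while both remain verifiable after the fact.

```python
import hashlib, hmac, json, os

def make_decision_record(decision: str, reasoning_trace: str,
                         signing_key: bytes) -> dict:
    """Illustrative record layout: the public ledger entry carries the decision
    and one-way commitments, never the chain-of-thought or raw entropy."""
    entropy = os.urandom(32)             # randomness consumed by this decision
    record = {
        "decision": decision,
        # One-way anchors: SHA-256 commits to the reasoning trace and the
        # entropy without revealing either in the public Forensic Ledger.
        "trace_commitment": hashlib.sha256(reasoning_trace.encode()).hexdigest(),
        "entropy_anchor": hashlib.sha256(entropy).hexdigest(),
    }
    canonical = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(signing_key, canonical,
                                   hashlib.sha256).hexdigest()
    return record
```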
Non-Goals
AegisAI is a governance appliance, not a general-purpose AI system.
- ✕ Not AGI — does not generate creative responses
- ✕ Not autonomous without policy — every action requires an explicit rule
- ✕ Not a chatbot — designed for machine-to-machine governance
Deployment Options
Sidecar
Deploy alongside existing AI systems as a policy enforcement layer.
Gateway
API gateway mode for centralized governance across multiple models (sketched at the end of this section).
Embedded
SDK integration directly into application code for inline enforcement.
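As a rough illustration of the gateway option (everything here is a stand-in: the endpoint, the allowlist, and the response shape are not the real product surface), a thin HTTP front can apply the same policy gate for every model it fronts before any request reaches a backend.

```python
# Gateway-mode sketch (illustrative): a thin HTTP front that applies the policy
# gate before forwarding anything to the governed model backends behind it.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

ALLOWED_ACTIONS = {"summarize", "classify", "route"}   # stand-in policy

class GovernanceGateway(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        proposal = json.loads(self.rfile.read(length) or b"{}")
        if proposal.get("action") in ALLOWED_ACTIONS:
            # In a real gateway the request would be forwarded to the governed
            # model here; this sketch only acknowledges the allowed proposal.
            self._reply(200, {"allowed": True})
        else:
            self._reply(403, {"allowed": False, "reason": "no policy rule"})

    def _reply(self, code: int, body: dict) -> None:
        data = json.dumps(body).encode()
        self.send_response(code)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), GovernanceGateway).serve_forever()
```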