Sifaka is an open-source framework that adds reflection and reliability to large language model (LLM) applications.
A long-form article and practical framework for designing machine learning systems that warn instead of decide. Covers regimes vs decimals, levers over labels, reversible alerts, anti-coercion UI patterns, auditability, and the "Warning Card" template, so ML preserves human agency while staying useful under uncertainty.
This project integrates Hyperledger Fabric with machine learning to enhance transparency and trust in data-driven workflows. It outlines a blockchain-based strategy for data traceability, model auditability, and secure ML deployment across consortium networks.
Proof of Human Intent (PoHI) - Cryptographically verifiable human approval for AI-driven development
A framework that makes AI research transparent, traceable, and independently verifiable.
Governance beneath the model. Custody before trust. Open for audit. Constitutional Grammar for Multi-Model AI Federations, Firmware Specification • Zero-Touch Alignment • Public Release v1.0
Ethics-first, transparent AI. Built to be self-hosted by anyone.
BLUX-cA – Clarity Agent core of the BLUX ecosystem. A constitutional, audit-driven AI helm that interprets intent, enforces governance, and routes tasks safely across models, tools, and agents.
A principal-level framework for governing AI-assisted decisions with accountability, auditability, and risk controls.
Digital Native Institutions and the National Service Unit: a formal, falsifiable architecture for protocol-governed institutional facts and next-generation public administration.
A governed system for translating applied AI research into auditable, decision-ready artifacts.
Self-auditing governance framework that turns contradictions into verifiable, adaptive intelligence.
Not new AI, but accountable and auditable AI
Winmem keeps Solana projects alive without maintainers.
omphalOS turns strategic trade-and-technology analyses into tamper-evident "run packets" for inspectors, counsel, and oversight review.
Deterministic, auditable ethical decision engine implementing the Sovereign Ethics Algebra (SEA).
Governance, architecture, and epistemic framework for the Aurora Workflow Orchestration ecosystem (AWO, CRI-CORE, and scientific case studies).
Core Specification for the Audit-by-Design DSL - Human- and machine-readable domain-specific language (DSL) for defining, validating, and auditing atomic requirements (AFOs) in regulated software environments. Open specification, free to use and extend.
Governance standard for public-interest analysis: civic integrity floors, boundary alerts, and high-risk decision gates to keep civic work neutral, auditable, and non-destructive.
🔥 Emergent intelligence in autonomous trading agents through evolutionary algorithms. Testing zero-knowledge learning in cryptocurrency markets. Where intelligence emerges rather than being designed.