
Responsible AI Framework
Our vision: A four-tier normative scaffold, implemented through a hybrid reasoning architecture.
Why this framework
AI systems don’t just compute and represent knowledge; they enact norms. Our framework makes those norms explicit, auditable, and justifiable — from first principles to runtime behavior.
The Four-Tier Normative Scaffold
- Tier 1 – Truth & Meta-ethics: What counts as a warranted normative claim (structural normativism, rational acceptability, coherence).
- Tier 2 – Legitimacy: Substantive ethical and moral justification (freedom, equality, autonomy, proportionality, sustainability).
- Tier 3 – Legality: Source-based validity and enforceable rights (e.g., EU AI Act, GDPR); where conflict-of-laws rules diverge, the stricter safeguard applies.
- Tier 4 – Practical Context: Domain facts, actors, constraints, and rollback plans that shape implementation.
The scaffold is invariant in structure yet open to revision through evidence (reflective equilibrium); every change is recorded and justified.
The Hybrid Reasoning Architecture
Three Knowledge Layers + One Runtime Adapter translate the scaffold into operational decisions:
Layer 1 — Static Rule Set (Axiom Surface)
- Minimal, actionable axioms; each in its own .yaml file with a token ID, a one-line gloss, tier mappings, Layer-2 links, and SHACL-compliant metadata.
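An axiom file of this kind might look like the following sketch. The field names and values are illustrative assumptions; only the kinds of fields (token ID, gloss, tier mapping, Layer-2 links, SHACL metadata) come from the description above:

```yaml
# Hypothetical Layer-1 axiom file (names and values are placeholders,
# not the production schema).
token_id: AX-PROP-001
gloss: "Prefer the least intrusive means that achieves the aim."
tiers: [2, 3]                    # mapping into the normative scaffold
layer2_links:                    # nodes in the justification graph
  - rationale:proportionality
  - law:EU-AI-Act
shacl:
  conformsTo: "shapes/AxiomShape"  # placeholder SHACL shape reference
```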
Layer 2 — Justification & Knowledge Graph
- A typed graph linking axioms to ethical rationales, legal citations, use cases, dissenting views, and incident postmortems (S1–S4).
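A minimal in-memory sketch of such typed links, assuming hypothetical node IDs and relation names (the production graph store and schema are not shown here):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Edge:
    source: str    # e.g. an axiom token ID from Layer 1
    relation: str  # e.g. "justified_by", "cited_in", "disputed_by"
    target: str    # e.g. a rationale, legal citation, or dissent node

# Illustrative edges; IDs are placeholders, not real graph content.
EDGES = [
    Edge("AX-PROP-001", "justified_by", "rationale:proportionality"),
    Edge("AX-PROP-001", "cited_in", "law:EU-AI-Act:Art-9"),
    Edge("AX-PROP-001", "disputed_by", "dissent:2024-03-note"),
]

def neighbors(node: str, relation: str) -> list[str]:
    """Return all targets reachable from `node` via `relation`."""
    return [e.target for e in EDGES if e.source == node and e.relation == relation]

print(neighbors("AX-PROP-001", "cited_in"))  # ['law:EU-AI-Act:Art-9']
```

Typing the edges is what lets the RAI Coach answer tier-aware queries such as "which legal sources back this axiom?" with a single relation lookup.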
Layer 3 — Evidence Corpus (Document Store)
- Versioned primary sources (laws, rights texts, papers, audits, model cards, telemetry).
Adapter Layer R — Runtime LLM Adapter (LoRA/QLoRA)
Roles:
- Fine-tuned RAI Coach: surfaces relevant axioms, queries Layers 2 and 3, and composes tier-aware answers.
- Guard Model: calibrates content-safety verdicts and posts them to the policy engine or a moderation queue.
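As a sketch, the Guard Model's routing step could look like the following. The threshold, category names, and payload keys are assumptions made for illustration, not the production policy:

```python
# Illustrative guard-model routing (threshold and categories are
# placeholder assumptions, not the deployed configuration).
RISK_THRESHOLD = 0.5

def guard_verdict(risk_scores: dict[str, float]) -> dict:
    """Map per-category risk scores to a routing decision."""
    flagged = {c: s for c, s in risk_scores.items() if s >= RISK_THRESHOLD}
    return {
        "action": "moderation_queue" if flagged else "policy_engine_pass",
        "flagged_categories": sorted(flagged),
    }

print(guard_verdict({"toxicity": 0.12, "privacy": 0.71}))
# {'action': 'moderation_queue', 'flagged_categories': ['privacy']}
```

The design choice here is that the guard only routes; the policy engine or a human moderator, not the model, makes the final call.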
What's live today
- Layer 1 is implemented in our Responsible AI Coach.
- The RAI Coach converses on the basis of the axiom set.
First Use Case: Sustainability (cross-tier constraint)
AI has real environmental costs, though they’re often hidden. We embed sustainability as a responsibility that runs through all four tiers:
- Tier 1 — Truth & Meta-ethics: Environmental integrity and intergenerational justice are truth-apt norms; lifecycle CO₂ evidence is part of what makes claims responsible.
- Tier 2 — Legitimacy: Prefer lower-impact, functionally comparable options, guided by precaution and proportionality; monitor rebound effects.
- Tier 3 — Legality: Align with global and regional obligations (UNFCCC, SDG 13, CSRD), with clear chains of accountability.
- Tier 4 — Practical Context: In the Horizon app, the AI CO₂ Tracker serves the education domain: students converse with the Coach about their own emissions, reflect on equivalence data, and learn how responsible choices reduce impact.
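A minimal sketch of the equivalence step such a tracker performs. Both emission factors below are placeholder assumptions for illustration; the Tracker's actual data and factors are not reproduced here:

```python
# Illustrative CO2-equivalence calculation. Both factors are
# placeholder assumptions, not measured values from the Horizon app.
G_PER_QUERY = 4.0     # assumed grams CO2e per chat query
G_PER_CAR_KM = 170.0  # assumed grams CO2e per kilometre driven by car

def session_equivalence(num_queries: int) -> dict:
    """Translate a chat session's footprint into an everyday equivalent."""
    grams = num_queries * G_PER_QUERY
    return {
        "grams_co2e": grams,
        "car_km_equivalent": round(grams / G_PER_CAR_KM, 2),
    }

print(session_equivalence(100))
```

Translating raw grams into familiar quantities is what lets students reflect on equivalence data rather than abstract numbers.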
Roadmap
- Layer 2 expansion: richer argument families, dissent tracking, automatic incident linking.
- Layer 3 growth: audited law and policy corpora, telemetry schemas, auditor-ready exports (CSV/JSON).
- Adapter-R development: a fine-tuned RAI Coach and guard models.
