
Meet the RAI Coach in our v2 App

A coach you can talk to
The v2 App now comes with a built-in guide: the Responsible AI (RAI) Coach. It's a coach you can talk to about data use, sustainability, fairness, or security, right where you work with AI. Built into the v2 App's learning space, the Coach can be asked about any feature, such as the CO₂ Tracker, OratAI, prompts, data use, or security setup, and it explains the responsible AI view: what good practice looks like, what trade-offs exist, and how your choices map to our framework.

Layer 1 at work: the Axiom Surface
The Coach is powered by Layer 1 of our Responsible AI Framework — a compact set of axioms that act as anchors for responsible AI conversations. These axioms are mapped to our four-tier scaffold, which means that when you chat with the Coach, it doesn’t just tell you what a feature does, but helps you understand why it matters across different dimensions:

  • Tier 1 — Truth & Meta-ethics: How do we know this claim or output is warranted? What counts as reliable or coherent here?

  • Tier 2 — Legitimacy: Does the way this feature works respect values like autonomy, fairness, and proportionality?

  • Tier 3 — Legality: Which legal or regulatory duties are relevant (e.g. transparency, data protection, disclosure)?

  • Tier 4 — Practical Context: How does the domain, audience, or use case shape what “responsible” looks like — and what fallback or oversight is needed?


Instead of handing down rules, the Coach uses these tiers to structure an explanation and dialogue. That way, you can explore Horizon’s features in a way that’s transparent, principled, and connected to the bigger framework.
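
The tier mapping above can be pictured as a small lookup structure the Coach walks when framing an answer. The following is a minimal illustrative sketch in Python: the tier names come from the framework described here, but the dictionary layout and the `frame_answer` helper are hypothetical, not Horizon's actual API.

```python
# Minimal sketch: the four-tier scaffold as a lookup table.
# Tier names are from the framework; everything else is illustrative.
TIERS = {
    1: "Truth & Meta-ethics",
    2: "Legitimacy",
    3: "Legality",
    4: "Practical Context",
}

def frame_answer(feature, notes_by_tier):
    """Arrange a Coach-style explanation of a feature, tier by tier."""
    lines = [f"Feature: {feature}"]
    for tier, name in sorted(TIERS.items()):
        note = notes_by_tier.get(tier, "(not covered in this answer)")
        lines.append(f"Tier {tier} ({name}): {note}")
    return "\n".join(lines)

print(frame_answer("CO₂ Tracker", {1: "emissions data as truth-apt evidence"}))
```

The point of the structure is exactly what the paragraph above says: an answer is never just "what the feature does", it is organised across all four dimensions, with gaps made visible.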

Sustainability you can see: CO₂ guidance in the flow
AI isn’t immaterial. Every prompt, image, or model run has an environmental cost — but users rarely see it. The Coach makes this impact explainable and discussable.
 
When you ask about sustainability, the Coach can unpack it across the four tiers:

  • Tier 1 — Truth & Meta-ethics: Why emissions data matter as truth-apt evidence for intergenerational justice.

  • Tier 2 — Legitimacy: How precaution and proportionality guide us toward lower-impact, functionally comparable choices.

  • Tier 3 — Legality: Where climate reporting duties and sustainability standards (like SDG 13, CSRD) enter the picture.

  • Tier 4 — Practical Context: How this connects directly to your experience in Horizon via our CO₂ Tracker, which visualises per-action emissions, provides equivalence comparisons (e.g. “this equals running a bulb for 10 minutes”), and suggests practical steps like batching requests or choosing lighter models.

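The equivalence comparisons mentioned above can be sketched in a few lines. All constants below are illustrative assumptions (a 10 W LED bulb and a grid intensity of roughly 0.4 kg CO₂ per kWh), not the tracker's actual coefficients.

```python
# Sketch: turn a per-action CO₂ estimate into an everyday equivalent,
# in the style of the tracker's "running a bulb" comparison.
# Assumed constants, illustrative rather than Horizon's real figures:
BULB_WATTS = 10           # a small LED bulb
GRID_KG_PER_KWH = 0.4     # rough average grid carbon intensity

def bulb_minutes(grams_co2):
    """Minutes of running the bulb that carry the same CO₂ footprint."""
    kwh = (grams_co2 / 1000) / GRID_KG_PER_KWH  # energy with that footprint
    hours = kwh * 1000 / BULB_WATTS             # kWh -> hours at BULB_WATTS
    return hours * 60

# e.g. an action estimated at 0.27 g CO₂:
print(f"≈ a 10 W bulb running for {bulb_minutes(0.27):.1f} minutes")
```

Batching requests or choosing a lighter model simply lowers the grams of CO₂ per useful output, which is why those appear among the tracker's practical suggestions.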

The Coach doesn’t just report numbers. It lets you converse about your own emissions:

  • “What’s my footprint so far this week?”

  • “Which features cost me the most CO₂?”

  • “How can I cut that in half without losing quality?”


By grounding the principle of sustainability in the scaffold and linking it to a live tool (the CO₂ tracker), the Coach makes environmental responsibility a personal, interactive part of everyday AI use.


What you can ask the Coach (examples)

  • Data & privacy: “How does Horizon handle my inputs here—and what should I disclose to users?”

  • CO₂ in practice: “Estimate the footprint of generating five images and suggest ways to reduce it.”

  • Context checks: “Does this differ for students vs. employees?”

  • Audit view: “Show me the rationale at an ‘auditor’ depth.”


Why it matters
Responsible AI shouldn’t live in PDFs. The RAI Coach turns principles into everyday practice—right where decisions happen. It helps teams move from “What does the policy say?” to “What should I do here, now, with this feature and these users?”

What’s live today

  • Conversational RAI guidance across v2 features.

  • Layer-1-based answers mapped to the four tiers for transparent reasoning.

  • Sustainability insights with CO₂ estimates, comparisons, and practical reduction tips.

 
What’s next

  • Fine-tuned model on Layer 1: We will train a specialized model directly on our Layer-1 axioms and justifications, so the Coach can reason with even greater consistency and depth.

  • Deeper scenarios and role-based playbooks inside the learning space.

  • Expanded sustainability views with more granular suggestions.

  • Broader coverage as additional layers of the framework are published.


The RAI Coach is here to make responsible AI clear, practical, and usable—one conversation at a time.
 
