Our frameworks do not replace the EU AI Act, the NIST AI RMF, or ISO 42001. They make them operable at board level. They can be deployed individually or together, depending on need.
A structured diagnostic of a board's cognitive maturity in the face of AI. Five dimensions, twenty-eight criteria, one consolidated score (sketched after the five dimensions below).
Measuring what a board knows, understands, demands, decides — and documents.
Dimension 1 — Competencies. The board's cognitive coverage: translators, ethicists, AI operators, researchers. Surfaces blind spots.
Dimension 2 — Process. AI's place on the board agenda, the frequency of review, and the quality of the reporting received.
Dimension 3 — Deliberative culture. Productive friction, room for disagreement, and the board's relationship to the machine in session.
Dimension 4 — Exposure. The company's material AI risks and the quality of their supervision.
Dimension 5 — Trace. Documented traceability of the board's demands (Caremark‑grade evidence).
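To make the consolidation concrete, here is a minimal sketch of how twenty-eight criterion scores could roll up into one number. The per-dimension criterion counts, the 0-4 scale, and the equal dimension weights are all illustrative assumptions; the actual criteria and weighting belong to the proprietary diagnostic.

```python
from statistics import mean

# Hypothetical criterion scores (0-4 scale assumed), grouped by dimension.
# Counts per dimension (6+6+5+6+5 = 28) are placeholders, not the real split.
scores = {
    "competencies":         [3, 2, 4, 1, 2, 3],
    "process":              [2, 3, 2, 3, 2, 2],
    "deliberative_culture": [1, 2, 2, 3, 1],
    "exposure":             [3, 3, 2, 2, 3, 2],
    "trace":                [1, 1, 2, 0, 1],
}
# Equal dimension weights assumed; the diagnostic may weight differently.
weights = {d: 1 / len(scores) for d in scores}

def consolidated_score(scores, weights, scale=4):
    """Average each dimension's criteria, then weight into one 0-100 score."""
    dim_scores = {d: mean(c) / scale for d, c in scores.items()}
    total = 100 * sum(weights[d] * s for d, s in dim_scores.items())
    return total, dim_scores

total, by_dim = consolidated_score(scores, weights)
print(f"Consolidated maturity: {total:.0f}/100")
# Weakest dimensions first: this ordering is what surfaces blind spots.
for d, s in sorted(by_dim.items(), key=lambda kv: kv[1]):
    print(f"  {d:<21} {100 * s:.0f}/100")
```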
A proprietary taxonomy that complements existing AI risk frameworks by adding the cognitive risks specific to board deliberation.
Model risk: bias, hallucinations, drift, lack of explainability, opaque outputs.
Cognitive abdication through "the machine said so"; dilution of the human signature on the decision.
Widening information gap between executives who master the models and a board that only receives outputs.
Accelerating decision cadence; shrinking windows for serious deliberation.
Gradual atrophy of the collective ability to argue and diverge once "the model has spoken".
Compatible with ERM frameworks (COSO, ISO 31000), the NIST AI RMF, and the EU AI Act.
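One way to see what the taxonomy adds is to hold it as a register and ask which entries the existing frameworks already name. The sketch below does exactly that; the category identifiers and the coverage mapping are illustrative assumptions, not the proprietary artifact.

```python
from dataclasses import dataclass, field

@dataclass
class CognitiveRisk:
    name: str
    description: str
    # Existing frameworks that already address the risk (assumed mapping).
    covered_by: list[str] = field(default_factory=list)

TAXONOMY = [
    CognitiveRisk("model_risk",
                  "Bias, hallucinations, drift, opacity",
                  covered_by=["NIST AI RMF", "EU AI Act", "ISO 31000"]),
    CognitiveRisk("cognitive_abdication",
                  "Deference to 'the machine said so'; diluted human signature"),
    CognitiveRisk("information_asymmetry",
                  "Executives master the models; the board only receives outputs"),
    CognitiveRisk("decision_velocity",
                  "Accelerating cadence; shrinking windows for deliberation"),
    CognitiveRisk("deliberative_atrophy",
                  "Eroding ability to argue and diverge once 'the model has spoken'"),
]

# The taxonomy's additive value: the risks no existing framework names.
additive = [r.name for r in TAXONOMY if not r.covered_by]
print("Board-specific cognitive risks:", ", ".join(additive))
```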
An AI governance charter built for boards. Shorter and sharper than market checklists. Designed to be voted on in session.
"The board signs what it has actually deliberated. Not what the machine has produced."
The charter fits in seven short articles:
I. Scope of AI systems under board supervision.
II. Information standard expected from management.
III. Cadence and review bodies.
IV. Doctrine for the board's own use of AI.
V. Incident framework and escalation thresholds.
VI. Training and AI literacy policy.
VII. External reporting: shareholders, regulators, stakeholders.
Compatible with EU AI Act obligations, aligned with the NIST AI RMF, and usable as proof of oversight (Caremark‑grade).
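As a thought experiment in operability, the charter's skeleton can also be carried as a versioned, machine-readable record, for instance in board-portal tooling. The field names, cadences, and threshold values below are illustrative assumptions, not the charter's actual content.

```python
CHARTER = {
    "version": "draft-for-vote",  # adopted only by board vote, in session
    "articles": {
        "I":   "Scope of AI systems under board supervision",
        "II":  "Information standard expected from management",
        "III": "Cadence and review bodies",
        "IV":  "Doctrine for the board's own use of AI",
        "V":   "Incident framework and escalation thresholds",
        "VI":  "Training and AI literacy policy",
        "VII": "External reporting: shareholders, regulators, stakeholders",
    },
    # Article III made concrete (cadence values assumed):
    "review_cadence": {"full_board": "quarterly", "risk_committee": "monthly"},
    # Article V made concrete (threshold values assumed):
    "escalation": {
        "notify_board_within_hours": 48,
        "triggers": ["material model failure", "regulatory inquiry"],
    },
}

assert len(CHARTER["articles"]) == 7  # the charter fits in seven short articles
```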
A proprietary measure of board deliberation quality. Because a board that converges too fast is no longer a place of judgment.
Share of meetings where substantial disagreement was explicitly formulated and documented.
Gap between the range of options actually explored and the single option finally retained.
Time allocated to deliberation on material decisions, beyond management's presentation.
The index is calibrated not to penalize speed — it penalizes the illusion of deliberation. A board can decide fast and score high. A board that "swallows" decisions without friction never will.
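A minimal sketch of how such an index could be computed from meeting records. The component normalizations, the equal weights, and the record fields are assumptions for illustration; the proprietary calibration differs.

```python
def deliberation_index(meetings):
    """meetings: list of dicts, one record per material decision reviewed."""
    n = len(meetings)
    # 1. Share of meetings with explicit, documented substantial disagreement.
    dissent_rate = sum(m["documented_dissent"] for m in meetings) / n
    # 2. Gap between options explored and the single option retained,
    #    saturating once three or more alternatives were on the table.
    option_gap = sum(min(m["options_explored"] - 1, 3) / 3 for m in meetings) / n
    # 3. Share of meeting time spent deliberating beyond management's presentation.
    deliberation_share = sum(m["deliberation_minutes"] / m["total_minutes"]
                             for m in meetings) / n
    # Note what never enters the formula: elapsed time to decide. That is the
    # calibration choice that lets a fast board score high, while a
    # frictionless one cannot.
    return round(100 * (dissent_rate + option_gap + deliberation_share) / 3)

meetings = [
    {"documented_dissent": True,  "options_explored": 3,
     "deliberation_minutes": 50, "total_minutes": 90},
    {"documented_dissent": False, "options_explored": 1,
     "deliberation_minutes": 5,  "total_minutes": 60},
]
print(deliberation_index(meetings))  # the frictionless meeting drags this down
```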
The skills matrix belongs to another era. The Composition Map adds the cognitive layer: four director profiles to balance.
Bilingual tech / business. Ex‑CTO, CDO or senior advisor. Challenges AI architecture without getting lost in it.
Academic, former regulator, or senior jurist. Surfaces biases, fundamental rights, and externalities.
Has scaled an organization on AI. Knows the cost of going to production. Challenges strategy from experience.
Recognized academic or public voice. Custodian of intellectual rigor, long horizon, public legitimacy.
Balance matters more than the individual presence of each profile: a board can be complete with one well‑chosen new director, or remain incomplete with four poorly combined ones.
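A toy model of that balance claim, with assumed names and scores: each director covers the four profiles to some degree, and the board's weakest profile sets its completeness, so articulation beats headcount. Nothing below reflects the actual Composition Map scoring.

```python
PROFILES = ["translator", "ethicist", "operator", "researcher"]

def coverage(board):
    """Per-profile coverage = the best individual coverage on that profile."""
    return {p: max(d[p] for d in board) for p in PROFILES}

def weakest_profile(board):
    """Completeness is capped by the least-covered profile."""
    return min(coverage(board).values())

# Four strong but poorly combined directors: nobody carries public legitimacy.
narrow = [
    {"translator": 0.9, "ethicist": 0.1, "operator": 0.2, "researcher": 0.0},
    {"translator": 0.8, "ethicist": 0.0, "operator": 0.3, "researcher": 0.1},
    {"translator": 0.2, "ethicist": 0.9, "operator": 0.1, "researcher": 0.1},
    {"translator": 0.1, "ethicist": 0.2, "operator": 0.9, "researcher": 0.0},
]
# One well-chosen addition closes the gap a fifth narrow profile never would.
hybrid = {"translator": 0.3, "ethicist": 0.4, "operator": 0.2, "researcher": 0.9}

print(weakest_profile(narrow))             # 0.1: four directors, still incomplete
print(weakest_profile(narrow + [hybrid]))  # 0.9: complete with one new director
```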
Deployment runs through a mission (M.01 to M.04) or a strategic offsite. First conversation in 45 minutes.