Guardian · Nordic AI Integrity

Continuous monitoring for high-risk AI systems under the EU AI Act

Guardian is a continuous monitoring and auditability platform for high-risk AI systems under the EU AI Act. It helps compliance, risk, legal, and AI teams see what their AI is doing in production, maintain audit-ready evidence, and respond faster when regulators or internal auditors ask questions.

  • See risk, drift, fairness, incidents, and oversight signals in one place
  • Maintain an evidence trail that is ready for internal review and regulatory questions
  • Start with one high-risk AI system first, then expand

Compliance score

0/100 (AT-RISK)

30-day trend

Degrading vs. prior period

  • Fairness threshold — demographic parity drift exceeded for cohort B.
  • Model status — moved from GOOD to AT-RISK after latest snapshot.
  • Documentation — Article 11 bundle pending one attachment from risk owner.

What is Guardian?

Guardian is a continuous monitoring and auditability platform for organisations operating high-risk AI systems under the EU AI Act.

It gives compliance, risk, legal, and AI teams a practical operating layer for production monitoring, incident logging, oversight records, and evidence maintenance.

Guardian is not a legal verdict or a one-click compliance claim. It helps teams maintain a shared, defensible record of what is happening around a live AI system — without reconstructing evidence under pressure.

Before and after Guardian

Before Guardian

  • Evidence is reconstructed under pressure for audits and regulator questions
  • Incidents and oversight actions live across emails, spreadsheets, and slide decks
  • Compliance, legal, risk, and ML teams work from different versions of the truth

After Guardian

  • One shared record shows what a high-risk AI system is doing in production
  • Incidents, follow-up actions, and oversight events are logged and quickly exportable
  • Teams can answer “what happened?” and “what did we do?” without scrambling

Who Guardian is for

  • Compliance teams responsible for AI Act readiness
  • Risk teams responsible for model controls and review thresholds
  • Legal teams preparing for internal and external review
  • AI and ML teams operating production systems under governance scrutiny
  • Product or operations teams managing AI-enabled workflows

What Guardian helps you monitor

  • Drift and performance changes over time
  • Fairness and demographic parity indicators
  • Data quality and input anomalies
  • Human oversight actions and review events
  • AI incidents and remediation steps
  • Documentation gaps and missing evidence
  • Audit trails and exportable evidence bundles

Guardian helps teams understand what to monitor for one real system first — then turn that into a repeatable operating model.
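To make the drift item on that list concrete, here is a minimal sketch of one widely used drift metric, the Population Stability Index (PSI), comparing a baseline window against a production window. The bin count and the 0.2 alert threshold are illustrative assumptions, not Guardian's actual methodology.

```python
import math

# Hypothetical sketch: PSI as one way to quantify input drift between a
# baseline feature sample and a recent production sample.
def psi(baseline: list[float], current: list[float], bins: int = 10) -> float:
    """Population Stability Index between two numeric samples."""
    lo, hi = min(baseline), max(baseline)

    def frac(sample: list[float]) -> list[float]:
        counts = [0] * bins
        for x in sample:
            # Bucket by baseline range; clamp out-of-range values to edge bins.
            i = int((x - lo) / (hi - lo) * bins) if hi > lo else 0
            counts[min(max(i, 0), bins - 1)] += 1
        # Small floor avoids log(0) for empty bins.
        return [max(c / len(sample), 1e-6) for c in counts]

    b, c = frac(baseline), frac(current)
    return sum((ci - bi) * math.log(ci / bi) for bi, ci in zip(b, c))

baseline = [0.1 * i for i in range(100)]        # stable reference window
shifted = [0.1 * i + 3.0 for i in range(100)]   # drifted production window

assert psi(baseline, baseline) < 0.1   # same distribution: negligible PSI
assert psi(baseline, shifted) > 0.2    # shifted distribution: alert-worthy
```

A common rule of thumb treats PSI below 0.1 as stable and above 0.2 as significant drift; whatever thresholds a team adopts, the point is that they are explicit and documented, not opaque.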

What becomes easier with Guardian

  • Responding to audit and regulator questions faster
  • Keeping compliance, legal, risk, and ML aligned on the same system
  • Logging incidents and follow-up actions in one place
  • Showing what changed, when it changed, and how the team responded
  • Moving from static documentation to continuous evidence maintenance

Built for high-risk AI use cases

Guardian is purpose-built to handle the strict requirements of Annex III high-risk AI systems, securing your operations where the regulatory scrutiny is highest.

More use-case notes and guides live in Resources.

One operating view for high-risk AI systems

  • See in-scope systems in one inventory: owners, context, and review state available to both technical and governance teams.

  • View drift, fairness, and data-quality signals next to open incidents and oversight actions, so the operating picture is legible in one pass.

  • Route alerts to owners and keep a living record of what changed, when, and who responded for internal and external review.

High-risk models (12 active)

  Model         Status     Incidents   Risks
  Credit v2.1   AT-RISK    2 open      4
  HR screen     ON TRACK   0           1
  Triage API    WATCH      1 open      2

Legend: On track · Watch · Escalate

Illustrative — status from monitored signals and thresholds; not a legal or conformity determination.

From raw metrics to defensible monitoring records

Guardian sits on top of your models and existing monitoring inputs, turning performance and drift information into a structured operating record that risk, compliance, and model owners can use. You see which systems are off baseline, which cohorts need review, and which obligations the evidence should speak to, all in one place.

Signals tied to the EU AI Act

Guardian brings fairness, drift, data quality, and incident signals into one place and labels what needs attention, with explicit references to obligations such as Articles 9, 10, 11, 14, and 62. Trends and thresholds are documented so teams can show what changed and when, without treating a single headline number as a legal outcome.

Explanations in the language of supervision

When something degrades, Guardian can surface what it means in obligation terms — for example, how an event touches Article 10 (data and data governance) and Article 14 (human oversight). Risk and legal teams get incidents framed for audits and internal review, not only model metrics.

Built for EU-regulated environments

Guardian is designed for financial services, AML, HR, and public-sector style deployments under the EU AI Act and GDPR. Data stays in the EU, the focus is on metrics and records rather than unnecessary raw personal data, and documentation is exportable for boards and advisors.

The point is to know where high-risk AI is off track before a review becomes a fire drill, not to replace legal judgment or a formal conformity process where you need one.

⚠️ Fairness incident – Credit Scoring model v2.1

Explanation: The model shows a sustained demographic parity drop over the last 72 hours, breaching the configured fairness threshold.

Response: Risk and model owners received an alert within minutes, paused automatic approvals for the affected cohort, and documented mitigation steps directly in Guardian.

Art. 10 · Art. 14 · Art. 62
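For context on what a "demographic parity drop" measures, here is a minimal sketch of the underlying metric: the gap in positive-outcome rates between two cohorts. The 0.10 alert threshold and the cohort labels are hypothetical, not Guardian's configured values.

```python
# Hypothetical sketch: demographic parity difference between two cohorts.
# `decisions` is a list of (cohort, outcome) pairs, where outcome 1 = approved.
def parity_difference(decisions: list[tuple[str, int]]) -> float:
    """Absolute gap in approval rates between the two cohorts present."""
    rates = {}
    for cohort in {c for c, _ in decisions}:
        outcomes = [o for c, o in decisions if c == cohort]
        rates[cohort] = sum(outcomes) / len(outcomes)
    a, b = rates.values()
    return abs(a - b)

THRESHOLD = 0.10  # illustrative alert threshold, not a legal standard

# Cohort A: 80% approved; cohort B: 55% approved.
window = [("A", 1)] * 80 + [("A", 0)] * 20 + [("B", 1)] * 55 + [("B", 0)] * 45
gap = parity_difference(window)

assert round(gap, 2) == 0.25
assert gap > THRESHOLD  # would raise a fairness alert in this sketch
```

In a monitoring setup, this gap would be computed per time window so a sustained widening, like the 72-hour drop described above, is visible as a trend rather than a single number.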

Open Methodology

Not black-box scoring — open, defensible methodology

  • Grounded in Peer-Reviewed Methodology

    Guardian uses monitoring thresholds derived from published statistical methods and regulatory references — not opaque proprietary scoring.

  • Developed with Academic Oversight

    Our methodology is developed with Dr. OJ Akintande of DTU Compute, bringing academic rigor to fairness, drift and model-risk monitoring.

  • Mathematically Defensible Outputs

    Risk, legal and compliance teams can defend every alert and audit output through explicit metrics, thresholds and documented logic.

We do not rely on black-box compliance scoring. Guardian operationalizes academically grounded metrics and open threshold logic so risk, legal and compliance teams can defend every alert and audit output.

Nordic AI Integrity is co-founded by Thomas Noba (CEO) and Joris Cappa (COO), Copenhagen, Denmark, with academic oversight from Dr. OJ Akintande (DTU Compute).

Built for Nordic AI governance in 2026

AI Act, NIS2, GDPR, and ISO 42001 are converging across the Nordics. Guardian helps you unify your AI inventory, risk register, and technical documentation in one platform, preventing the overhead of running four separate compliance programs.

  • EU AI Act
  • NIS2
  • GDPR
  • ISO 42001

Guardian is currently being evaluated by Nordic organisations in finance and HR for their 2026 AI Act readiness programmes.

How we work with high-risk AI teams

A steady pace for evidence work, not a one-off project

Drift checks, fairness views, and audit-oriented documentation are easier to keep current when the record lives next to production signals, instead of being rebuilt in slides before every review.

  1. Discovery: Map your high-risk AI systems and define your exact regulatory scope.

  2. Pilot: Connect 1–2 critical models (credit, hiring, healthcare) to measure immediate compliance gaps.

  3. Rollout: Scale continuous monitoring and automated documentation across your entire AI estate.

Pricing

Fixed-price. No timesheets. No scope creep.

Choose your entry point: fixed-scope audit or continuous monitoring.

Integrity-4 Audit

One-time investment

  • Starter (1 AI system): €9,000
    Best for first-time AI Act readiness on 1 AI system

  • Standard (3 systems, most popular): €18,000
    Best for organisations with multiple high-risk systems

  • Enterprise (unlimited systems): €28,000
    Best for complex organisations needing broader coverage

Includes: 4-week delivery + 12 months support

Book Scoping Call

Guardian Monitoring

Monthly subscription

  • Starter (1 system): €1,500/mo
    Best for 1 production system

  • Growth (5 systems, most popular): €3,500/mo
    Best for several high-risk models

  • Scale (unlimited systems + API): €6,000/mo
    Best for enterprise-wide AI governance

14-day free trial · No credit card required

Start Free Trial

Bundle offer: Get 3 months of Guardian free when you purchase an Integrity-4 Audit first.

Security, privacy, and audit readiness

Guardian is not legal advice and does not guarantee compliance. Nordic AI Integrity is the company behind Guardian.

Guardian is designed for regulated organisations. Every control is documented and auditable.

  • EU data residency
  • Encryption in transit and at rest
  • No customer data used to train LLMs
  • Limited prompt and output retention
  • Full audit trail for every compliance event
  • Exportable documentation for internal and external auditors

Co-founded by Thomas Noba (CEO) and Joris Cappa (COO), Nordic AI Integrity ApS · Copenhagen, Denmark.

Start with one high-risk AI system first

The fastest path to AI Act readiness is not an enterprise-wide programme. It is a focused baseline around one real system already in production.

Guardian's 4-week Readiness Sprint helps teams identify what to monitor, what evidence is missing, and what operating structure is needed to maintain a credible baseline.

  • Know exactly what to monitor for one high-risk AI system
  • Create an audit-ready evidence baseline in 4 weeks, not months of workshops
  • Give compliance, legal, risk, and ML one shared system of record from day one

See the 4-week readiness sprint

Start with one system, not a full transformation

Book a readiness call to choose one system, understand what you would monitor, and see what your first 4-week evidence baseline could look like.