
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Application of the ALTAI tool to power grids, railway network and air traffic management
This document presents the responses from industry (operators of critical infrastructure) to the Assessment List for Trustworthy AI (ALTAI) questionnaire for three domains and specific use cases: power grid, railway network, and air traffic management.

Towards functional safety management for AI-based critical systems
The webinar provides attendees with a comprehensive understanding of the challenges and opportunities associated with integrating AI into safety-critical systems.

CO2A – Contrastive Conditional Domain Alignment
A novel unsupervised domain adaptation approach for action recognition from videos, inspired by recent work on contrastive learning.

Getting Hired “NYC Bias Audit” Ready
Holistic AI conducted a Bias Audit for Hired to demonstrate compliance ahead of NYC Local Law 144 (“NYC Bias Audit” legislation), which will come into effect on 1st January 2023.

SAFEXPLAIN Introduction to Trustworthy AI for Safety-Critical Systems
This introductory video provides an overview of the steps taken by the SAFEXPLAIN project to ensure that the AI-based solutions used in safety-critical systems are trustworthy, explainable, and compliant with the safety guidelines of diverse industrial domains.