
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Holistic framework for AI in critical network infrastructures
This document establishes the main foundations of the AI4REALNET project, in particular the following key outcome: the formal specification of domain-specific use cases (UCs) replicating real-world operating scenarios involving human operators.

AI for Teachers
AI for Teachers is a website dedicated to supporting the integration of Artificial Intelligence knowledge throughout K-12 learning.

Position paper on AI for the operation of critical energy and mobility network infrastructures
This position paper outlines AI4REALNET’s approach to applying AI in network infrastructure operations, translating application needs into algorithmic proposals for effective human-AI collaboration in decision-making processes.

Report on meta-analysis on externalities of acceptability and trustworthiness of ADR
This Adra-e deliverable presents an analysis of the externalities surrounding acceptability and trustworthiness in ADR-supported innovative technologies.

SAFEXPLAIN Introduction to Trustworthy AI for Safety-Critical Systems
This introductory video provides an overview of the steps taken by the SAFEXPLAIN project to ensure that the AI-based solutions used in safety-critical systems are trustworthy, explainable, and compliant with the safety guidelines of diverse industrial domains.