
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.
Structured Power Grid Simulation Dataset for Machine Learning: Failure and Survival Events in Grid2Op's L2RPN WCCI 2022 Environment
This dataset was developed for and used in the paper titled "Fault Detection for Agents in Power Grid Topology Optimization: A Comprehensive Analysis" by Malte Lehna, Mohamed Hassouna, Dmitry Degtyar, Sven Tomforde, and Christoph Scholz.

Webinar "Industry-driven Use Cases"
The AI4REALNET project covers the perspective of AI-based solutions addressing critical systems (electricity, railway, and air traffic control), modelled as networks that can be simulated, traditionally operated by humans, and where AI complements …

Webinar "Distributed and Hierarchical Reinforcement Learning"
In this webinar, the AI4REALNET project provides an overview of two emerging topics in Reinforcement Learning (RL): Distributed RL and Hierarchical RL.

Application of the ALTAI tool to power grids, railway network and air traffic management
This document presents the responses from industry (operators of critical infrastructures) to the Assessment List for Trustworthy AI (ALTAI) questionnaire for three domains and specific use cases: power grid, railway network, and air traffic management.

Holistic framework for AI in critical network infrastructures
This document establishes the main foundations of the AI4REALNET project, in particular the following key outcomes:
- The formal specification of domain-specific use cases (UCs), replicating real-world operating scenarios involving human operators …