
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Webinar "Industry-driven Use Cases"
The AI4REALNET project covers the perspective of AI-based solutions addressing critical systems (electricity, railway, and air traffic control), modelled as networks that can be simulated, that are traditionally operated by humans, and where AI complements a…

Webinar "Distributed and Hierarchical Reinforcement Learning"
In this webinar, the AI4REALNET project provides an overview of two emerging topics in Reinforcement Learning (RL): Distributed RL and Hierarchical RL.

Holistic framework for AI in critical network infrastructures
This document establishes the main foundations of the AI4REALNET project, in particular the following key outcomes: the formal specification of domain-specific use cases (UCs), replicating real-world operating scenarios involving human operators…

Towards functional safety management for AI-based critical systems
The webinar provides attendees with a comprehensive understanding of the challenges and opportunities associated with integrating AI into safety-critical systems.

Position paper on AI for the operation of critical energy and mobility network infrastructures
This position paper outlines AI4REALNET’s approach to applying AI in network infrastructure operations, translating application needs into algorithmic proposals for effective human-AI collaboration in decision-making processes.