
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Application of the ALTAI tool to power grids, railway networks and air traffic management
This document presents the responses from industry (operators of critical infrastructures) to the Assessment List for Trustworthy AI (ALTAI) questionnaire for three domains and their specific use cases: power grid, railway network, and air traffic management.

Holistic framework for AI in critical network infrastructures
This document establishes the main foundations of the AI4REALNET project, in particular the following key outcome: the formal specification of domain-specific use cases (UCs), replicating real-world operating scenarios involving human operators.

Uncertainty-Based Learning of a Lightweight Model for Multimodal Emotion Recognition
In this paper, the authors propose a lightweight neural network architecture that extracts and analyses multimodal information, using the same audio and visual networks across multiple temporal segments.

Towards functional safety management for AI-based critical systems
The webinar provides attendees with a comprehensive understanding of the challenges and opportunities associated with integrating AI into safety-critical systems.

Gesture Recognition
This lecture gives an overview of gesture recognition, which has many applications in Human-Machine Interaction (HMI), Human-Robot Interaction (HRI), sign language, navigation, manipulation in VR environments, and distance learning.