
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Application of the ALTAI tool to power grids, railway network and air traffic management
This document presents the responses from industry (operators of critical infrastructures) to the Assessment List for Trustworthy AI (ALTAI) questionnaire for three domains and specific use cases: power grid, railway network, and air traffic management.

Uncertainty-Based Learning of a Lightweight Model for Multimodal Emotion Recognition
In this paper, the authors propose a lightweight neural network architecture that extracts and analyses multimodal information, reusing the same audio and visual networks across multiple temporal segments.

Towards functional safety management for AI-based critical systems
The webinar provides attendees with a comprehensive understanding of the challenges and opportunities associated with integrating AI into safety-critical systems.

An Open Dataset of Synthetic Speech
This paper introduces a multilingual, multispeaker dataset composed of synthetic and natural speech, designed to foster research and benchmarking in synthetic speech detection.

Word-Class Embeddings for Multiclass Text Classification
Code for Word-Class Embeddings (WCEs), a form of supervised embeddings especially suited for multiclass text classification.
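To make the idea concrete, the sketch below illustrates one common way supervised word-class embeddings can be built: each term gets a vector whose components measure its association with each class, estimated from the labelled training set. This is a minimal illustration under assumed inputs (a document-term matrix `X` and a binary label matrix `Y`), not the repository's actual API.

```python
import numpy as np

def word_class_embeddings(X, Y):
    """Illustrative word-class embedding construction (hypothetical helper).

    X: (n_docs, n_terms) binary or tf-idf document-term matrix
    Y: (n_docs, n_classes) binary label matrix
    Returns a (n_terms, n_classes) matrix of term-class association scores.
    """
    # Co-occurrence counts between terms and classes
    counts = X.T @ Y                                   # (n_terms, n_classes)
    # Normalise by term frequency to get P(class | term)-like scores
    term_freq = X.sum(axis=0).reshape(-1, 1) + 1e-12
    return counts / term_freq

# Typical usage: concatenate these supervised dimensions with pretrained
# (unsupervised) word embeddings before feeding them to a text classifier.
```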