
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Towards functional safety management for AI-based critical systems
This webinar gives attendees a comprehensive understanding of the challenges and opportunities of integrating AI into safety-critical systems.

Position paper on AI for the operation of critical energy and mobility network infrastructures
This position paper outlines AI4REALNET’s approach to applying AI in network infrastructure operations, translating application needs into algorithmic proposals for effective human-AI collaboration in decision-making processes.

Augmentation-free unsupervised approach for point clouds
Unsupervised learning on 3D point clouds has evolved rapidly, driven largely by contrastive methods that rely on data augmentation; this work presents an augmentation-free alternative.