
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Holistic framework for AI in critical network infrastructures
This document establishes the main foundations of the AI4REALNET project, in particular the following key outcomes:
- The formal specification of domain-specific use cases (UCs), replicating real-world operating scenarios involving human operators

Towards functional safety management for AI-based critical systems
The webinar provides attendees with a comprehensive understanding of the challenges and opportunities associated with integrating AI into safety-critical systems.

FutureNewsCorp, or how the AI Act changed the future of news

The State of AI Regulations
This comprehensive guide sheds light on the current state of the global AI regulatory landscape, exploring the scope of each piece of legislation, how businesses are likely to be affected, and what this means for AI governance moving forward.

Leveraging the AI Act
This presentation gives an overview of, and further reading on, the EU AI Act: drafted in February 2021, extensively discussed in the EU Council and Parliament, due for final agreement in late 2023, and set to be enforced by the end of 2025.