
Explainable AI for systems with functional safety requirements
Explainable AI (XAI) is vital for making AI decision-making processes transparent and understandable to human experts, and for ensuring safety and regulatory compliance.

Holistic framework for AI in critical network infrastructures
This document establishes the main foundations of the AI4REALNET project, delivering the following key outcome:
- The formal specification of domain-specific use cases (UCs), replicating real-world operating scenarios involving human operators.

AI for Teachers
AI for Teachers is a website dedicated to supporting the integration of Artificial Intelligence knowledge throughout K-12 learning.

The Mechanics of Context-Aware Decision-Making Using AI
Blog post on AI, cognition, and decision-making.

Real-Time Context-Aware Microservice Architecture for Predictive Analytics and Smart Decision-Making
This paper proposes a scalable architecture that provides real-time context-aware actions based on predictive streaming data processing, as an evolution of a previously proposed event-driven service-oriented architecture.

Towards functional safety management for AI-based critical systems
The webinar provides attendees with a comprehensive understanding of the challenges and opportunities associated with integrating AI into safety-critical systems.