The European Union is on the brink of a significant regulatory update with the impending finalization of the EU AI Act. First proposed by the European Commission in April 2021, the Act has undergone rigorous debate in the EU Council and Parliament and is expected to reach final agreement by late 2023, with enforcement projected for the end of 2025.

Aimed at safeguarding human rights, public safety, and the environment, the AI Act introduces broad horizontal classifications of risks associated with artificial intelligence systems. It delineates measures to mitigate the highest risks and outright bans certain AI applications. The Act will demand strict adherence to risk management, data quality, oversight, operational monitoring, and documentation, especially for high-risk scenarios. These scenarios, while generally defined, will undergo a detailed review by national Notified Bodies before any AI system can be deployed.

Providers and deployers of AI will face substantial fines for infractions that result in harm to EU citizens, regardless of where the AI system is located, echoing the global reach of the GDPR. With harmonized standards still under development, the EU is setting a global precedent for the regulation of AI technologies.

In light of this, the Adra-e project has prepared an in-depth overview of the AI Act (including further reading).

Download Now

This presentation includes:

  • Using the AI Act: A practical guide for evaluating risks and adhering to regulatory requirements.
  • AI Act: Life-Cycle for (high-risk) AI Solutions: Outlining the stages from development to deployment and decommissioning of high-risk AI, ensuring continual compliance.
  • AI Act: Duties of Providers: Detailing the responsibilities of AI system providers to meet the stringent regulations.
  • AI Act: Duties of Deployers: Specifying the obligations of entities that deploy AI solutions in real-world environments.
  • Deploying AI with Foundation or Large Language Models: Handling the specific challenges associated with deploying sophisticated AI models like GPT-4.
  • Ensuring appropriate Risk Management System: Mandating the establishment of systems to identify and mitigate risks throughout the AI system's lifecycle.
  • Ensuring appropriate Data Governance: Setting standards for data management that align with EU regulations.
  • Ensuring Transparency: Requiring clear and understandable information about AI systems for users.
  • Enabling Human oversight: Ensuring that there is a human in the loop to oversee AI decisions and interventions.
  • Enabling Accountability: Documentation & Record-keeping: Keeping detailed records to trace AI decision-making processes and actions.
  • Additional Issues: Discussing emerging issues and how they are addressed under the AI Act.
  • Leveraging the AI Act: Utilizing the regulation as a competitive advantage and for enhancing trust in AI applications.
  • References & Further reading: Providing resources for a deeper understanding of the Act and its implications.

This comprehensive presentation reflects the depth and breadth of the EU's approach to regulating AI, emphasizing a balance between innovation and consumer protection.

The Adra-e group would especially like to acknowledge all those who contributed or were consulted during the preparation of the presentation.

AGULUCAR, ALEXEI GRINBAUM, ANDRE MEYER-VITALI, ARTHIT SURIYAWONGKUL, CHOKRI MRAIDHA, DANIEL ALONSO, EDOARDO CELESTE, EMMANUEL KAHEMBWE, FATEMEH AHMADI ZELETI, FRANCESCA PRATESI, J AHERN, LINDSAY FROST, MEERI, FERNANDO MORENO, NIKOLAOS MATRAGKAS, PAOLETTO BARATTINI, RANGANAI CHAPARADZA, RAY WALSHE, SHARON FARRELL, SILVANA MACMAHON, SONJA ZILLNER, Z AJANOVIC

 
