Lidia Bergua Sarroca | Compliance Officer & DPO | June 17, 2024

EU AI Act: Balancing AI-tech innovation with security and compliance

In a world where artificial intelligence (AI) is no longer the stuff of science fiction but a tangible reality, the EU Artificial Intelligence Act (EU AI Act) places an emphasis on transparency and risk management that aligns well with security-by-design best practices. However, it also introduces additional layers of complexity.

Striking the balance between being first in the AI race by developing an innovative application in record time and ensuring compliance by maintaining robust security measures can feel like walking a tightrope: on one side, there's the abyss of security breaches and non-compliance; on the other, the chasm of being outperformed by your competitors. No pressure, then.

What is the EU AI Act?

The EU AI Act, finally approved on Thursday, 13 June, stands as a regulatory beacon guiding AI development and deployment, and creates a comprehensive legal framework for AI across the EU. Its primary goals are to ensure the safety and fundamental rights of individuals and businesses and to facilitate a single market for lawful, ethical, and robust AI applications, all while fostering AI innovation.

The EU AI Act’s reach extends beyond the borders of the EU. Companies established abroad must also comply if they provide AI systems to the EU market or if their AI systems affect individuals within the EU. This means that whether you’re in San Francisco or Sydney, if your AI products are reaching the EU, you must adhere to the AI Act's extensive obligations. 

A critical aspect of the EU AI Act is the requirement for compliance across the entire supply chain. It's not just the primary providers of AI systems who must adhere to the regulation, but all parties involved (called "operators"), including those integrating General Purpose AI (GPAI) and foundation models from third parties. If you integrate third-party AI technology into your applications, you are also liable for ensuring that these components meet the stringent cybersecurity and transparency standards set by the EU AI Act. In other words, you can't just blame the AI tech provider if things go awry; you'll be in the hot seat too.

Key obligations for high-risk AI systems under the EU AI Act

One of the main characteristics of the EU AI Act is its risk-based classification system, which categorizes AI systems into four risk levels (unacceptable, high, limited, and low), each with corresponding security and compliance requirements. Let's delve into the key requirements for high-risk AI systems; a short code sketch after the list shows one way these tiers and obligations might be tracked in practice:

  1. Risk management system: One of the cornerstone obligations under the EU AI Act is the establishment of a robust risk management system throughout the AI system’s lifecycle. Operators must conduct periodic cyber risk assessments, including assessments for third-party vendors. 
  2. Accuracy, robustness, and cybersecurity: High-risk AI systems must be designed to achieve appropriate levels of accuracy, robustness, and cybersecurity.
  3. Security incident response plan: A solid security incident response plan must be established to address and manage cybersecurity incidents effectively.
  4. Technical documentation: Operators must draw up detailed technical documentation demonstrating compliance with transparency obligations.
  5. Data governance: Ensuring that data used in training, validation, and testing is relevant, representative, and free from errors is essential.
  6. Record keeping and transparency: Operators must maintain comprehensive records and provide clear instructions for use, ensuring transparency and accountability. 
  7. Human oversight and quality management: AI systems must include human oversight to ensure proper functioning and compliance, and operators must maintain a quality management system.
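
To make these tiers and obligations concrete, here is a minimal, illustrative Python sketch of how an organization might track them internally. Everything in it (the RiskTier enum, the AISystem class, the obligation labels, the example "cv-screener" system) is our own hypothetical modeling for this post, not an official taxonomy or API from the Act:

```python
from enum import Enum
from dataclasses import dataclass, field

class RiskTier(Enum):
    """The four risk levels defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # subject to the obligations below
    LIMITED = "limited"            # mainly transparency duties
    LOW = "low"                    # largely unregulated

# The seven high-risk obligations listed above, as simple labels.
HIGH_RISK_OBLIGATIONS = [
    "risk management system",
    "accuracy, robustness, and cybersecurity",
    "security incident response plan",
    "technical documentation",
    "data governance",
    "record keeping and transparency",
    "human oversight and quality management",
]

@dataclass
class AISystem:
    name: str
    tier: RiskTier
    controls_in_place: set[str] = field(default_factory=set)

    def outstanding_obligations(self) -> list[str]:
        """Return high-risk obligations not yet covered by a control."""
        if self.tier is not RiskTier.HIGH:
            return []
        return [o for o in HIGH_RISK_OBLIGATIONS
                if o not in self.controls_in_place]

# Hypothetical example: a CV-screening system (high-risk under the Act).
screener = AISystem(
    name="cv-screener",
    tier=RiskTier.HIGH,
    controls_in_place={"technical documentation", "data governance"},
)
print(screener.outstanding_obligations())  # five gaps remain to close
```

In practice, a threat modeling tool carries far richer metadata than this, but even a simple checklist like the one above makes compliance gaps visible on a per-system basis.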

As the stringent obligations above show, ensuring compliance with the EU AI Act presents a multifaceted challenge for organizations, particularly those integrating complex AI technologies into their operations. The flip side is that organizations that begin baking security in during the design phase gain a clear head start.

Our recommendations

Recognizing the importance of securing AI before the Act was even being debated, IriusRisk got ahead of the game and developed a dedicated Security Library for threat modeling AI and ML applications.

The library gives users a true view of their entire risk architecture and is available in both Enterprise and our free-forever Community Edition. It includes three core categories, each with its own components, to effectively map out your use of AI:

  1. Data Preparation for Artificial Intelligence Modeling 
  2. Build an Artificial Intelligence model 
  3. Deploy an Artificial Intelligence model

In addition, see some of our most recent resources from thought leaders in the AI arena, including insightful webinar sessions with subject matter experts.