The EU AI Act: What It Means for Alteryx and Our Customers

What's New   |   Tommy Ross   |   Jun 6, 2024   |   TIME TO READ: 6 MINS

On May 21, the European Council gave the final approval required to enact the EU AI Act, the world’s first comprehensive regulation of artificial intelligence systems. It is sure to have an outsize impact on the AI ecosystem, from model development to the deployment and use of tailored AI-driven applications. In this blog post, we’ll look at what’s in the Act and how Alteryx is preparing to ensure that we and our customers can operate safely and confidently in alignment with its requirements.

What’s in the EU AI Act?

According to the EU Commission, the goal of the AI Act is “to foster trustworthy AI in Europe and beyond, by ensuring that AI systems respect fundamental rights, safety, and ethical principles and by addressing risks of very powerful and impactful AI models.” To do that, the legislation establishes two conceptual frameworks: one based on risk, and one based on capability. It also directs the establishment of infrastructure across the EU to monitor AI activities and compliance.

Risk-Based Framework: The risk-based framework evaluates AI systems based on their intended uses. It defines four categories of use – prohibited, high-risk, medium-risk, and low-risk – with the latter two treated together below:

  • Prohibited uses are barred from the EU market under any circumstances. They include the use of AI to implement social credit scoring systems, conduct real-time biometric surveillance, or purposefully manipulate or deceive users.
  • High-risk uses are defined in an Annex. The current annex focuses primarily on automated decision-making in areas where bias could directly harm the rights or opportunities of individuals, such as AI systems used in decisions about hiring or promotions, benefits eligibility, or law enforcement. Developers of these systems must comply with a range of risk-minimization obligations, including employing risk management and data governance measures, conducting robust testing, and ensuring the quality of training data while correcting it for bias.
  • Medium- and low-risk uses capture all use cases not covered by the other two categories; the vast majority of AI systems are expected to fall here. The legislation does not define these categories in detail, and it imposes no regulatory obligations specific to them. However, developers of all AI systems – no matter how they are used – must still comply with existing EU laws addressing concerns like copyright protection, privacy, and cybersecurity. (A minimal sketch of this risk triage appears just below.)
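
To make this triage concrete, here is a minimal Python sketch of how an organization might sort intended uses into the Act’s tiers. The tier names, keyword lists, and the classify_use helper are our own illustrative shorthand, not terminology or criteria drawn from the Act itself:

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    MEDIUM_LOW = "medium/low"

# Hypothetical, non-exhaustive examples drawn from the categories above;
# a real assessment would work from the Act's full text and annexes.
PROHIBITED_USES = {
    "social credit scoring",
    "real-time biometric surveillance",
    "purposeful manipulation of users",
}
HIGH_RISK_USES = {
    "hiring decisions",
    "promotion decisions",
    "benefits eligibility",
    "law enforcement",
}

def classify_use(use_case: str) -> RiskTier:
    """Triage an intended use into the Act's risk tiers (illustrative only)."""
    if use_case in PROHIBITED_USES:
        return RiskTier.PROHIBITED
    if use_case in HIGH_RISK_USES:
        return RiskTier.HIGH
    # Everything else lands in the medium/low bucket, which carries no
    # AI-Act-specific obligations (general EU law still applies).
    return RiskTier.MEDIUM_LOW

print(classify_use("hiring decisions"))   # RiskTier.HIGH
print(classify_use("sales forecasting"))  # RiskTier.MEDIUM_LOW
```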

Capability-Based Framework: The AI Act sets out a second regulatory framework based specifically on the capabilities of foundation models, or “general-purpose AI” (GPAI) systems. This framework has two categories, with corresponding compliance obligations:

  • All GPAI systems – which include all generative AI systems and foundation models – are subject to a set of baseline requirements. Users must be informed when they are interacting with a GPAI system. Content produced by a GPAI system must be digitally marked to identify it as AI output. GPAI developers must provide transparency documentation, such as information about training data and testing, that enables downstream integrators and deployers to make smart, safe choices when adapting these systems.
  • Systemic risk models are GPAI systems determined by the European Commission to have the potential to create an unusually significant impact on the EU because of their capabilities or safety concerns. As a standard, the legislation creates a presumption of systemic risk for a model “when the cumulative amount of compute used for its training measured in floating point operations (FLOPs) is greater than 10²⁵.” (See the sketch below.)
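
For a sense of scale, the sketch below checks a hypothetical model against that 10²⁵ FLOPs presumption. The 6 × parameters × tokens estimate is a widely used rule of thumb for dense transformer training compute, not a calculation method the Act specifies:

```python
# Presumption threshold from the Act: 10^25 training FLOPs.
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate (common 6*N*D heuristic, not from the Act)."""
    return 6 * parameters * training_tokens

def presumed_systemic_risk(parameters: float, training_tokens: float) -> bool:
    return estimated_training_flops(parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical model: 70B parameters trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"~{flops:.1e} FLOPs; systemic risk presumed: {presumed_systemic_risk(70e9, 15e12)}")
# ~6.3e+24 FLOPs; systemic risk presumed: False
```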

To put this dual regulatory framework into practice, the AI Act directs the establishment of EU-wide governance infrastructure. It mandated the establishment of an EU AI Office, which was launched earlier this year. It also requires each member state to designate a market surveillance authority, which will monitor AI activities and approve the deployment of high-risk AI systems, and a notifying authority, which will approve the establishment of conformity assessment bodies (essentially AI system auditors) needed to evaluate high-risk models.

Finally, the Act sets out the mechanics of compliance. It requires companies deploying high-risk uses to complete third-party conformity assessments measuring compliance with the law’s obligations; third-party evaluations are not otherwise required. Providers of GPAI models who fail to meet their regulatory obligations, or companies that breach other obligations within the Act, can face fines of up to 3% of total worldwide annual turnover or 15 million EUR, whichever is higher. Separately, non-compliance with the restrictions on prohibited uses can draw fines of the higher of 7% of annual turnover or 35 million EUR.
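
The “whichever is higher” structure is simple arithmetic; the sketch below applies it to a hypothetical company with 2 billion EUR in worldwide annual turnover:

```python
def max_fine(worldwide_turnover_eur: float, pct: float, floor_eur: float) -> float:
    """Maximum fine: the higher of a percentage of turnover or a fixed floor."""
    return max(pct * worldwide_turnover_eur, floor_eur)

turnover = 2_000_000_000  # hypothetical 2B EUR worldwide annual turnover

# GPAI and other obligations: up to 3% of turnover or 15M EUR
print(max_fine(turnover, 0.03, 15_000_000))   # 60000000.0 (3% exceeds the floor)

# Prohibited-use violations: up to 7% of turnover or 35M EUR
print(max_fine(turnover, 0.07, 35_000_000))   # 140000000.0 (7% exceeds the floor)
```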

The Act will go into effect in several stages, offering AI businesses time to build a compliance program (the short sketch after this list turns these offsets into calendar dates):

  • Regulations on prohibited uses take effect 6 months after enactment.
  • Rules for GPAI models posing systemic risk take effect 9 months after enactment.
  • Rules for GPAI models that do not pose systemic risk take effect 12 months after enactment.
  • Generally, all other regulations take effect 24 months after enactment.
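
To see how those offsets translate into calendar dates, here is a small Python sketch; the enactment date used is purely hypothetical:

```python
from datetime import date

def add_months(d: date, months: int) -> date:
    """Shift a date forward by whole months (day assumed valid in target month)."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

ENACTMENT = date(2024, 8, 1)  # hypothetical enactment date, for illustration only

MILESTONES_MONTHS = {
    "Prohibited-use rules": 6,
    "Systemic-risk GPAI rules": 9,
    "Other GPAI rules": 12,
    "Remaining regulations": 24,
}

for rule, offset in MILESTONES_MONTHS.items():
    print(f"{rule}: applicable from {add_months(ENACTMENT, offset)}")
```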

So, what does this mean for Alteryx and our customers?

The timeline established in the AI Act gives Alteryx and our customers ample time to evaluate how these regulations apply to their uses of Alteryx products. That said, Alteryx has already begun implementing a range of measures to ensure that we, and our customers, can be confident in our compliance with this landmark legislation.

For us, it begins with our Responsible AI Principles. Our principles guide all of our activities, from the acquisition of AI systems for internal use to the integration of foundation models to make our AI Platform for Enterprise Analytics more effective, efficient, and user-friendly. We are implementing our principles through a combination of policies, processes, and technical controls to ensure that our AI work stands on a strong ethical foundation. Importantly, our principles are closely aligned with the AI Act and the EU’s Ethics Guidelines for Trustworthy AI.

More specifically, we have designed robust internal AI governance procedures to ensure that we are identifying and mitigating risks, complying with relevant laws, and meeting customer requirements. For example, every AI deployment – whether internal or through our analytics platform – must pass through an AI Risk Assessment, which is explicitly designed in alignment with the EU AI Act. Tools like this help us meet compliance obligations – and, more importantly, ensure that we are acquiring, developing, integrating, and deploying AI safely and responsibly.

Alteryx’s platform does not incorporate AI models that are categorized as “systemic risk” models, nor have we designed our platform for prohibited or high-risk uses under the EU AI Act. As such, we expect our compliance obligations to be minimal, and we will be in a position to fully satisfy those obligations as they enter into effect over the course of the next two years. Moreover, where customers wish to use Alteryx’s AI Platform for Enterprise Analytics for high-risk uses, we will be ready to provide them with relevant information to help them comply with any regulatory obligations they may accrue.

In short, because Alteryx is committed to responsible AI, our customers can operate with confidence in our alignment with the EU AI Act and our determination to work with our customers to harness the incredible potential of AI safely and responsibly.