In an exclusive video interview with Brando Benifei, a second-term Member of the European Parliament and rapporteur for the European AI Act, we delve into the intricacies of the pioneering legislation aimed at regulating Artificial Intelligence (AI), and its implications for recruitment across the EU.
Europe’s first AI Act
The anticipation had been building for some time, but Europe finally has its own AI law: the Artificial Intelligence Act. Firstly, a brief introduction: what exactly is the purpose of the law? “This regulation aims to protect fundamental rights, democracy, the rule of law, and environmental sustainability from high-risk AI,” says the EU, “while promoting innovation and giving companies the space to grow and expand. The rules impose obligations on AI based on potential risks and impact levels.”
These new rules apply uniformly across all EU member states and use a single definition of AI. The approach is straightforward: the more potential damage an AI system can cause, the stricter the rules. There are four categories: minimal risk, high risk, unacceptable risk, and specific transparency risk.
AI tools for recruitment under ‘high risk’
Recruitment tools that utilise AI fall into the high-risk category. The regulation specifically highlights recruitment systems as risky because they process personal data. “Some systems determine who gains access to schools or who gets hired. Others are used in law enforcement, border control, judicial proceedings, and democratic processes. Biometric systems for identification, categorization, and emotion recognition are also considered risky.”
Forefront of AI legislation
Brando Benifei has been at the forefront of the legislative process, negotiating with EU governments on what he describes as ‘the first ever horizontal legislation in the world on AI’. The initiative began nearly a decade ago, rooted in political resolutions and hearings from a special committee on Artificial Intelligence. After about two years of dedicated effort, the legislation is now on the brink of becoming enforceable law.
The core aim of the European AI Act, according to Benifei, one of the European Parliament’s co-rapporteurs on the first-ever AI Act, is to mitigate risks by establishing robust risk-reduction protocols. This, he believes, will bolster trust and expand opportunities, particularly in sensitive areas like recruitment. The legislation is not designed to curb the use of AI but to ensure its application under stringent guidelines where it most affects safety and fundamental rights.
More or less prejudice than humans?
“One simple yet profound example,” Benifei points out, “is the use of AI in CV-selection programs that have ended up discriminating against women. These systems, trained on biased data, can sometimes exhibit more prejudice than humans.”
The new regulation seeks to correct such biases by mandating conformity assessments for AI systems, which include checks on the quality and appropriateness of the data used for training. This rigorous evaluation process is aimed at ensuring that recruitment and other high-risk applications of AI are as unbiased as possible.
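By way of illustration only (this sketch is not taken from the Act or from any published standard, and the dataset, groups and threshold are entirely hypothetical), one of the simplest checks on training data is to compare historical selection rates between groups before a model learns from them:

```python
from collections import defaultdict

# Hypothetical historical hiring records: (group, was_hired).
records = [
    ("women", True), ("women", False), ("women", False), ("women", False),
    ("men", True), ("men", True), ("men", False), ("men", False),
]

hired = defaultdict(int)
total = defaultdict(int)
for group, was_hired in records:
    total[group] += 1
    hired[group] += int(was_hired)

# Selection rate per group in the data the model would be trained on.
rates = {group: hired[group] / total[group] for group in total}
print("Selection rates:", rates)

# Flag a large gap as a data-quality concern before any training happens.
gap = max(rates.values()) - min(rates.values())
THRESHOLD = 0.2  # illustrative figure, not a legal or standardised threshold
if gap > THRESHOLD:
    print(f"Warning: selection-rate gap of {gap:.2f} exceeds {THRESHOLD:.2f}; "
          "the historical data may encode the kind of bias the Act targets.")
```

A check like this only looks at the raw data; the standards the Act is waiting for are expected to define what must be measured and documented in far more detail.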
‘Recruitment can be much less biased with AI’
Benifei remains optimistic about the potential of AI in recruitment. “I’m convinced that it can be much less biased. Clearly, we cannot completely eliminate these risks. But the way we have designed our regulation, it will entail checks that need to be performed according to a standard that is now being developed. That is why it will take around two years before this is fully applied: we need standards to be developed, for example, on the quality and appropriateness of the data, which is crucial for how the training is done.”
Imperfect but partial application of the law
Discussing the implementation, Benifei notes that even though the law was officially passed in April 2024, it will take an additional two years to fully apply, awaiting the development of specific standards for data quality and training. In the interim, a voluntary compliance program, overseen by the European Commission and dubbed the ‘AI Pact’, will guide businesses and institutions towards compliance.
“I want to be clear that we will start with a clearly imperfect but partial application of the law. And we will have a voluntary, anticipated compliance procedure for businesses and institutions before the standards are developed. In this case, it is helpful to check the appropriateness and quality of the data used to train the systems, including systems used to perform recruitment duties.”