Rights group warns of potential abuse as EU reaches deal on AI Act

By David Coffey - RFI
FEB 6, 2024

The European Union's 27 member states have reached a deal on the bloc's Artificial Intelligence (AI) Act. However, rights group Amnesty International has raised concerns over the export of the technology to some countries.

Amnesty International has been severely critical of what it calls double standards within the EU, accusing lawmakers of allowing the export of technologies to countries they say are openly violating human rights.

Digital surveillance

According to Mher Hakobyan, advocacy advisor on AI at Amnesty International, "Digital surveillance systems produced by French, Swedish and Dutch companies have been used in China's mass surveillance programmes against Uighurs and other Muslim-majority ethnic groups on its territory.

"Similarly, cameras manufactured by a Dutch company have been used by the police in occupied East Jerusalem to maintain Israel's system of apartheid against Palestinians," he says.

Despite Brussels' move to regulate the development of AI and the impact it will have on society, Amnesty believes EU lawmakers must align their actions with a commitment to fundamental rights, pointing out instances of European technology contributing to human rights abuses globally.

"This is not only the shortcoming of the European Parliament, but of EU legislators in general," he adds.

"In the European Parliament's position that was published in June last year, there was a provision that suggested putting a stop to the export of any technologies that will be prohibited in the EU. Unfortunately, it didn't get through to the final text that was agreed."

Amnesty International's criticism is not only directed at the European Parliament for letting go of this stance, but also at member states that actively pushed to have it removed.

In response, the EU says that the AI Act aims to regulate artificial intelligence while allowing European tech companies to develop homegrown talent in the sector.

French position

Following months of opposition to the bill, France finally gave its backing to the regulation on Friday.

EU policymakers announced they had found a final compromise on the AI Act's content in December.

At the time it was hailed as a pioneering step amid the spread of AI tools such as OpenAI's ChatGPT and Google's Bard.

However, the agreement was not welcomed everywhere. Over the past few weeks, Germany and France had indicated that they might oppose the text in any vote on the issue. 

France had been pushing for self-regulation rather than legislation. This, France argued, would allow EU companies more freedom to develop native AI technology inside Europe.

French AI start-up Mistral, founded by former Meta and Google AI researchers, and Germany's Aleph Alpha have been actively lobbying their respective governments about the technology.

The deal that was finalised over the weekend is a major stepping stone towards regulation.

The next step is a vote by a key committee of EU lawmakers on 13 February, followed by ratification by the European Parliament in March or April if the final text is approved.

To date, however, the European Parliament has rejected banning the export of AI systems, despite concerns over potential human rights violations.

'Oppenheimer conundrum'

The technologies under scrutiny include facial and emotional recognition software, predictive policing and social scoring, which has taken hold in some Chinese cities.

The question is whether it is better to enable European countries to develop AI platforms and export them to authoritarian regimes, or whether authoritarian regimes should depend on home-built AI technologies with all the problems that poses.

The conundrum is not dissimilar to J. Robert Oppenheimer's race to build an atomic bomb before the Nazis could produce nuclear weapons, Hakobyan says.

"It would be, if the technologies produced in Europe...follow[ed] the same standards when a company wants to put them on the European market...

"[However] the same company producing the same system for the purpose of export [currently] does not need to go through any of these safeguards or transparency measures to sell it abroad," Hakobyan says.

"It would be beneficial if at least the legal and technical standards were followed, but they don't have to do that [at the moment]."

Hakobyan points out that in Europe it is possible to develop problematic or flawed AI models, but there are procedural and legal standards around AI development that safeguard against human rights infringement or discrimination.

"But that might not exist in the country that you're exporting the technology to," he concludes.

"Essentially, you're just benefiting from potential human rights abuse ... and that is something that Europe can't afford, if it wants to be a credible voice".