The text is intended to limit the risks associated with the use of ChatGPT-type systems.
MEPs are calling for, among other things, a ban on facial recognition systems in public places.
MEPs on Wednesday approved a European draft regulation on artificial intelligence (AI), paving the way for negotiations with the Member States to finalize a text intended to limit the risks posed by ChatGPT-type systems. “These are historic moments, because it is a world first,” said European Commissioner Margrethe Vestager, who championed the text alongside her colleague Thierry Breton. The legislation will not come into force before 2026, however. The European Parliament has called for additional bans, such as on automatic facial recognition systems in public places, whereas the Commission wants to allow law enforcement agencies to use the technology in the fight against crime and terrorism. The issue is likely to fuel debate with the Member States, which oppose banning this controversial technology.
The European Union hopes to conclude, before the end of the year, the world's first regulation aimed at framing artificial intelligence while protecting innovation, a strategic sector in economic competition. Brussels put forward an ambitious proposal two years ago, but its examination has been further delayed in recent months by controversy over the dangers of generative AI, which can create text or images. The European Parliament adopted its position in a plenary vote in Strasbourg on Wednesday, and negotiations with the Member States to reach a final agreement are set to begin immediately afterwards. Thierry Breton called for the process to be concluded in “the coming months”. “AI raises many questions – socially, ethically and economically. (…) This is about acting quickly and taking responsibility,” he said on Wednesday.
“Act fast”
Arguing that there is urgency, since the measures will not take effect before 2026, Thierry Breton and Margrethe Vestager announced their intention to obtain voluntary commitments from companies as quickly as possible. Highly complex technically, artificial intelligence systems are as fascinating as they are worrying. While they can save lives by enabling major advances in medical diagnosis, they are also exploited by authoritarian regimes to carry out mass surveillance of citizens. The general public discovered their immense potential at the end of last year with the release of ChatGPT, the content generator from the Californian company OpenAI, which can produce essays, poems or translations in seconds.
One example of what is now possible: an unreleased Beatles song, recorded using AI to recreate the voice of John Lennon, will be released this year. But the spread on social networks of strikingly lifelike fake images, created with applications such as Midjourney, has raised alarm about the risks of opinion manipulation and the dangers for democracy. Scientists have called for a moratorium on the development of the most powerful systems until they are better regulated by law.
“Excessive rules”
Parliament’s position broadly confirms the Commission’s approach. The text draws on existing product-safety regulations and will impose controls that rest primarily on the companies themselves. The heart of the project is a list of rules imposed only on applications deemed “high risk”: systems used in sensitive areas such as critical infrastructure, education, human resources, law enforcement or migration management. The obligations include ensuring human oversight of the machine, drawing up technical documentation and putting a risk management system in place. Compliance will be monitored by supervisory authorities in each Member State.
The European Parliament wants generative AI systems such as ChatGPT to be covered more fully, calling for a specific set of obligations that largely mirror those planned for high-risk systems. The Commission’s proposal already provides a framework for AI systems that interact with humans: it would oblige them to inform users that they are dealing with a machine, and require image-generating applications to state that their output was created artificially, an obligation that will probably be extended to text. Outright bans will be rare, reserved for applications contrary to European values, such as the citizen-scoring and mass-surveillance systems used in China.
The CCIA, the lobby representing the information and communications technology industries, warned that some of the changes made by MEPs risked “slowing down innovation by weighing down AI developers in Europe with excessive rules”. OpenAI has already warned that it could be forced to pull out of the EU depending on the final shape of the legislation.