The EU wants to restrict the use of AI and facial recognition systems for the sake of privacy

EU officials plan to restrict the use of facial recognition by the police and to ban certain types of artificial intelligence (AI) systems outright. This would be one of the most significant attempts to date to regulate the use of AI.

The proposal includes a whole list of AI use cases that officials consider "high-risk." Among them are the management of critical infrastructure in EU countries, the admission of applicants to colleges, and the processing of loan applications. AI applications on this list will be closely monitored, and specific standards are to be developed for their implementation and use.

The European Union is known for actively defending the rights of its citizens, including their privacy, and for limiting the outsized power of large technology companies in an effort to avoid the potentially dangerous consequences of new technologies. One example is the General Data Protection Regulation (GDPR), under which EU member states strengthen and unify the protection of personal data for all citizens and restrict the export of data outside the EU.

A couple of years ago, the EU also drafted a bill "on protecting humans from AI," emphasizing that society can benefit from technologies only if it fully trusts them, and that AI systems should be "human-centered, aimed entirely at serving humanity and the common good, and contributing to the improvement of the conditions of human existence and human freedom."

According to Margrethe Vestager, executive vice president of the European Commission, the proposed regulation addresses the threats that certain AI use cases pose to individuals and to the community as a whole. She says the EU is the first in the world to propose such a legal framework.

The plan would prohibit the police from using "remote biometric identification systems" in real time, with exceptions for cases such as searching for kidnapped children, preventing terrorist attacks, and locating certain categories of criminals.

Companies whose services involve human interaction with AI would also be obliged to clearly indicate when a person is dealing with a bot, for example in a chat. Likewise, creators of deepfakes and other similar media would have to make this fact unambiguous.

According to Julien Cornebise, such a bill will not necessarily have the same impact as the GDPR, not least because it defines AI too broadly. He describes AI as a moving target: our phones today do things that 20 years ago would definitely have been classified as "AI." There is therefore a risk that the new rules will either get tangled in definitions or quickly become outdated.