AI-powered remote surveillance technologies, such as facial recognition, have major implications for fundamental rights and freedoms like privacy. Consequently, remote biometric identification of people in public spaces, for example via video surveillance with automated facial recognition and other biometric procedures, should not be permitted for EU law enforcement.
The European Parliament voted to ban police use of facial recognition technology in public places, along with predictive policing and "social scoring", a controversial practice in which AI awards or deducts points for citizens, and the profiling of potential criminals even before a crime is committed.
To respect "privacy and human dignity", Members of the European Parliament said EU lawmakers should permanently ban automated recognition of people in public spaces, arguing that citizens should be monitored only when they are suspected of a crime.
Parliament also called for a ban on the use of private facial recognition databases, such as the controversial AI system created by US startup Clearview, which is already in use by some police forces in Europe, and said that predictive policing based on behavioral data should likewise be prohibited.
Politicians have raised concerns about algorithmic bias in AI and argued that human oversight and legal safeguards are needed to avoid discrimination. They noted evidence that AI-based identification systems misidentify ethnic minority groups, LGBTI+ people, the elderly, and women at higher rates. As a result, according to MEPs, "algorithms should be transparent, traceable and sufficiently documented", with open-source options used wherever possible, techcrunch.com reports.