Facial scanners can be tricked


The precision and flexibility of facial recognition technology have allowed it to secure everything from smartphones to Australian airports, but a team of security researchers is warning of potential manipulation after finding a way to trick the systems using deepfake images.

Researchers from the McAfee Advanced Threat Research (ATR) team explored ways to use ‘model hacking’ – also known as adversarial machine learning – to trick artificial intelligence (AI) computer vision algorithms into misidentifying the content of the images they see.

This approach has already been used to show how self-driving car safety systems – which can read speed limit signs and adjust the car’s speed accordingly – can be tricked into misreading traffic signs that have been modified with stickers.

Subtle changes to the signs would be detected by computer vision algorithms but could be imperceptible to the human eye – an approach the McAfee team has now successfully turned to the challenge of identifying people from photos, as in passport control.

Starting with photos of two people – referred to as A and B – the ATR researchers used what they described as a “deep learning-based morphing approach” to generate a large number of composite images combining the characteristics of both.
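
The researchers’ actual morphing pipeline is not published in this article, but as a rough intuition for what a facial ‘morph’ is, a naive pixel-space blend of two aligned portraits can be produced in a few lines. This is a simplistic stand-in for illustration only, not the deep learning approach McAfee used; the filenames are placeholders:

```python
# Naive illustration only: a pixel-space blend of two aligned portraits.
# McAfee's actual morphs were generated by a deep learning model, not this.
from PIL import Image

face_a = Image.open("person_a.jpg").convert("RGB").resize((512, 512))
face_b = Image.open("person_b.jpg").convert("RGB").resize((512, 512))

# alpha=0.5 weights both faces equally; sweeping alpha from 0 to 1
# produces a crude morph sequence between the two people
composite = Image.blend(face_a, face_b, alpha=0.5)
composite.save("composite.jpg")
```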

The images were fed into a generative adversarial network (GAN) – a pair of AI-based tools comprising a “generator” that creates new faces and a “discriminator” that assesses how realistic they are – to iteratively adjust the tiny facial landmarks that facial recognition algorithms use to make a match.
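
In outline, a GAN trains those two components against each other: the generator learns to produce images the discriminator can no longer tell apart from real ones. The following is a minimal, generic training step with toy tensors standing in for face photos – a sketch of the generator/discriminator tug-of-war, not McAfee’s CycleGAN:

```python
# Minimal generic GAN training step on toy 64x64 greyscale "faces".
# Illustrates the generator/discriminator dynamic only -- McAfee's
# actual system was a CycleGAN operating on real portrait photos.
import torch
import torch.nn as nn

latent_dim, img_dim = 100, 64 * 64

generator = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),      # fake image pixels
)
discriminator = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),         # probability the input is real
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

real_images = torch.rand(32, img_dim)        # stand-in for a batch of photos
ones, zeros = torch.ones(32, 1), torch.zeros(32, 1)

# Discriminator step: learn to score real images as 1 and fakes as 0
fake_images = generator(torch.randn(32, latent_dim))
d_loss = loss_fn(discriminator(real_images), ones) + \
         loss_fn(discriminator(fake_images.detach()), zeros)
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: adjust the fakes so the discriminator scores them as real
g_loss = loss_fn(discriminator(generator(torch.randn(32, latent_dim))), ones)
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```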

Face scanners can be tricked. Source: McAfee

These landmarks define the structure and shape of the face, as well as the relative position of facial features like the corner of the eye and the tip of the chin, to create digital models of a person’s face.
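
Landmark extraction of this kind is available in open-source tooling. The article does not say which models the targeted systems use, but as an example, the open-source face_recognition Python library returns named landmark groups for each detected face:

```python
# Extract named facial landmarks from a portrait photo using the
# open-source face_recognition library (pip install face_recognition).
# Chosen purely for illustration; commercial scanners use their own models.
import face_recognition

image = face_recognition.load_image_file("person_a.jpg")
all_faces = face_recognition.face_landmarks(image)

for landmarks in all_faces:
    # Each face is a dict mapping a feature name to a list of (x, y) points,
    # e.g. 'chin', 'left_eye', 'right_eye', 'nose_tip', 'top_lip'
    for feature, points in landmarks.items():
        print(f"{feature}: {len(points)} points, first at {points[0]}")
```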

While the first images looked like a messy blur between the photos of persons A and B, the team discovered that after a few hundred iterations, the CycleGAN system they were using would produce composite photos that a human observer would identify as person B – but that a facial recognition system would identify as person A.
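
In white-box terms, the optimisation can be thought of as nudging pixels toward person A in the recogniser’s embedding space while keeping the image looking like person B. A bare-bones gradient sketch of that idea follows; the embedding network `embed_net`, the image tensors, and the loss weighting are all placeholders, not the system or method McAfee attacked:

```python
# Conceptual white-box sketch: push an image of person B toward person A
# in a face recogniser's embedding space while limiting visible change.
# `embed_net` is a toy placeholder model, NOT an actual face recogniser.
import torch
import torch.nn.functional as F

embed_net = torch.nn.Sequential(            # stand-in face embedder
    torch.nn.Flatten(),
    torch.nn.Linear(3 * 64 * 64, 128),
)

img_b = torch.rand(1, 3, 64, 64)            # photo of person B (toy tensor)
target_a = torch.randn(1, 128)              # person A's enrolled embedding

adv = img_b.clone().requires_grad_(True)
opt = torch.optim.Adam([adv], lr=0.01)

for _ in range(200):
    opt.zero_grad()
    match = F.cosine_similarity(embed_net(adv), target_a).mean()
    visible_change = (adv - img_b).abs().mean()
    # Maximise similarity to A, penalise changes a human would notice
    loss = -match + 10.0 * visible_change
    loss.backward()
    opt.step()
```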

Reporting a vulnerability

The approach has not been field-tested, but laboratory tests suggest that in an era of increasingly automated facial recognition systems – such as the SmartGates currently deployed at Australian airports by the Department of Home Affairs – this type of manipulation could potentially allow someone on a no-fly list to forge a photo ID that would let them pass through an automated barrier undetected.

“As we begin to embrace these types of technologies, we need to consider the misuse of these systems,” Raj Samani, McAfee fellow and chief scientist with the McAfee ATR team, told Information Age ahead of the research’s presentation at this month’s Black Hat US event.

“When you start to think about the practical uses of technology – like preventing people who shouldn’t be on planes from getting on planes and known ‘bad guys’ from getting inside – applying that becomes very important.”

The technique has so far only been tested in ‘white box’ and ‘grey box’ scenarios – where operating parameters are tightly controlled by the researchers and the functioning of the algorithms can be closely monitored – but experiments in real-world ‘black box’ scenarios would show just how threatening the technique is.

Biometric systems have been a constant target for security researchers and hackers, with 3D printers and even duct tape used to trick certain fingerprint scanners, and with the accuracy of biometric systems under constant evaluation.

Late last year, the U.S. National Institute of Standards and Technology (NIST) released fingerprint, facial image, and optical character recognition (OCR) datasets to help designers of biometric security systems assess their accuracy.

Earlier hacks of biometric systems led their designers to add ‘liveness detection’ features to confirm that a real person is being scanned, while newer smartphone-based facial recognition systems use depth-sensing cameras to measure faces in 3D.

Nonetheless, Samani said, the McAfee team’s successful hack of the model highlights the kind of vulnerabilities that need to be continually assessed as new systems increasingly bypass human protections and hand critical security decisions to AI systems.

“It’s not something we’ve actively seen being exploited,” he said, “but by being open and transparent about vulnerabilities, we can explore those limitations.”

“As biometric systems are increasingly deployed, it is important to consider these attack scenarios.”

