In 2018, a law on biometric identification came into force in Russia. Banks are deploying biometric equipment and collecting data for the Unified Biometric System (EBS). Biometric identification gives citizens the opportunity to receive banking services remotely: it spares them queues and technically lets them “visit the bank” at any time of day.
The convenience of remote identification by photo or voice has been appreciated not only by bank customers but also by cybercriminals. Despite developers’ efforts to make the technology safe, researchers keep reporting new ways to fool such systems.
So should you decline the friendly teller’s offer to enroll in biometric identification at a bank branch, or should you take advantage of the new technology? Let’s figure it out in this post.
What is the problem?
Biometric identification has features that distinguish it from the familiar username/password pair or “secure” 2FA:
- Biometric data is public. Photographs, video, and audio recordings of almost anyone on Earth can be found and used for identification.
- A face, voice, fingerprints, or retina cannot be replaced as easily as a password, phone number, or 2FA token.
- Biometric identification confirms a person’s identity with a probability close to, but never equal to, 100%. In other words, the system accepts that a person may differ to some extent from the biometric model stored in the database.
Since biometric data now opens not only airport turnstiles but also bank vaults, hackers and cybercriminals around the world are working hard on ways to trick biometric identification systems. Every year the program of the BlackHat security conference invariably includes talks on biometric vulnerabilities, yet there are practically no talks on developing protection methods.
The main problems associated with biometric identification are fraud, leaks and theft, poor quality of the collected data, and the repeated collection of one person’s data by different organizations.
Publications about ways of fooling biometric identification systems appear regularly in the media: the fingerprint of German Minister of Defense Ursula von der Leyen reconstructed from her public photographs, Face ID on the iPhone X tricked with a mask, the sensational theft of $243,000 using a faked CEO voice, deepfake videos of celebrities touting fraudulent giveaways, and the Chinese app ZAO, which lets you swap the face of a character in a video for any other.
To prevent biometric systems from mistaking photos and masks for people, liveness detection is used – a set of checks that determine whether a live person is in front of the camera rather than a photograph or mask of one. But this technology can also be fooled.
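In its simplest form, liveness detection can be framed as a challenge-response protocol: the system asks for a random action and checks that the action is actually performed. The sketch below is a deliberately simplified illustration; the `observe_action` callback is a hypothetical stand-in for a real video-analysis pipeline:

```python
import random

# Hypothetical challenge-response liveness check: the system issues a
# random challenge and accepts the session only if that action is observed.
CHALLENGES = ["blink", "turn_head_left", "turn_head_right", "smile"]

def liveness_check(observe_action, rng=random.Random()):
    """Issue a random challenge and verify the observed response.

    `observe_action(challenge)` stands in for a real video-analysis
    pipeline and returns the action actually detected on camera.
    """
    challenge = rng.choice(CHALLENGES)
    observed = observe_action(challenge)
    return observed == challenge

# A live user performs whatever is asked; a static photo "performs" nothing.
live_user = lambda challenge: challenge
photo_replay = lambda challenge: None

print(liveness_check(live_user))     # True
print(liveness_check(photo_replay))  # False
```

The attacks described below work precisely because real detectors answer the question “was the action performed?” from the same video stream an attacker may control or fake.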
Embedding a fake video stream in a biometric system. Source
The Biometric Authentication Under Threat: Liveness Detection Hacking talk presented at BlackHat 2019 describes successful bypasses of liveness detection in Face ID using glasses placed on a sleeping person, injection of fake audio and video streams, and other methods.
X-glasses – glasses for cheating liveness detection in Face ID. Source
For user convenience, Face ID still works when a person is wearing sunglasses. Since less light then reaches the eyes, the system cannot build a high-quality 3D model of the eye area. For this reason, once it detects glasses, Face ID does not try to extract 3D information about the eyes and instead represents them as an abstract model – a black area with a white dot in the center.
Data collection quality and false recognition
Identification accuracy depends heavily on the quality of the biometric data stored in the system. To ensure quality sufficient for reliable recognition, equipment is needed that performs well in noisy, poorly lit bank branches.
Cheap microphones can record a voice sample under adverse conditions, and budget cameras can take a photo good enough to build a biometric model. But in such a scenario the number of false matches rises significantly – the likelihood that the system will mistake one person for another with a similar voice or appearance. Poor-quality biometric data thus creates more opportunities to fool the system, which attackers can exploit.
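This trade-off can be made concrete: a matcher compares a similarity score against a threshold, and lowering the threshold to tolerate noisy captures inevitably raises the false acceptance rate. A minimal sketch with made-up illustrative scores:

```python
# Sketch of how a matching threshold trades off false accepts against
# false rejects. The score lists below are made-up illustrative values.
def far_frr(genuine_scores, impostor_scores, threshold):
    """False Acceptance Rate and False Rejection Rate at a given threshold."""
    far = sum(s >= threshold for s in impostor_scores) / len(impostor_scores)
    frr = sum(s < threshold for s in genuine_scores) / len(genuine_scores)
    return far, frr

genuine  = [0.91, 0.88, 0.74, 0.95, 0.69]   # same-person comparisons
impostor = [0.32, 0.45, 0.62, 0.28, 0.71]   # different-person comparisons

# Noisy data lowers genuine scores, pushing operators toward a lower
# threshold -- which lets more impostors through.
print(far_frr(genuine, impostor, 0.80))  # strict: FAR 0.0, FRR 0.4
print(far_frr(genuine, impostor, 0.60))  # lenient: FAR 0.4, FRR 0.0
```

With clean, high-quality enrollment data, genuine scores cluster high and the strict threshold is affordable; with cheap equipment, the lenient threshold becomes the only way to keep legitimate customers from being rejected.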
Multiple biometrics collection
Some banks began introducing their own biometric systems before the EBS went live. Having submitted biometrics once, a person assumes they can use the new technology at other banks as well, and when that turns out not to be the case, they have to submit their data again.
The situation with the presence of several parallel biometric systems creates a risk that:
- A person who has already submitted biometrics twice will most likely not be surprised by an offer to repeat the procedure, and may later fall victim to scammers who collect biometrics for criminal purposes.
- Leaks and abuse will occur more often as the number of possible data access channels grows.
Leaks and theft
A leak or theft of biometric data may seem like a real disaster for its owners, but in reality things are not so bad.
In the general case, a biometric system stores not photographs and voice recordings but sets of numbers characterizing a person – a biometric model. Let’s look at this in more detail.
To build a facial model, the system finds anthropometric reference points that capture the face’s individual characteristics. The algorithm for computing these points varies from system to system and is a trade secret of the developers. The minimum number of reference points is 68, but some systems use 200 or more.
Based on the reference points found, a descriptor is computed – a unique set of facial characteristics independent of hairstyle, age, and makeup. The resulting descriptor (an array of numbers) is the biometric model stored in the database. The original photo cannot be reconstructed from the model.
To identify a user, the system builds their biometric model and compares it with the descriptor stored in the database.
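The comparison step can be sketched as follows, assuming the model is a fixed-length vector of numbers and matching uses Euclidean distance. Real systems use proprietary descriptors (often 128+ dimensions) and carefully tuned thresholds; all values here are illustrative:

```python
import math

# Minimal sketch of descriptor-based verification: a probe descriptor is
# accepted if it lies close enough to the enrolled one.
def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def verify(probe_descriptor, stored_descriptor, threshold=0.6):
    """Accept if the probe descriptor is close enough to the stored one.

    The 4-element vectors and the threshold are illustrative; production
    descriptors are much longer and thresholds are tuned per system.
    """
    return euclidean(probe_descriptor, stored_descriptor) <= threshold

stored = [0.12, -0.45, 0.88, 0.05]        # enrolled model (made up)
same_person = [0.15, -0.42, 0.85, 0.07]   # slightly different capture
other_person = [0.70, 0.30, -0.20, 0.55]

print(verify(same_person, stored))    # True
print(verify(other_person, stored))   # False
```

Note that verification never compares raw images: each identification attempt produces a fresh descriptor from a new photo or recording, which is why a match is always approximate rather than exact.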
This principle of model building has important consequences:
- Using data stolen from one biometric system to deceive another is unlikely to succeed, because the reference-point algorithms differ and the resulting models differ significantly.
- Nor can the system be deceived with data stolen from it: identification requires presenting a photograph or audio recording, from which a model is then built and compared with the stored reference.
Even if the database stores not only biometric models but also the photos and audio recordings from which they were built, a head-on attempt to fool the system with them will fail: liveness-check algorithms treat results with fully identical descriptors as fake.
Liveness verification methods for facial and voice modality.
Source: Speech Technology Center
Thus, leaked biometric data will not help cybercriminals make quick money, which means they are more likely to look for simpler and more reliable methods of enrichment.
How to protect yourself?
The EU directive PSD2, in force since September 14, 2019 and also known as Open Banking, requires banks to implement multi-factor authentication to secure remote transactions carried out over any channel. It mandates the use of two of the following three components:
- Knowledge – something only the user knows, for example, a password or a security question.
- Possession – a device only the user has, for example, a phone or a token.
- Inherence – something inherent to the user that uniquely identifies the person, for example, biometric data.
These three elements must be independent, so that compromising one element does not affect the reliability of the others.
In banking practice, this means that operations based on biometric data must be accompanied by additional checks using a password, a token, or push/SMS codes.
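The “two of three independent factors” rule reduces to a simple check: an operation is allowed only when verified factors span at least two distinct categories (the category names follow PSD2 terminology; the function itself is an illustrative sketch, not a real banking API):

```python
# Sketch of the PSD2 "two of three independent factors" rule, assuming
# each verification result is reported along with its factor category.
KNOWLEDGE, POSSESSION, INHERENCE = "knowledge", "possession", "inherence"

def transaction_allowed(verified_factors):
    """Allow an operation only if factors from at least two distinct
    categories were successfully verified (e.g. biometrics + SMS code)."""
    categories = {category for category, ok in verified_factors if ok}
    return len(categories) >= 2

# Biometrics alone is not enough...
print(transaction_allowed([(INHERENCE, True)]))                      # False
# ...and a failed second factor does not help...
print(transaction_allowed([(INHERENCE, True), (POSSESSION, False)])) # False
# ...but biometrics plus a code sent to the user's phone is.
print(transaction_allowed([(INHERENCE, True), (POSSESSION, True)]))  # True
```

Two factors from the same category (say, a password and a security question, both “knowledge”) correctly fail this check, since compromising one channel would compromise both.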
Use or not?
Biometric authentication has great promise, but the dangers that come with it are very real. System developers and legislators should study the latest research on the vulnerabilities of biometric systems and promptly refine both the identification solutions and the regulations governing them.
Banks need to take deepfakes and other ways of fooling biometric systems into account by combining biometrics with traditional identification methods: passwords, 2FA, and USB tokens can still be useful.
The situation is harder for bank customers. On the one hand, biometric identification was developed for their convenience, to expand access to banking services at any time with minimal formalities. On the other hand, in the event of a successful attack it is they who risk their money, while regulators and developers of biometric systems bear no responsibility for breaches.
A sensible recommendation for bank customers, then, is not to rush to submit biometric data and to ignore aggressive sales pitches. If you cannot do without biometric identification, use it together with multi-factor authentication to at least partially reduce the risks.