Data Privacy and Biometrics

[Posted on Behalf of Steve King, Director, Cybersecurity Advisory Services at Information Security Media Group (ISMG)]

Biometric data, while an element of data security, is a uniquely sensitive attribute that should be treated in an extraordinary fashion.

Passwords and MFA data are useful to attackers, but facial, retinal, and fingerprint scans open a whole new world of threats.

DNA-based identification is also coming soon.

It is obvious why these technologies are preferred from a consumer point of view as they make identification so much more convenient and “efficient”.

But biometric data is far more easily compromised than conventional credentials, and the unsecured IoT devices doing the collecting, storing, and transmitting underscore the vulnerabilities. Because biometric data is static and can never be changed (unlike passwords), complete identity theft is now not only possible but will soon begin to soar on the threat matrix. Additionally, biometric data such as facial scans is stored in databases, which are notoriously easy to hack.
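To make the contrast concrete, here is a minimal Python sketch (the function names and thresholds are illustrative assumptions, not any vendor's implementation) of why a leaked password is recoverable while a leaked biometric template is not: passwords can be salted, hashed, and rotated, while a biometric system must keep a matchable template and compare fuzzy similarity scores against it.

```python
# Minimal sketch: replaceable password credential vs. irreplaceable
# biometric template. All names and thresholds are illustrative.
import hashlib
import os


def store_password(password: str) -> tuple[bytes, bytes]:
    """Passwords are salted, hashed, and -- crucially -- replaceable."""
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest  # if this leaks, the user simply rotates the password


def match_biometric(stored_template: list[float], probe: list[float],
                    threshold: float = 0.9) -> bool:
    """Biometric matching is fuzzy: templates cannot be exact-hashed,
    so systems compare a similarity score against a threshold. A stolen
    template cannot be 'rotated' -- the user has only one face."""
    dot = sum(a * b for a, b in zip(stored_template, probe))
    norm = (sum(a * a for a in stored_template) ** 0.5 *
            sum(b * b for b in probe) ** 0.5)
    return dot / norm >= threshold  # cosine similarity
```

The asymmetry is the whole problem: a rotate_password() routine is trivial to write; rotate_face() is not.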

Threat vectors will range from financial theft to life-threatening extortion schemes. After all, you may no longer be you, but someone with a laptop down in their parents' basement instead.

Amazon, Google, Microsoft, and other smaller technology companies are in the process of developing facial recognition technology to “read” emotions and infer how people feel based on facial analysis, raising new and complex sets of privacy concerns. Current facial recognition technologies are also much less accurate in identifying people of color, creating an expanded risk of misidentification of minorities.

While Delta Air Lines arguably provides a better check-in experience by offering optional facial recognition, the gap is wide between how current technology handles biometrics and the improvements still needed, especially in edge computing and in storage and transmission protocols.

Risk examples abound.

Masks and false faces have deceived even advanced facial recognition systems. We have seen machine learning create a fingerprint that combines the characteristics of the majority of fingerprints into one fake master print, known as the DeepMasterPrint (also a good name for a rapper). When applied, this master fingerprint has successfully logged into multiple device types (smartphones, tablets, home security systems) using only a single authentication protocol.
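The underlying weakness is easy to demonstrate. The toy simulation below is not the actual DeepMasterPrints technique (the research used a GAN to attack partial-fingerprint matchers); it is a sketch with invented numbers showing how a single "average" template can false-accept against many enrolled users once the match threshold is loosened for convenience.

```python
# Toy illustration of a master-print style attack. All numbers invented.
import random


def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(y * y for y in b) ** 0.5
    return dot / (na * nb)

random.seed(0)
DIM, USERS = 32, 1000

# Enrolled templates cluster around a population mean, as the attack
# assumes real fingerprint features do.
mean = [random.uniform(0.4, 0.6) for _ in range(DIM)]
enrolled = [[m + random.gauss(0, 0.15) for m in mean] for _ in range(USERS)]

# A "master print" built from the population average.
master = [sum(t[i] for t in enrolled) / USERS for i in range(DIM)]

for threshold in (0.99, 0.95, 0.90):
    hits = sum(cosine(master, t) >= threshold for t in enrolled)
    print(f"threshold {threshold}: master print accepted by {hits}/{USERS}")
```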

In addition, there are many open questions around IoT security generally, and around biometric data in transit specifically. They go to the demonstrated assurance that encryption keys can be protected adequately where they haven't been in other domains ... as in the Capital One breach, for example, where over-provisioned access enabled server-side decryption of data held in AWS.
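One mitigation aimed at exactly that failure mode is envelope encryption, so that storage access alone never yields plaintext. Below is a minimal sketch using the Fernet primitive from the Python cryptography package; it is an assumed setup for illustration, not a reconstruction of any real deployment.

```python
# Envelope-encryption sketch: each record gets its own data key, and the
# data key is wrapped by a master key that should never leave the
# key-management boundary. Illustrative only.
from cryptography.fernet import Fernet

master_key = Fernet.generate_key()       # held in an HSM/KMS in practice
kms = Fernet(master_key)

def encrypt_record(payload: bytes) -> tuple[bytes, bytes]:
    data_key = Fernet.generate_key()     # fresh key per record
    ciphertext = Fernet(data_key).encrypt(payload)
    wrapped_key = kms.encrypt(data_key)  # only the KMS can unwrap this
    return ciphertext, wrapped_key

def decrypt_record(ciphertext: bytes, wrapped_key: bytes) -> bytes:
    data_key = kms.decrypt(wrapped_key)  # requires KMS authorization
    return Fernet(data_key).decrypt(ciphertext)

ct, wk = encrypt_record(b"facial-template-bytes")
assert decrypt_record(ct, wk) == b"facial-template-bytes"
```

The design point: decrypting a record requires an authorized call to the key-management boundary, so over-provisioned access to the storage layer alone should not be enough.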

From the legal quarter, there are serious questions as well.

State legislators have added facial template data to the types of protected "personal information" that, if compromised, trigger breach notification obligations for impacted entities. There are also questions about whether vendors and custodians have a larger obligation to make sure the lay public understands the full implications of offering their biometric data in exchange for convenience.

Is this truly caveat emptor?

The newly enacted CCPA now includes facial template data (and other forms of biometric data) within its definition of "personal information." The CCPA also requires covered entities to provide notice to consumers as to how facial template data is used, and it provides a private right of action if facial template data is involved in certain data breach events.

Under the Illinois’ Biometric Information Privacy Act (“BIPA”), which is considered the most stringent of all the state laws, a private entity cannot collect or store biometric data without first providing notice, obtain written consent, and provide certain disclosures. BIPA also contains a private right of action provision that permits the recovery of statutory damages ranging between $1,000 and $5,000 by any “aggrieved” person under the law, which has generated a tremendous amount of class litigation from consumers alleging mere technical violations.

The recently released NIST Privacy Framework, an attempt to provide a simple context in which companies can identify and manage privacy risk, both to protect customer data and to come into compliance with state data privacy regulations, is a solid first step toward the building blocks that can help organizations achieve their privacy goals.
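For orientation, the Framework's Core is organized around five functions: Identify-P, Govern-P, Control-P, Communicate-P, and Protect-P. The sketch below shows one way a biometrics program might track coverage against them; the function names come from the Framework, but the example activities are illustrative assumptions, not official subcategories.

```python
# Hedged sketch: tracking a biometrics program against the five Core
# functions of the NIST Privacy Framework v1.0. Activities are invented
# examples, not official Framework subcategories.
privacy_core = {
    "Identify-P":    ["inventory facial-template data stores",
                      "map biometric data flows from IoT capture devices"],
    "Govern-P":      ["adopt a BIPA-style notice-and-consent policy"],
    "Control-P":     ["honor deletion requests for biometric templates"],
    "Communicate-P": ["disclose how facial template data is used"],
    "Protect-P":     ["encrypt templates at rest and in transit"],
}

for function, activities in privacy_core.items():
    done = 0  # wire this to real assessment evidence
    print(f"{function}: {done}/{len(activities)} activities evidenced")
```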

At the same time, the Framework naturally institutionalizes a de facto industry standard for data privacy protection that will likely give rise to lots of downstream litigation chasing future compromise incidents and breaches.

The question that will keep lawyers' billing mechanisms in full gear is: does the presence of the new NIST Framework establish a minimum threshold standard of care against which future lawsuits and negligence claims will be measured? In fact, a recent surge in biometric privacy lawsuits has caused law firms to create specialty groups and hire attorneys solely to address biometric privacy litigation.

After all, New York's SHIELD Act, the CCPA, and the GDPR all specify stringent data privacy requirements with significant penalty structures for violations, including failure to notify and failure to take reasonable precautions to protect customers' privacy.

An actual data breach is not in fact necessary.

Can the NIST Framework be used as the standard against which reasonable precautions must be measured? And will the Framework grandfather the earlier definition of PII specified in NIST SP 800-122, or call out biometric data as a separate category of PII to be treated differently than fungible data like user IDs and passwords?

And if so, what are the implications for all covered entities?
