We live in the era of targeted marketing, where our online and offline behavior and personality traits determine which products are advertised to us. Now that computer algorithms are intelligent enough to analyze a person’s personality and behavior, it is no surprise that these traits are also being used to uniquely identify people.
The analysis and matching of a person’s behavioral traits form the basis of behavioral biometrics. This differs from physiological biometrics, where physical traits such as fingerprints or facial and iris scans are matched to identify a person.
Behavioral biometrics analyzes traits and micro-habits such as voice, typing keystrokes, navigation patterns, and engagement patterns. A simple example of this type of authentication: if a person were not typing as fast as they usually do, the system would fail to positively authenticate them.
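To make the keystroke example concrete, here is a minimal sketch of typing-rhythm matching. The stored profile values, the inter-key-interval feature, and the threshold are illustrative assumptions, not any vendor’s actual algorithm.

```python
import statistics

# Hypothetical enrolled profile: mean and standard deviation of the
# user's inter-key intervals, in milliseconds (assumed values).
PROFILE = {"mean_ms": 185.0, "stdev_ms": 40.0}

def matches_typing_rhythm(intervals_ms, profile, z_threshold=2.0):
    """Return True if the session's average typing cadence falls within
    z_threshold standard deviations of the enrolled mean."""
    observed_mean = statistics.mean(intervals_ms)
    z = abs(observed_mean - profile["mean_ms"]) / profile["stdev_ms"]
    return z <= z_threshold

# Intervals from the current session: unusually slow typing fails the check.
session = [310, 295, 340, 320, 305]
print(matches_typing_rhythm(session, PROFILE))  # False: cadence is too slow
```

A real system would combine many such features (dwell time, flight time, error rate) rather than a single average, but the pass/fail logic follows the same pattern.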
Behavioral biometric authentication methods have risen in popularity because they provide a mechanism to passively authenticate people without their knowledge. With increasing regulatory requirements for multi-factor authentication (MFA), users may find the authentication process far more tedious than simple password entry.
This is where behavioral authentication makes things easier for users. The user does not actively respond to an authentication prompt; instead, authentication takes place in the background without the user’s knowledge.
Another factor that works in favor of this type of authentication is that the data points it relies on are dynamic. Other authentication types such as passwords, PINs, or fingerprints store static data or static templates at the point of enrolment, and anyone who manages to steal that data can reuse it. With dynamic data points, behavioral profiles are adjusted continuously, rendering any stolen data quickly useless.
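As a sketch of how a continuously adjusted profile might work, the snippet below blends each new session into the stored profile with an exponential moving average. The feature names and smoothing factor are assumptions for illustration, not a production design.

```python
# Each new observation nudges the stored profile, so the profile drifts
# with the user's current behavior and a snapshot stolen today soon
# stops matching the expected values.
def update_profile(profile, observation, alpha=0.1):
    return {
        feature: (1 - alpha) * profile[feature] + alpha * observation[feature]
        for feature in profile
    }

profile = {"mean_key_interval_ms": 185.0, "swipe_speed_px_s": 420.0}
session = {"mean_key_interval_ms": 190.0, "swipe_speed_px_s": 410.0}
profile = update_profile(profile, session)  # shifts slightly every session
```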
The accuracy and success of any authentication mechanism depend on how well it avoids both false positives and false negatives. A perfect, maximally secure authentication system would have zero of each; unfortunately, the two error rates usually trade off against each other, so lowering one tends to raise the other.
An authentication algorithm designed to be strict to prevent false positives may also produce some false negatives, rejecting legitimate users. Similarly, an algorithm that is lenient to prevent false negatives may end up admitting a few false positives.
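This trade-off is easiest to see with a toy example. The similarity scores below are invented; the point is only that moving a single acceptance threshold shifts errors from one category to the other.

```python
genuine_scores = [0.91, 0.85, 0.78, 0.88, 0.72]   # legitimate-user sessions
impostor_scores = [0.40, 0.55, 0.74, 0.61, 0.35]  # impostor sessions

def error_counts(threshold):
    false_negatives = sum(s < threshold for s in genuine_scores)
    false_positives = sum(s >= threshold for s in impostor_scores)
    return false_positives, false_negatives

for threshold in (0.6, 0.7, 0.8):
    fp, fn = error_counts(threshold)
    print(f"threshold={threshold}: false positives={fp}, false negatives={fn}")
# A strict threshold (0.8) eliminates false positives here but rejects
# genuine users; a lenient one (0.6) does the reverse.
```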
With behavioral biometrics, the accuracy of the algorithms that find patterns and trends across hundreds of dynamic data points is key to correct authentication. These algorithms may be built on assumptions about the user population. For example, a voice recognition algorithm built for an American call center may be more accurate for people with American accents than for people with Scottish accents.
Similarly, assumptions may be made based on other factors such as age, sex, height, location, or language. Thus the cost of accuracy may sometimes be the inclusivity of the algorithm, and vice versa; this trade-off could affect the overall security of the system.
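One way to surface this trade-off is to report error rates per cohort rather than as a single aggregate number. The sketch below uses invented verification results and cohort labels purely for illustration.

```python
from collections import Counter

# (cohort, accepted) outcomes for genuine-user attempts (made-up data).
results = [
    ("accent_american", True), ("accent_american", True),
    ("accent_american", True), ("accent_american", False),
    ("accent_scottish", True), ("accent_scottish", False),
    ("accent_scottish", False), ("accent_scottish", False),
]

attempts, rejections = Counter(), Counter()
for cohort, accepted in results:
    attempts[cohort] += 1
    rejections[cohort] += not accepted

for cohort in attempts:
    frr = rejections[cohort] / attempts[cohort]
    print(f"{cohort}: false rejection rate = {frr:.0%}")
# An aggregate rate of 50% would hide the gap between 25% and 75%.
```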
Behavioral biometric data collection is also far more invasive of individual privacy than other methods. When you register for fingerprint authentication, you are fully aware of what is happening. Behavioral profiling, on the other hand, is not only inherent to the technology but also happens mostly behind the scenes. You may have given consent for data collection at the point of enrolment, but you are not conscious of exactly when it is taking place.
Nor are you aware of what data exists about you, who has access to it, or how it may be used in the future. This complex and opaque data trail may not always leave end users feeling entirely secure about the technology.
The security concerns around behavioral biometric data may be addressed by proper regulation of its collection, storage, and use. Organizations collecting the data should ensure it is protected from unauthorized use. Additionally, the technology should be equally accessible, consistent, and effective for everyone who uses it. That is only possible if the system is tested on a diverse set of people, ensuring that so-called ‘edge cases’ do not result in false positives or false negatives.