With the increasing use of biometric identification systems,
personal privacy and security are decreasing. Technology that less than twenty years ago seemed so advanced
that it would appear only in a James Bond movie or some futuristic display of
what life might one day be like is now commonplace.
Biometric identification systems have
created increasing polarity among privacy experts. They take video surveillance to a
new level by facilitating the creation of individual profiles that can be entered into a database, allowing the systems to recognize a profile match. This could potentially
lead to the capture of more dangerous criminals, the prevention of
terrorist attacks, and the quicker rescue of kidnapped children, but it could also be used in ways that are unconstitutional and violate civil rights. Another concern surrounding biometric identification screening is reliability, with some studies showing that these systems are not especially accurate. That inaccuracy heightens the concern among civil liberty groups about the government using this technology to identify criminals and terrorists.
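The accuracy concern above comes down to how these systems actually decide a "match." A minimal sketch, using entirely hypothetical data and a simple distance threshold: the system compares a probe template (a numeric feature vector derived from a face) against enrolled profiles, and the threshold it picks trades false accepts against false rejects.

```python
# Illustrative sketch only -- real systems use learned embeddings, but the
# match/no-match logic reduces to a distance comparison like this one.
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.5):
    """Return the enrolled name with the closest template, or None if no
    template falls within the match threshold."""
    best_name, best_dist = None, float("inf")
    for name, template in database.items():
        d = euclidean(probe, template)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# hypothetical enrolled profiles
enrolled = {
    "alice": [0.1, 0.9, 0.3],
    "bob": [0.8, 0.2, 0.5],
}

print(identify([0.12, 0.88, 0.31], enrolled))  # close to alice's template
print(identify([0.5, 0.5, 0.9], enrolled))     # nothing within threshold
```

Loosening the threshold catches more true matches but also misidentifies more innocent people, which is exactly the tradeoff the civil-liberty critique targets.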
Widespread use of biometric identification systems would mean
a significant decrease in personal privacy, reducing what is left of our
individual expectation of privacy outside our own homes. It would be difficult to argue that
anyone could have any expectation of privacy at all if biometric systems were
scanning, identifying, and logging our every move. This creates a problem for persons wanting to bring common
law claims for violation of privacy rights, because most common
law claims are based on the "reasonable expectation of privacy" test articulated in
Katz v. United States.
It
is not only the government that seeks to use the data; private industry is
utilizing the technology in marketing and advertising as well as security, turning biometric identification into
a multibillion-dollar industry. In response to the privacy threat of private industry engaging in biometric information
collection, storage, and utilization, some states have passed legislation
regulating the use of the data and requiring express consent from individuals
before a profile can be made. Facebook and Shutterfly have each been sued in Illinois for using biometric data without consent. In the Facebook case, a district court judge dismissed the suit on jurisdictional grounds without addressing the merits. More recently, however, a case against Shutterfly was allowed to proceed through litigation. If that case is heard on the merits, it will become a landmark in this emerging area of the law. The result of the lawsuit will likely have a substantial effect on privacy claims in the future and shape how other states craft biometric information protection statutes.
One of the biggest questions that arises with biometrics is what is being done with all that data. Several of the articles cited raise the point that biometric identifiers cannot be changed. If these identifiers are compromised in a data breach, the result may be more far-reaching than if your credit card number or other information were stolen. Given the recent breaches at the federal government’s Office of Personnel Management and at a large retailer such as Target, the eventuality of biometric data ending up in the wrong hands is a real possibility. The provisions of Illinois’ BIPA (740 ILCS 14/15), which require a private entity collecting biometric data to establish a schedule for the retention and destruction of biometric information after a certain period of time, might help mitigate some of this risk in terms of how much information could be obtained.
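The retention-and-destruction idea in BIPA can be sketched in a few lines. This is a hedged illustration with a hypothetical record schema and an illustrative three-year window (BIPA's actual trigger is satisfaction of the purpose of collection or three years after the last interaction): the point is simply that periodic purging bounds how much biometric data a breach could ever expose.

```python
# Hypothetical sketch of a retention-schedule purge; field names and the
# retention period are assumptions for illustration, not the statute's text.
from datetime import date, timedelta

RETENTION = timedelta(days=3 * 365)  # illustrative three-year window

def purge_expired(records, today):
    """Keep only biometric records collected within the retention window."""
    return [r for r in records if today - r["collected"] <= RETENTION]

records = [
    {"subject": "A", "collected": date(2012, 1, 1)},   # past retention
    {"subject": "B", "collected": date(2015, 6, 1)},   # still in window
]
remaining = purge_expired(records, today=date(2016, 1, 1))
print([r["subject"] for r in remaining])  # only the recent record survives
```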
I find it interesting that the Shutterfly case is being allowed to proceed, as the text of Illinois’ BIPA (740 ILCS 14/10) specifically says that “[b]iometric identifiers do not include . . . photographs.” There is probably an argument that it is not the fact that Facebook and Shutterfly are using photographs, but that they are applying facial recognition software to those photos, that would violate BIPA.
There is validity in the fear that information collected by facial recognition will be shared in ways that people don’t comprehend in their initial agreement. When most people think of privacy on Facebook, they think of the actual photo, not the metadata and facial recognition information stored with it. So too in Shutterfly, where the argument is that biometric data does not encompass photographs or the information obtained from them. The FTC has set out best practices for companies that use facial recognition technology; but because the recommendations would not be binding on corporations, a best-practices approach would not be sufficient to protect consumer privacy.
Obviously, corporations would find value in holding onto collected facial recognition data far beyond its “intended use,” since a company can always use such information to examine consumer interaction with a product. Companies may stretch the intended use from consumer behavior to long-term prediction of a product’s success against other criteria, such as population movement; a company could tell, for instance, when the surrounding population changes and its store is no longer likely to succeed in the area. Thus it seems companies will always stretch the initial purpose of the collected data. Although this is a somewhat vague criterion, one could argue that such use does not necessarily identify the actual person, just their consumer habits. There seems to be a difference between not expecting complete anonymity when out in public and still taking issue with the collection of your data using facial recognition to actually identify you. But this raises the question of whether such information can really be separated so easily and grouped by consumer preference, and where the line falls between actual consent and implied consent for a narrow purpose.
On the other hand, using the cameras all around us to save images and build facial recognition technology that is not used for commercial gain would be highly advantageous to law enforcement. The increased use of cameras would allow these images to be stored, and since recognition works better when there is a larger data set to match against, sharing data would decrease false negatives. Could the government require companies to instantly store the images in a central government system? The more criminals learn to utilize technology, the more law enforcement will need to utilize the technology it has. One open question concerns stop-and-frisk laws and the reasonable expectation of privacy.
To summarize my position before explaining it—I believe that facial recognition software should be limited to public safety purposes, as opposed to those directed towards generating a profit. Even “fun” apps—like having coupons pop into our phones as soon as we enter a store/restaurant/etc.—have the potential for abuse (read: stalking, or even just spam-galore). The way these apps are currently being used is an abuse. Some time ago, Facebook started automatically tagging people in my photos. I was not even impressed with the convenience—how hard was it to tag people before? (Now, there is the potential that I will accidentally tag one of those “friends” I haven’t spoken to in over ten years). Instead, I found it disturbing that employing the facial recognition software was the default. It didn’t first ask for consent from either myself or the people being identified.
Furthermore, I’m not even sure what “consent” would look like. A person consents to using facial recognition software on their phone—but then the facial recognition technology can recognize anyone, not just the user of the phone. That doesn’t sound like the kind of “consent” that we are concerned about. How would the facial recognition technology get consent of the person being identified, rather than the person using the technology?
And to be even more difficult—even if the software companies can answer that question, I still won’t be on board. No amount of security measures will satisfy me. I’ve learned firsthand how easy these measures are to get around. In 2005, my identity was stolen simply because I applied to Berkeley for grad school—a university laptop containing 98,000 students’ applications was stolen from an unattended office during lunch hour. Generally everyone is satisfied with a well-meaning “oops,” so long as there were security measures in place that should have been enough—after all, what else could they have done? That may even be true in the Berkeley context (although I don’t think it was, and it doesn’t help me to feel better, with collection agencies still calling me about outstanding bills coming from Miami, FL), but in this context the answer is to not use facial recognition software in the first place.
Some applications may even sound like a great idea—for example, identifying people who come onto our premises. Who wouldn’t want to make sure that someone with a criminal record isn’t knocking on our door? The problem is that the potential for abuse of even a well-meaning application is too high. Because we cannot rein in unwanted uses, the answer has to be—go back to the days of blissful ignorance when we simply didn’t open the door for strangers if we got that “creepy” vibe. And the days of having to cut coupons, instead of having them pop into our phones as soon as we walk through the door of a business.
I won't try to draw the line of where I think the use is appropriate for public safety, or even what constitutes "public safety." There is potential for abuse even there. Suffice it to say, I am comfortable with some level of governmental use of this technology in the interest of public safety—perhaps, for example, the CIA using the technology to find suspected terrorists. It's probably naive of me, but I feel more comfortable with the technology in the hands of the CIA than Facebook.
[Part 1 of 2: Blogger has character limits on comments]
The very nature of facial recognition technology is such that it cannot practically obtain a person’s express consent before his or her face has actually been recognized by the software, since until the question of a person’s identity is resolved, the question of express consent is never reached. Most facial recognition technology is automated and unsupervised. Consequently, the first step (identification by the act of facial recognition) complicates the second step (obtaining express consent prior to identification). Identity is the key component, and it is not possible to exclude someone from identification without first identifying them using that exact technology.
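The Catch-22 above can be made concrete with a small, entirely hypothetical sketch: even a system that wants to honor an opt-out list must first identify the face in order to check that list, performing the very act the person meant to refuse consent for.

```python
# Hypothetical sketch of the identity/consent Catch-22; the function names
# and toy "database" are illustrative assumptions, not any real system's API.
def process_frame(face_template, identify, opted_out):
    """Look up a face, then discard the result if the person opted out."""
    name = identify(face_template)   # identification happens regardless
    if name in opted_out:
        return None                  # record discarded -- but only AFTER
                                     # the person has already been identified
    return name

# toy identifier: exact-match lookup in an enrolled database
enrolled = {(1, 2): "alice", (3, 4): "bob"}
result = process_frame((1, 2), enrolled.get, opted_out={"alice"})
print(result)  # alice's record is dropped, yet she was still recognized
```

The opt-out branch only ever runs after recognition succeeds, which is why express consent "prior to identification" is structurally out of reach.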
Once a person’s face has been recognized by facial recognition technology, however, depending on the level of integration of the facial recognition software (most likely within the video surveillance software) with the business’s other software like supply chain management, customer relationship management, point-of-sale decision support software, etc., a business may be able to connect the person’s identity with their shopping preferences, etc. in many different ways. Assuming a person is identified via facial recognition and is already in the database, a business may be able to associate that person with their previous purchases, say sporting goods. The point-of-sale decision support software could then alert the right sales clerk, say someone specialized in sporting goods sales, to greet that person, or offer advice relevant to their preferences. Facial recognition is but one piece of this puzzle, while most of the power of consumer databases comes from the breadth of information they are able to obtain, the level of integration of this information, and ultimately being able to generate actionable data that can be used in subsequent interactions with the person. Consent, therefore, must be related to how businesses are able to link and use personal data, as opposed to trying to stop something like facial recognition that is one aspect of it.
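The integration described above—identity feeding point-of-sale decision support—can be sketched with a hypothetical data model. All names and tables here are illustrative assumptions; the point is that facial recognition is just the key that unlocks the links between systems.

```python
# Hedged illustration of linking an identified shopper to purchase history
# and driving a clerk alert; schema and categories are invented for the sketch.
purchase_history = {
    "alice": ["tent", "hiking boots"],   # prior sporting-goods purchases
    "bob": ["novel", "cookbook"],
}

SPORTING_GOODS = {"tent", "hiking boots", "kayak"}

def pos_alert(identified_name):
    """Suggest which specialist clerk should greet the identified shopper."""
    past = purchase_history.get(identified_name, [])
    if any(item in SPORTING_GOODS for item in past):
        return f"Alert sporting-goods clerk: greet {identified_name}"
    return "No specialist alert"

print(pos_alert("alice"))
print(pos_alert("bob"))
```

Note that the sensitive step is not the face match itself but the join against the purchase table—which is why the argument above ties consent to how businesses link and use the data.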
Online retailers have built vast databases that profile a consumer’s habits, which they can use directly to market specific goods to a person and can also sell to other businesses. A person’s online habits can be identified through a combination of techniques like third-party cookies and browser fingerprints, without the knowledge of the vast majority of users. Most of us are perfectly comfortable with Google offering “relevant ads” in Gmail, or Amazon offering online deals on items that match our recent searches. Online social media services like Facebook not only hold a person’s photos (and family photos) but also have access to rich data about a person’s political affiliations, social preferences, responses to ads, etc. Facebook already processes uploaded photos to identify faces and automatically offer tag suggestions. In fact, Facebook’s tagging is insidious in that it offers suggestions to a person’s “Facebook friends” to tag them after having automatically recognized their face, thus validating the data Facebook collected automatically through its users. Moreover, anyone who uses the Facebook/Facebook Messenger app or Google Location Services on a mobile device is broadcasting their GPS coordinates to Facebook and Google, respectively, unless they disable them. That is how smartphones offer the option of “checking in” to a given location in the Facebook app, or automatically offer a restaurant menu in Google Now when you are physically at that restaurant.
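The browser-fingerprint technique mentioned above can be illustrated in a few lines. This is a deliberately simplified sketch with made-up attributes: real fingerprinting combines many more signals (installed fonts, canvas rendering, plugins), but the mechanism is the same—hash whatever the browser reveals into a stable identifier that re-identifies a visitor without any cookie at all.

```python
# Toy browser-fingerprint sketch; attribute names are illustrative assumptions.
import hashlib

def fingerprint(attributes):
    """Hash a canonical form of the browser's revealed attributes."""
    canon = "|".join(f"{k}={attributes[k]}" for k in sorted(attributes))
    return hashlib.sha256(canon.encode()).hexdigest()[:16]

# two visits from the same browser configuration produce the same identifier
visit1 = {"ua": "Mozilla/5.0", "screen": "1920x1080", "tz": "UTC-6"}
visit2 = {"ua": "Mozilla/5.0", "screen": "1920x1080", "tz": "UTC-6"}
print(fingerprint(visit1) == fingerprint(visit2))  # same visitor, no cookie
```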
[Part 2 of 2: Blogger has character limits on comments]
However, most users of these online services are oblivious to the consequences of voluntarily offering personal information that allows a business like Facebook to connect a person’s face with all kinds of other data about them. I don’t see how the use of facial recognition software by brick-and-mortar businesses in public is much different from the combination of automatically recognized photos + GPS location + user confirmation of location (check-in), which a majority of users are perfectly OK leaving enabled on their smartphones. Additionally, as online services expand into the brick-and-mortar world and vice versa, this line will be increasingly blurred, and obtaining express consent may become even more impractical. However, preventing businesses from being able to link and perform subsequent actions using these data might be a possibility.
Finally, with regard to implied consent by businesses that use facial recognition, I suppose that posting a notice might be adequate, so that a person is warned about the consequences of entering the premises. This policy could become complicated in places like malls, where businesses could have some cross-coverage in their video surveillance, which, when combined with facial recognition and data sharing, might again raise the kind of Catch-22 of identity and consent described in the beginning. Therefore, I do not believe that businesses should be required to obtain express or implied consent for the use of facial recognition software itself, because doing so may prove impractical. However, businesses could be required to obtain consent before aggregating and using data collected through various online and in-person means.
It is interesting to think about how this technology will transform an individual's expectation of privacy, especially within the context of the Katz test and how it is used in civil litigation or a criminal prosecution. For example, I think it would be problematic for purposes of the Fourth Amendment if law enforcement were allowed to use facial recognition software to determine the identity of a suspect. There may be a plausible argument that a person waives their expectation of privacy in their identity and likeness when those things are placed on a public forum like Facebook or other social media websites. While these technologies are still burgeoning, it seems less likely that a person has forfeited that expectation, because many people likely expect their social media profiles to be entirely private. As this technology becomes more widespread, however, it seems more likely that a person’s expectation of privacy in their identity or likeness will be diminished, especially when individuals consent to the use of the technology for commercial and other marketing purposes. As a result, it seems plausible that the jurisprudence surrounding warrantless searches and privacy torts will dramatically transform as this technology becomes more commonplace and entrenched in society.