Data is Needed on Facial Recognition Accuracy

This post is inspired by an article titled “Over fakes, Facebook’s still seeing double” by Drew Harwell in the 5 May 2018 issue of the Washington Post. In December Facebook offered a solution to its worsening fake-account problem: new facial-recognition technology to spot when a phony profile tries to use someone else’s photo. The company is now encouraging its users to agree to expanded use of their facial data, saying they won’t be protected from imposters without it. The Post article notes that Katie Greenmail and other Facebook users who consented to that technology in recent months have nevertheless been plagued by a horde of identity thieves.

After the Post presented Facebook with a list of numerous fake accounts, the company revealed that its system is much less effective than previously advertised: the tool looks for imposters only within a user’s circle of friends and friends of friends, not across the site’s 2-billion-user network, where the vast majority of doppelgänger accounts are probably born.

Before any entity uses facial-recognition software, it should be compelled to test that software and to describe in detail the sample on which it was developed, including the size and composition of that sample, as well as the software’s performance with respect to correct identifications, incorrect identifications, and no classifications. Facebook needed to do this testing and present the results, and Facebook users needed to demand those results before agreeing to face recognition. How many times do users need to be burned by Facebook before they terminate their interactions with the application?
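As an illustration of what such a report could quantify, here is a minimal sketch in Python. The recognizer function and the labeled test set are hypothetical, not Facebook’s or any vendor’s actual code; the point is simply that the three outcomes named above can, and should, be tallied and reported as rates.

```python
from collections import Counter

def evaluate(recognizer, labeled_test_set):
    """Tally correct identifications, incorrect identifications,
    and no-classifications over a labeled test sample.

    labeled_test_set: iterable of (image, true_identity) pairs.
    recognizer: callable that returns a predicted identity, or None
                when it declines to classify the image.
    """
    counts = Counter()
    for image, true_identity in labeled_test_set:
        predicted = recognizer(image)
        if predicted is None:
            counts["no_classification"] += 1
        elif predicted == true_identity:
            counts["correct"] += 1
        else:
            counts["incorrect"] += 1

    total = sum(counts.values()) or 1  # guard against an empty sample
    return {outcome: counts[outcome] / total
            for outcome in ("correct", "incorrect", "no_classification")}
```

A vendor would also need to disclose the size and composition of that labeled test sample, since these three rates mean little without knowing whom the system was tested on.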

The way facial recognition is used on television police shows seems like magic. A photo is taken at night with a cellphone and is run against a database that yields the identity of the individual and his criminal record. These systems appear to perform flawlessly. HM has yet to see a show in which someone in a database is incorrectly identified and that individual is arrested by the police, interrogated, and charged. That must happen. But how often, and under what circumstances? Someone with a criminal record is likely to be in the database, yet it is entirely possible that the individual whose photo was taken is not. If there is no true match, will the system return the best match it can find and turn a person who merely happens to be in the database into a suspect in the crime?
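To make that concern concrete, here is a minimal sketch, assuming a hypothetical gallery of face embeddings and an arbitrary distance threshold, of the difference between a closed-set search, which always names somebody, and an open-set search, which is allowed to answer “no match.”

```python
import numpy as np

def identify(probe_embedding, gallery, threshold=None):
    """Return the gallery identity whose embedding is nearest the probe.

    gallery: dict mapping identity -> face-embedding vector (np.ndarray).
    threshold: if given, a nearest distance above it is rejected as
               "no match" rather than reported as an identification.
    """
    best_id, best_dist = None, float("inf")
    for identity, embedding in gallery.items():
        dist = float(np.linalg.norm(probe_embedding - embedding))
        if dist < best_dist:
            best_id, best_dist = identity, dist

    if threshold is not None and best_dist > threshold:
        return None  # open-set behavior: the person may simply not be enrolled
    return best_id   # closed-set behavior: someone is always "identified"
```

Without the threshold, or with one set too loosely, this kind of search will always hand back somebody from the gallery, which is exactly how a person who merely resembles the suspect could end up being questioned.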

The public, and especially defense lawyers, need quality data on how well these recognition systems actually perform.

© Douglas Griffith and healthymemory.wordpress.com, 2018. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to Douglas Griffith and healthymemory.wordpress.com with appropriate and specific direction to the original content.
