Your Facebook selfies could end up in a police surveillance database

When you post a picture to Facebook, it’s not just your friends and grandma who can see it — police are increasingly using social media to identify suspects through controversial facial recognition technology.

Anyone with a driver’s license is already included in databases used by police agencies to identify potential suspects. Social media pictures are also being fed to facial recognition software, bringing the debate over surveillance technology into new territory. Anyone who has their privacy settings set to “public” should expect law enforcement to have access to their profile.

“If you post a photo to Facebook and make it available to the public at large, you should assume it’s been grabbed by many people,” said Eric Goldman, co-director of the High Tech Law Institute at Santa Clara University.

The expansion of police surveillance is raising serious concerns about its impact on online privacy and marginalized communities. It’s also driven bipartisan support for more regulation at the federal level, at a time when few laws are on the books governing how this technology can be used.

Essentially, facial recognition tech works like this: Photos uploaded by investigators are studied by an algorithm that builds a unique map of a person’s face, which is compared against templates of other faces contained in a database. Ideally, the process creates a digital lineup that investigators use to identify a suspect.
A screenshot of a report from the U.S. Government Accountability Office shows an overview of how a typical facial recognition search works. (Screenshot | U.S. Government Accountability Office)
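
To make that description concrete, here is a minimal sketch in Python of the core matching step, assuming each face has already been converted by a neural network into a fixed-length embedding vector. The names (`find_candidates`, the 0.8 threshold, the 128-dimension size) are illustrative assumptions, not any vendor’s actual API; real systems layer face detection, image preprocessing and human review on top of this step.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings (1.0 = same direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def find_candidates(probe, database, threshold=0.8, top_k=10):
    """Rank database faces by similarity to the probe photo's embedding.

    Returns up to top_k (person_id, score) pairs above the threshold --
    a "digital lineup" of possible matches, not a positive identification.
    """
    scores = ((pid, cosine_similarity(probe, emb)) for pid, emb in database.items())
    matches = [(pid, s) for pid, s in scores if s >= threshold]
    return sorted(matches, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy example: random 128-dimensional vectors stand in for the templates a
# real system would compute from driver's license or mug shot photos.
rng = np.random.default_rng(0)
database = {f"person_{i}": rng.normal(size=128) for i in range(1000)}
probe = database["person_42"] + rng.normal(scale=0.1, size=128)  # noisy re-capture
print(find_candidates(probe, database)[:3])
```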

Facial recognition technology remains controversial despite its embrace by police across the country. Imperfect algorithms are known to sometimes misidentify people with darker skin, which led to at least two incidents in Michigan where a person was wrongfully arrested. Civil rights groups worry the technology threatens online privacy and discriminates against minority communities.

“Facial recognition technology can be extremely helpful, and it can also be extremely problematic,” Goldman said. “In general, I don’t think we want to portray facial recognition technology as all bad all the time. I think we’d be losing something in that process. We also have to be cognizant of how easily misused it is.”

Regardless, private companies are making a lucrative business out of selling access to their own databases, which sometimes contain images scraped from social media sites without users’ knowledge. The facial recognition industry is projected to be worth $7 billion by 2024.

Turns out your selfies are worth something after all. Algorithms designed to hunt for human faces have been trained using the average person’s online pictures for close to a decade. Consumers also use facial recognition technology to unlock smartphones and tag friends in photos posted to social media.

Big tech and law enforcement

Clearview AI is a New York company that sells access to its vast database of 3 billion images mined from public websites like Facebook, Instagram, Twitter and LinkedIn. Scraping photos from the internet is in a murky legal area, Goldman said.

Facebook, LinkedIn, Google and Twitter sent Clearview cease-and-desist letters alleging that the company violated their terms of service by scraping user data. Clearview AI was deemed illegal in Canada and is facing several legal complaints in the U.S. and Europe.

Clearview still maintains an extensive list of police clients. At least 10 federal agencies use Clearview’s technology, according to a government watchdog report. The company has also provided free trials to the Michigan State Police and scores of local police departments.

Critics argue that collecting the faces of millions of Americans, including many who didn’t commit a crime, creates a “perpetual lineup.” However, law enforcement groups say facial recognition technology is a valuable crime-fighting tool when used correctly.

How would a ban on facial recognition impact law enforcement? An MSP fact sheet provides one answer from the police perspective:
“If law enforcement lost the ability to use FR, facial examiners would be forced to analyze images manually, resulting in lengthy, inefficient, and costly investigations. The resulting investigative delays would put the public at a greater risk of victimization.”

Shobita Parthasarathy, director of the University of Michigan’s Science, Technology and Public Policy program, said facial recognition technology is assumed to be less biased than a human being. But she points to a growing body of research showing the tools make mistakes when identifying people of color, children, women, gender non-conforming people and disabled people.

“We’re only really hearing about cases of misidentification that are reported because there are threats of lawsuits,” she said. “I think it’s probably happening much more frequently.”

Michigan has already experienced several examples of Black people being misidentified, resulting in wrongful arrests and legal action.

In 2019, a Detroit man was wrongfully charged with felony larceny after his face was incorrectly matched to a person in a cellphone video uploaded by police. Last year, a Farmington Hills man was wrongfully arrested after he was mistakenly matched to video surveillance images from a retail store. A 14-year-old girl was kicked out of a Livonia roller rink in July after being misidentified by facial recognition software.

Robert Williams, the Farmington Hills man wrongfully arrested for shoplifting, testified before a U.S. House panel on police facial recognition technology. The American Civil Liberties Union filed a lawsuit against the Detroit Police Department on his behalf.

“I never thought I would be a cautionary tale,” Williams told members of Congress in July. “More than that, I never thought I’d have to explain to my daughters why their Daddy got arrested in front of them on our front lawn. How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway and that meant they could arrest me for a crime I didn’t commit?”

Parthasarathy warns that facial recognition tools could worsen systemic problems with policing and surveillance in minority communities. Research on “forensic confirmation bias” suggests that, in the face of uncertainty, people tend to focus on evidence that confirms their expectations while ignoring information that does not.

“I think it’s harder for us cognitively to understand that the technology reflects human biases,” she said. “The technologies are often sold as being more objective. We scrutinize these kinds of technologies even less because we assume that they’re not going to be biased.”

While companies highlight the ability of facial recognition tools to aid police investigations, Goldman said there’s less transparency about whether the software will create racially biased outcomes.

“In practice, we suspect that every facial recognition database encodes racial bias,” he said. “We have substantial evidence that law enforcement doesn’t do standard investigatory practices if they have access to facial recognition technology. They use it to cut short their efforts, and if the search results they got were racially biased, then all the remaining law enforcement efforts they make might be encoding or predicated on that racial bias.”
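
Researchers test for the kind of bias Goldman describes by comparing error rates across demographic groups at the same match threshold. The sketch below, which builds on the matching example earlier in this story, is a simplified illustration under assumed data; `data_by_group` is hypothetical, and real audits, such as the federal government’s vendor tests, use large labeled photo sets.

```python
import numpy as np

def false_match_rate(embeddings, labels, threshold=0.8):
    """Share of different-person pairs whose similarity clears the match
    threshold -- the error that puts innocent people in the lineup."""
    false_matches = comparisons = 0
    n = len(embeddings)
    for i in range(n):
        for j in range(i + 1, n):
            if labels[i] == labels[j]:
                continue  # only different-person pairs can be false matches
            comparisons += 1
            a, b = embeddings[i], embeddings[j]
            sim = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            if sim >= threshold:
                false_matches += 1
    return false_matches / comparisons if comparisons else 0.0

# A biased system shows markedly different false match rates between
# groups at the same threshold. `data_by_group` is hypothetical:
# {"group_a": (embeddings, person_ids), "group_b": (...), ...}
# for group, (embs, ids) in data_by_group.items():
#     print(group, false_match_rate(embs, ids))
```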

How it’s used in Michigan

Facial recognition technology isn’t new to Michigan policing. Since 2001, the Michigan State Police has maintained a database that local police can access. It has grown to 55 million images as of Aug. 1, and 2 million images were added in the last year.

The MSP database is made up of driver’s licenses, mug shots, ID photos and other images created by the Department of Corrections and Department of State. It does not include social media images, according to a spokesperson. However, police can input social media pictures to search for matching faces in the MSP database.

The Detroit Police Department also uses public records, social media images and a network of surveillance cameras spread across the city to identify possible suspects with facial recognition technology.

A screenshot from a map of all Project Green Light partners in Detroit shows where real-time surveillance cameras are installed across the city as of Aug. 3, 2021. (Screenshot | City of Detroit)

DPD did not respond to questions about how social media images are used and whether social media photos are retained in its database. DPD also did not answer questions about the size of its database or whether it has used Clearview AI tools in criminal investigations.

A July report to Detroit’s Board of Police Commissioners showed social media photos were used in 25 investigations since the beginning of 2021. That accounts for 35% of facial recognition investigations this year, according to the most recent data.

Facial recognition technology produced a possible match in 62% of all cases, meaning the images failed to identify a suspect in the remaining 38% of cases.

Faces matched by the computer are not supposed to be considered a positive identification under policies set by MSP and Detroit police. Any connection to a possible suspect must be established through further investigation.

In 2019, in response to public opposition, the Detroit Board of Police Commissioners restricted facial recognition technology to investigations of violent crimes and home invasions.

Protesters are demanding the city of Detroit stop using facial recognition software altogether. The city and DPD stand by its use, arguing the benefits outweigh the risks.

Willie Bell, a retired Detroit police officer and member of the city’s police oversight commission, said “no system is perfect” but facial recognition technology has proven useful. Bell said the tool, when used correctly, can speed up investigations and solve crimes.

“We hear the concerns of the community, but we think this is a necessary tool in this day and age of fighting crime,” Bell said. “This is something that, so far, is working for us.”

Detroit started using facial recognition technology in 2017. Last year, the city extended its contract with DataWorks Plus, a South Carolina–based company that contracts with police in other major cities like Chicago, New York, Los Angeles and San Francisco.

Under the original contract with Detroit, DataWorks offered the ability to search faces in real-time through video surveillance feeds. DPD amended its internal policy in 2019 to end the use of facial recognition on live feeds after pushback from residents and activists. DPD policy also prohibits facial recognition from being used to assess a person’s immigration status.

DataWorks did not respond to a request for comment.

A recent report from the U.S. Government Accountability Office found that 20 federal agencies own or use systems with facial recognition technology. It also found more than a dozen federal agencies didn’t know the full scope of their facial recognition efforts and lacked effective means to track the technology’s use.

Several federal agencies that have used Clearview tools operate extensively in Michigan, including U.S. Customs and Border Protection, Immigration and Customs Enforcement and the FBI.

The Michigan State Police was given a six-month trial version of Clearview tools in 2019 but decided not to pursue a contract with the company, according to a spokesperson.

Six federal agencies reported using facial recognition on images from protests and civil uprisings sparked by the death of George Floyd in May 2020. Three agencies reported using it on images taken from the U.S. Capitol riot on Jan. 6.

Facebook posts provide a treasure trove of digital evidence for investigators trying to identify people who broke the law on Jan. 6. U.S. Capitol Police also used Clearview AI to help generate leads, and federal agents obtained internal records from social media platforms to identify suspects.

Clearview AI doesn’t disclose its customers, though a BuzzFeed News report found 1,803 publicly funded agencies used or tested the policing tool before February 2020.

The BuzzFeed report, based on internal Clearview data, suggests 30 Michigan law enforcement agencies used the tool. Police departments in Grosse Pointe and St. Clair Shores also said they used free trials of Clearview AI.

“We know that law enforcement is a huge client for Clearview AI,” Parthasarathy said. “They also have private clients. We don’t always know whether a law enforcement agency is working with them.”

Facial recognition technology is turning up in all kinds of other places. Dozens of Michigan police departments have partnered with Amazon’s Ring doorbell service to obtain footage recorded by private cameras. Homeowners can decline requests to view their footage, however.

Parthasarathy was among a team of U-M researchers who authored a report on facial recognition technology in schools. The report recommended a five-year ban on using the technology in school districts until data privacy laws and regulatory policies are created.

Wild West of technology

So far, facial recognition technology has largely evaded regulation by state and federal lawmakers.

In the last year, IBM, Amazon and Microsoft announced they would stop selling their facial recognition technology to police until Congress creates a federal law regulating its use.

U.S. Rep. Rashida Tlaib, D-Detroit, sponsored a bill that would prohibit federal agencies from using biometric surveillance and withhold funding from state and local police departments that use it.

Several bills aimed at regulating facial recognition technology have been introduced in Michigan, but none have moved forward. Two bills in 2019 sought to create a five-year moratorium on the technology and outlaw the scanning of live video feeds.

Another bill introduced in June by state Rep. Karen Whitsett, D-Detroit, seeks to prevent police from using facial recognition programs with recordings made by body cameras unless authorized by a warrant.

Several states have passed data privacy laws to regulate the collection of biometric data. The ACLU sued Clearview in Illinois, arguing the company violated the state’s privacy laws.

Members of the Ann Arbor City Council discussed banning police from using facial recognition technology earlier this year.

The Michigan State Police is part of an international scientific working group exploring facial recognition technology. Several “success stories” of its use in Michigan were reported at hearings held by the U.S. Department of Justice and the Presidential Commission on Law Enforcement and the Administration of Justice in 2020.

In one example, an unnamed local police agency in Michigan used a social media photo to identify a homicide suspect, who was later convicted. Another example described how a federal law enforcement agency submitted a social media photo to MSP’s database to identify a suspect who allegedly solicited minors online.

Parthasarathy said people may not be aware that they’re allowing tech companies to harvest their data. Companies like Amazon, Google and Facebook have made a fortune collecting personal data, including biometric information, and using it to sell targeted advertising.

“There are things that people can do — they can set their privacy settings, they can stop putting pictures on social media — but that’s exceedingly hard to do now,” Parthasarathy said. “I also don’t want to put the burden on people. I don’t think that that’s fair, given all of the things that we have to worry about. I think that this is the responsibility of policymakers.”
