The Long Eye of the Law: So Who's Ready for a 'Minority Report'-Style Future?

So-called "predictive policing" isn't an exact science. But what if it could be?
A face-analysis system developed by Fraunhofer-Gesellschaft, a German research organization, that attempts to identify the gender and mood of individuals.

Police interrogators, like professional poker players, don’t depend on luck for their success. They depend on the ability to read their opponents’ “tells”. Most of us don’t stand a chance against the pros because they’ve learned to spot all the ways our bodies betray us unconsciously: Sweating or failure to make eye contact may indicate a suspect is lying, for example—a weak hand, so to speak.

It’s not an exact science. Interrogators learn to spot these signs through years of training and job experience. But what if it could be? On Monday, the Japanese tech developer Fujitsu announced it had created something close: a bit of technology that can measure a person’s pulse with an ordinary camera or computer webcam, just by analyzing that person’s face.

The software works by “measuring variations in the brightness of the person's face thought to be caused by the flow of blood,” the company explains in a press release. Hemoglobin in the blood absorbs green light, and the new software—which doesn’t require any special cameras or hardware—measures a person’s pulse by detecting how much green light the face absorbs. The software automatically selects “moments when the person's body and face are relatively still,” Fujitsu explains, “to minimize the effects of irrelevant data on measurements.” It takes as little as five seconds.

One obvious application for this kind of technology, the company notes, is for self-health monitoring—another advancement in the so-called “quantified self” movement that would let people use laptops, smartphones, and tablets to easily measure their own pulses. But another clear application is security: The technology could be used at airports, the company adds, to detect “people in ill health and people acting suspiciously.” It’s easy to imagine the technology being used at other security checkpoints as well—border crossings, entrances to government buildings, and other high-security sites like nuclear power plants are a few ready examples.

The basic function of Fujitsu’s technology isn’t entirely new. In 2010, scientists led by a grad student in the Harvard-MIT Health Sciences and Technology program developed a system that also measured blood flow in the face to identify a person’s pulse rate. But Fujitsu plans to put its product on the market sometime this year, making it commercially available outside the lab.

At that point, what’s to stop the technology from being used at the workplace, in public squares, in train stations, banks, or sports venues? It’s Minority Report-style technology, to be sure—another in a burgeoning list of tech-driven ways police could prevent crimes before they happen. But in our era of spy satellites and drone surveillance—not to mention the Aurora and Sandy Hook shootings—do we really care?

Probably not. And it may not be so practical anyway. Things would get sticky awfully quick if police began apprehending everyone with an excitable pulse. But as one tool among many, it’s reasonable to assume police and others won’t shy away from using it given the chance. In myriad other ways, they're anticipating our actions already.

Per Fujitsu: “The technology starts to work by shooting video of a subject and calculating average values for the color components (red/green/blue) in a certain area of the face for each frame. Next it removes irrelevant signal data that is present in all three color components and extracts the brightness waveform from the green component. The pulse rate is then computed based on the peaks in that brightness waveform.”
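
Fujitsu hasn’t released its code, and the sketch below isn’t it. But the recipe it describes (average the green channel over a patch of facial skin, clean up the signal, read the pulse off its dominant rhythm) is simple enough to mock up in a few dozen lines of Python. The face region, frequency band, and the crude stand-in for Fujitsu’s stillness check are all illustrative assumptions, not the company’s parameters.

```python
# A rough sketch of the approach Fujitsu describes: average the green channel
# over a patch of facial skin in each frame, detrend the resulting signal, and
# read the pulse off its dominant frequency. Illustrative only; the region,
# thresholds, and frequency band are assumptions, not Fujitsu's parameters.
import cv2
import numpy as np

def estimate_pulse_bpm(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0  # fall back if the file doesn't say
    face_cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    green_means = []

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        faces = face_cascade.detectMultiScale(
            cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY), 1.3, 5)
        if len(faces) == 0:
            continue  # crude stand-in for Fujitsu's "use only still moments" step
        x, y, w, h = faces[0]
        # A forehead-ish patch: the upper middle of the detected face box.
        roi = frame[y + h // 10 : y + h // 4, x + w // 4 : x + 3 * w // 4]
        green_means.append(roi[:, :, 1].mean())  # OpenCV frames are BGR; 1 = green
    cap.release()

    if len(green_means) < int(5 * fps):
        raise ValueError("need at least a few seconds of usable face frames")

    signal = np.asarray(green_means) - np.mean(green_means)  # remove the DC offset
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    spectrum = np.abs(np.fft.rfft(signal))
    band = (freqs >= 0.75) & (freqs <= 3.0)  # 45 to 180 beats per minute
    return freqs[band][np.argmax(spectrum[band])] * 60.0

# print(estimate_pulse_bpm("webcam_clip.mp4"))
```

The real product presumably does far more to reject motion and lighting changes; the point is only that nothing more exotic than an ordinary webcam is required.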

If you’re concerned about surveillance creep—and you should be—precedent offers precious little comfort. Cops are developing lots of ways to pick you out of a crowd, literally or otherwise. As Evgeny Morozov noted in the Guardian recently, cities like Oakland, Calif., are already permeated with hidden sensors and microphones that can, among other things, identify gunshots and triangulate their source. The program is called ShotSpotter, and it is installed at locations in four countries and 75 US cities. It’s the kind of technology that could easily be folded into the broader “predictive policing” trend already gaining favor among law enforcement officials. Morozov writes:

It's not hard to imagine ways to improve a system like ShotSpotter. […] Instead of detecting gunshots, new and smarter systems can focus on detecting the sounds that have preceded gunshots in the past. This is where the techniques and ideologies of big data make another appearance, promising that a greater, deeper analysis of data about past crimes, combined with sophisticated algorithms, can predict – and prevent – future ones. […] It's the epitome of solutionism; there is hardly a better example of how technology and big data can be put to work to solve the problem of crime by simply eliminating crime altogether.
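Prediction is the speculative part; the detection and location ShotSpotter does today is well-understood physics. The company doesn’t publish its algorithms, so the following is a generic time-difference-of-arrival sketch rather than its method: several microphones at known positions hear the same bang at slightly different times, and a least-squares fit recovers where it came from. The sensor layout and timestamps below are invented for illustration.

```python
# Generic time-difference-of-arrival (TDOA) localization, the basic physics
# behind acoustic gunshot location. Illustrative only: the sensor layout and
# timestamps are invented, and this is not ShotSpotter's actual algorithm.
import numpy as np
from scipy.optimize import least_squares

SPEED_OF_SOUND = 343.0  # metres per second, roughly, in air at 20 C

def locate_source(sensor_xy, arrival_times):
    """Estimate a 2D source position from arrival times at known sensors."""
    positions = np.asarray(sensor_xy, dtype=float)
    t = np.asarray(arrival_times, dtype=float)

    def residuals(src):
        dists = np.linalg.norm(positions - src, axis=1)
        # Range differences relative to the first sensor should match the
        # measured time differences times the speed of sound.
        return (dists - dists[0]) - SPEED_OF_SOUND * (t - t[0])

    return least_squares(residuals, x0=positions.mean(axis=0)).x

# Four hypothetical rooftop sensors (metres) and the moments each heard a bang
# (seconds); these numbers are consistent with a shot fired near (120, 80).
sensors = [(0, 0), (400, 0), (400, 400), (0, 400)]
times = [1.4205, 1.8490, 2.2397, 1.9964]
print(locate_source(sensors, times))  # roughly [120. 80.]
```

The geometry is the easy part; the harder problem, by most accounts, is deciding whether a given bang was a gunshot at all and not fireworks or a backfiring truck.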

But are such techniques ethical? Police in Los Angeles are already using a program that crunches stats about previous crimes to tell officers when and where to look for new ones. Just last summer, New York’s Mayor Bloomberg unveiled a new policing system, the Domain Awareness System, which integrates Gotham’s thousands of closed-circuit surveillance cameras with crime-stat databases, license plate readers, 911 calls, and radiation detectors.

As Morozov correctly notes, it’s not a quantum leap from real-time systems like these that track crime to systems that, like Amazon’s recommendation algorithms, get better and better at predicting where and when crimes might be committed and by whom. Those developments will have serious consequences. An algorithm already tested in Baltimore, Philadelphia, and Washington, DC, is being used to predict how likely a convicted criminal on parole or probation is to kill or be killed—which, as he points out, could one day “influence sentencing recommendations and bail amounts.”

But that’s just the beginning. In the same way that social media has proved to be a goldmine for marketers seeking data about our lifestyles and clicking habits, it has also proved useful to police for predicting crime. According to a report in the San Gabriel Valley Tribune, the Los Angeles County Sheriff's Headquarters Bureau has someone in place to monitor social media sites like Facebook, Twitter, and Instagram 24 hours a day. Per the Tribune:

“They're watching social media and Internet comments that pertain to this geographic area, watching what would pertain to our agencies so we can prevent crime, help the public,” LASD Capt. Mike Parker said. “And now they're going to be ramping up more and more with more sharing and interacting, especially during crises, whether it's local or regional.”

Since launching last September, the eight-member eComm unit has identified a suicidal teen on Instagram, intercepted bomb threats made on Twitter and discovered plans for hundreds of illegal drug parties via Facebook, Instagram and Twitter.

It’s nice to think such policing techniques will always be used responsibly. Clearly they are proving effective. But as algorithms get increasingly complex and better at predicting, the line between good police work and privacy infringement gets ever blurrier. Just last week, a new study revealed how our Facebook likes can predict with astounding accuracy all sorts of other things about us—things we aren’t necessarily inclined to publicize or emphasize on our own. After crunching the numbers, researchers could tell with over 80 percent accuracy if you were gay or straight; black or white; Muslim or Christian; Democrat or Republican. In making gender distinctions, researchers were 93 percent accurate.

It didn’t stop there. Researchers for the Facebook study, which was conducted at the University of Cambridge in the UK, could tell a lot more things about us than most of us might imagine, let alone like. They could tell with between 60 and 70 percent accuracy whether you smoked, drank, did drugs, were in a relationship, and whether your parents were still together by your 21st birthday. Combing through the vocabulary of the things we “Liked,” they isolated words that were predictive. “The best predictors of high intelligence,” they write, “include ‘Thunderstorms,’ ‘The Colbert Report,’ ‘Science,’ and ‘Curly Fries.’” As for low intelligence, the best predictors were “Sephora,” “I Love Being A Mom,” “Harley Davidson,” and “Lady Antebellum.”
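
The Cambridge team’s exact pipeline isn’t reproduced here, but the shape of the idea is easy to sketch: represent each person as a row of zeros and ones over the pages they’ve Liked, fit a model that maps that row to a known attribute, and the model’s weights tell you which Likes are the “best predictors.” The toy Python below does that on invented data; the page names are just the study’s examples reused as column labels, the users and the trait are synthetic, and the actual study worked from tens of thousands of real profiles with rather more careful statistics.

```python
# A toy version of the Likes-to-traits idea: each user is a row of 0s and 1s
# over pages, and a classifier learns to predict a hidden binary attribute
# from that row. All data here is synthetic; the page names are just the
# study's examples reused as column labels.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
pages = ["Thunderstorms", "The Colbert Report", "Science", "Curly Fries",
         "Sephora", "Harley Davidson", "Lady Antebellum", "I Love Being A Mom"]

# Fake dataset: 1,000 users whose chance of Liking each page leans slightly
# one way or the other depending on a hidden trait we want to recover.
n_users = 1000
trait = rng.integers(0, 2, size=n_users)        # the attribute to predict
base = rng.uniform(0.1, 0.4, size=len(pages))   # baseline Like rates
lean = rng.uniform(-0.2, 0.2, size=len(pages))  # how each page tilts with the trait
probs = np.clip(base + np.outer(trait, lean), 0.01, 0.99)
likes = (rng.random((n_users, len(pages))) < probs).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, trait, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))

# The fitted weights say which Likes push the prediction up or down, which is
# how a study ends up with a list of "best predictors" for a trait.
for page, weight in sorted(zip(pages, model.coef_[0]), key=lambda pw: -abs(pw[1])):
    print(f"{page}: {weight:+.2f}")
```

Swap in real Likes and real labels, and a handful of lines like these would start producing the kind of numbers that made headlines.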

And yet, as shocking as that was, it wasn’t really news. People whose jobs involve figuring us out—from political campaign leaders to marketing firms—have had our number for a long time. Companies like Facebook and Google wouldn’t exist without looking at us through our data. It’s what they package and monetize. Political campaigns have been doing it for years, too. In 2004, the Bush campaign was able to find voters using micro-targeting strategies that could predict with “80 to 90 percent certainty” whether a voter would go Republican based on the kinds of metrics measured in the Facebook study, like whether or not the voter liked The Simpsons. The Obama campaign may have used Big Data even better.

Could police practices that draw from data-driven inferences like these one day lead to better crime prevention? Undoubtedly, yes. But could they also lead to more prejudicial profiling, more unwarranted harassment, and a greater erosion of our privacy and civil liberties? It’s hard to imagine they won’t. Like guns and energy drinks? Maybe we ought to bring you down to the precinct for a chat.

Police harassment notwithstanding, there’s something deeply unsettling about all this. It’s a terrible feeling to be surveilled—and just as bad to be told we’re so predictable. Predictability belittles the way most of us want to feel about ourselves. Maybe there’s no such thing as free will after all, as some neuroscientists, psychologists, and thinkers since David Hume have long suspected. But if it’s an illusion, most of us prefer that illusion. We want to feel that we have something a little more exciting to offer—a bit of spontaneity, perhaps, some freshness, some passion. Passion is uniquely human; to feel predictable is to feel dehumanized.

Robots are predictable, and we aren’t robots. Not yet anyway.

Lead image via Kerry Ahern