Algorithmic Bias
An Insurance Startup Bragged It Uses AI to Detect Fraud. It Didn’t Go Well
Lemonade backtracked after suggesting it uses “non-verbal cues” like eye movements to reject claims. Its response raises more questions than answers.
Google’s New Dermatology App Wasn’t Designed for People With Darker Skin
The company trained the system to recognize different skin conditions. But like Google itself, the app’s data has a diversity problem.
Proctorio Is Using Racist Algorithms to Detect Faces
A student researcher has reverse-engineered the controversial exam software—and discovered a tool infamous for failing to recognize non-white faces.
‘Coded Bias’ Is the Most Important Film About AI You Can Watch Today
The new documentary is an essential introduction to algorithmic bias—and the systems that gave rise to it.
Underpaid Workers Are Being Forced to Train Biased AI on Mechanical Turk
Workers who label images on platforms like Mechanical Turk say they’re incentivized to make their responses fall in line—or risk losing work.
Sexist AI Is Even More Sexist Than We Thought
A new study shows bias is deeply ingrained in algorithmic models, which generate sexualized images of women while producing professional images of men.
Gun Detection AI Is Being Trained With Homemade ‘Active Shooter’ Videos
Companies are using bizarre methods to create algorithms that automatically detect weapons. AI ethicists worry they will lead to more police violence.
Facial Recognition Company Lied to School District About its Racist Tech
Documents reveal Lockport Schools' facial recognition tech has mistaken broom handles for guns and has misidentified Black students at much higher rates.
Faulty Facial Recognition Led to His Arrest—Now He’s Suing
Michael Oliver is the second Black man known to have been wrongfully arrested by Detroit police because of the technology—and his lawyers suspect there are many more.
Las Vegas Cops Used ‘Unsuitable’ Facial Recognition Photos To Make Arrests
Records obtained by Motherboard show the police department used sub-par images in almost half of its facial recognition searches, increasing the chance of misidentifying suspects.
Police and Big Tech Are Partners in Crime. We Need to Abolish Them Both
Silicon Valley has made billions of dollars empowering the police by pitching surveillance and data analysis technology as unbiased. It’s not.
Over 1,000 AI Experts Condemn Racist Algorithms That Claim to Predict Crime
Technologists from MIT, Harvard, and Google say research claiming to predict crime based on human faces creates a "tech-to-prison pipeline" that reinforces racist policing.