Twitter bots can fight racism — if they’re white and popular
This segment originally aired Nov. 29, 2016, on VICE News Tonight on HBO.
If you’re black and want to ask a white person to stop being racist on Twitter, it may be more effective to have a white friend do it for you.
That’s what the data indicates in a recent paper published by NYU political science graduate student Kevin Munger. He conducted an experiment in which he tracked white Twitter users who were calling people the n-word. He then created four accounts: two with a white cartoon face and two with a black one; one of each had 500 followers, and one of each had only two followers.
Whenever a subject used the n-word again, he had one of the accounts send them a simple tweet: “Hey man, just remember there are real people who are hurt when you harass them with that kind of language.”
In general, only users who got a tweet from a white account with hundreds of followers reduced their usage of the slur. Black accounts, on the other hand, were unsuccessful. And one set of subjects actually tweeted out more racist comments after being gently nudged by a black account with only two followers.
I called Kevin Munger to ask why white people using racist language appeared to listen only to white Twitter bots.
[This interview has been edited for clarity.]
VICE NEWS: How did you conduct the experiment?
Kevin Munger: The most important part was finding the subjects. Basically, I was finding people who tweeted the n-word. Then I scraped their history to make sure they were doing this repeatedly. Then I looked at the actual tweet I found, making sure they were using the word in a harassing way, and not as a jest between friends. Another condition was that the potential subject was a white man. Then I had four types of bots: white or black, with a few followers or a lot of followers. I tracked the users’ behavior for two months to see how it changed.
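The screening and assignment process Munger describes can be sketched in code. This is a hypothetical illustration only: the field names, repetition threshold, and condition labels are assumptions for the sketch, not Munger's actual code or data.

```python
import random

# Illustrative sketch of the subject-selection and bot-assignment logic
# described in the interview. All names and thresholds are assumptions.

# The four bot conditions: avatar race crossed with follower count.
BOT_CONDITIONS = [
    ("white", "high_followers"),  # white cartoon face, ~500 followers
    ("white", "low_followers"),   # white cartoon face, 2 followers
    ("black", "high_followers"),
    ("black", "low_followers"),
]

def qualifies(user):
    """Apply the screening criteria from the interview: repeated use of
    the slur, a harassing (not joking) context, and a white male user."""
    return (
        user["slur_count"] >= 3              # assumed repetition threshold
        and user["context"] == "harassing"
        and user["race"] == "white"
        and user["gender"] == "male"
    )

def assign_condition(rng):
    """Randomly assign a qualifying subject to one of the four bots."""
    return rng.choice(BOT_CONDITIONS)

# Toy candidate pool standing in for scraped Twitter histories.
candidates = [
    {"id": 1, "slur_count": 5, "context": "harassing",
     "race": "white", "gender": "male"},
    {"id": 2, "slur_count": 1, "context": "joking",
     "race": "white", "gender": "male"},
]

rng = random.Random(0)  # seeded for reproducibility
subjects = [u for u in candidates if qualifies(u)]
assignments = {u["id"]: assign_condition(rng) for u in subjects}
```

Here only the first candidate passes the screen, since the second used the word once, as a jest; the qualifying subject is then randomized into one of the four bot conditions before the two-month tracking period.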
What did you expect to find?
The theory I was testing suggested that tweets from white bots — which for white people is their “in-group” — have a perceived shared identity, and that they would have the largest effect. And that’s what I found.
What is it about black bots that made them less effective?
There are two things going on here when I send that tweet. First, I’m getting them out of their social group identity, and reminding them of their individual identity. As individuals, they know it’s wrong. Second, I’m showing them that their behavior is not appropriate for the group that they see themselves as belonging to. And my experiment found that the second part didn’t work as well with black bots.
So if you’re asking white people to stop being racist, it’s more effective to use a white face?
Right. The lack of an effect from the black bot is because the subject doesn’t feel like they have anything in common with a black person.
Some of the people you tracked were anonymous, but others included identifying information in their Twitter bios. And you found that non-anonymous people started sending out more racist tweets after a black account asked them to be nice. That’s a little surprising, since they should be afraid of being exposed, right?
It was contrary to my expectations too. My hypothesis was that the opposite would be the case. I thought that people would be ashamed, but they weren’t.
Why do you think that is?
There are two categories of harassers. The first is people with anonymous accounts, coming from 4chan or somewhere like it, who are out to make people angry. When they get pushback in a civil way, they are likely to feel bad and say, “This isn’t fun anymore.” But then there’s another category, for whom being abusive is something they care about; it’s actually their identity. So it’s not as easy to change their behavior by reminding them of their non-online persona.
Your experiment reminds me of a study done in Miami, where researchers spent months studying transphobic attitudes. They knocked on doors and had 10-minute interviews with people, and found out that those short conversations reduced subjects’ prejudice. Your study didn’t quite work out that way all the time. What’s the difference?
The mechanism that’s behind real world beliefs is different from online behavior. In online behavior, it’s stemming from social norms. If my bot in the experiment is working, it’s because it’s promoting different social norms. That’s why you get different effects from the different race bots. Getting a message from someone in your in-group is telling you what is normal for people like you. But if the person is not like you, that doesn’t tell you anything about what people like you should be doing.
Your study is also pretty timely, given that Twitter has been publicly struggling to deal with harassment.
Yeah, there’s a lot of interest in calling out bad behavior online. But I think sometimes people are doing it counterproductively. Given the way social norms work, it’s important to emphasize the social similarities between yourself and the person you’re trying to influence. You have to convince them that you’re on the same side. Once you have that sympathy, they’re more likely to listen to what you have to say. If my bots had said, “You’re racist and that’s bad,” that would emphasize the difference, and they’d be less likely to be receptive.
Some people would feel weird about being asked to be “nice” to people who are being racist. But you’re saying strategically, approaching someone gently is better.
I believe so. But I would say that you can’t really draw that conclusion from this study. I didn’t test for the difference in reactions to a gentle message versus a harsher one. I’m actually working on testing that experimentally right now.
So what would a Part 2 of this study look like?
I want to apply this study to online arguments. The experiment I’m working on now found people who were tweeting at Trump or Clinton a month before the election, and looked at the people who were responding. I’m interested in that because in general, exposure to extreme disagreement makes you less likely to engage in a conversation, because it denies someone the right to have an opinion.