All work, including community management, requires trade-offs, areas of focus, and prioritization. Our teams and resources allow us to increase our areas of focus and more consistently foster the interactions that our communities exist for. But for an organization with the staff and resources of Facebook, you’d expect the trade-offs to be few and far between, and the areas of focus to be vast – covering the areas of the platform prone to abuse just as much as areas that foster healthy interactions.
But for Facebook, Sophie describes how, at least internally, those lines between healthy interactions and “inauthentic interactions” surfaced potential conflicts of interest, slowness to take action, and a tendency to focus on some countries more than others.
When we’re prioritizing what to work on or how to foster our communities, we may reference company values or internal OKRs. But for community professionals, there’s also the question of how this preserves the safety of the community and those in it. How is Facebook scaling to protect the political safety of its members? Or perhaps a better question is, does it even think it has the responsibility to do so? As Sophie says, “it’s important to remember that, at the end of the day, Facebook is a company. Its goal is to make money. It’s not focused on saving the world or fixing the product. I think it’s important to be cynically realistic about the matter.”
If you enjoy our show, please know that it’s only possible with the generous support of our sponsor: Vanilla, a one-stop shop for online community.
[00:00:04] Announcer: You’re listening to Community Signal, the podcast for online community professionals. Sponsored by Vanilla, a one-stop-shop for online community. Tweet with @communitysignal as you listen. Here’s your host, Patrick O’Keefe.
[music]
[00:00:25] Patrick O’Keefe: Hello and thank you for listening to Community Signal. Last month Sophie Zhang, a former data scientist at Facebook, went public as a whistleblower, drawing attention to how the company delayed action against or outright ignored manipulation of its platform by autocratic leaders and governments around the world, to the detriment of the people of those countries. She joins us to discuss.
Do you enjoy our program? Please consider supporting it on Patreon at communitysignal.com/innercircle. Thank you to our current supporters, including Carol Benovic-Bradley, Heather Champ, and Maggie McGary.
Sophie Zhang was hired by Facebook to find fake engagement and she did, but when she uncovered it, Zhang found that the company was often extremely slow to take action, if it took action at all. This was especially evident in poorer countries. Zhang’s struggle to get Facebook to act and her subsequent firing were covered in an excellent story by Julia Carrie Wong of The Guardian. Wong’s reporting is referenced heavily in this episode and we’ll have a link to that story in the show notes. It’s worth your time.
One of the foundational things that Zhang exposed is an enforcement gap between how Facebook approaches fake accounts and fake pages. Facebook has teams that look for fake profiles but fake pages don’t receive that attention. These pages are made to look like real people but they aren’t. Then these pages posing as profiles are used to skew public opinion through likes and comments.
For example, Juan Orlando Hernandez, the President of Honduras, was found to have received likes from 59,100 users over a six-week period in 2018. More than 78% of those likes were deemed to have come from phony accounts, except it wasn’t just accounts. Many likes were coming from Facebook pages that were set up to look like a person. They found a single Facebook account that was controlling hundreds of those pages in addition to the official pages for both the president and his late sister.
Yet, initially the right teams at Facebook didn’t take action because there wasn’t an appetite for fixing abuses by the president of a poor nation with only 4.5 million Facebook users. Other things were more important. It was only after persistent efforts including a post by Zhang on Facebook’s internal communications platform that anything happened. Even that action was almost one year after Zhang had initially raised the alarm. This wasn’t a unique case.
When she was fired from Facebook after more than two and a half years, Zhang posted a memo to Facebook’s internal team forum just before her access was removed. In the memo, excerpts of which were covered by BuzzFeed in September, Zhang chronicled her time at the company and provided examples of abuse of the Facebook platform, partially in the hopes that those left behind would make progress on these issues.
At one point in the memo, Zhang references the choices she had to make to prioritize some abuses of the platform over others, “I have made countless decisions in this vein from Iraq to Indonesia, from Italy to El Salvador. Individually the impact was likely small in each case, but the world is a vast place. Although I made the best decision I could based on the knowledge available at the time, ultimately, I was the one who made the decision not to push more or prioritize further in each case and I know that I have blood on my hands by now.”
Sophie, welcome to the show.
[00:03:19] Sophie Zhang: Thank you very much for having me. It’s a pleasure to speak to yourself and to your audience.
[00:03:22] Patrick O’Keefe: It’s my pleasure to have you. You’ve been doing a lot of big press, [chuckles] so I appreciate the opportunity to chat with you and focus on some topics that I know my audience and people who really do community moderation, trust and safety focused work will appreciate. Before we really get into it, I want to set the table a little bit, because your expertise, at least at Facebook, was inauthentic activity. That term is vague and can be interpreted in different ways. Some might take it to mean misinformation, for example. Can you talk a little bit about what inauthentic activity is or was for you and why you were hired at Facebook?
[00:03:53] Sophie Zhang: Absolutely. There might be better words, but these are the words that we used at Facebook. At Facebook, the words “inauthentic behavior” and “inauthentic activity” are used to mean the use of inauthentic accounts. That is, the account is fake, it’s hacked, it’s self-compromised, meaning you gave your credentials over to someone else, et cetera.
This is commonly conflated and confused with misinformation, but they’re actually completely separate. What I mean by that is, suppose a politician says that the moon is made out of cheese. This is a case of misinformation. I hope people can agree on that. It’s said by an account that is completely real and authentic, but it is still misinformation, just like if a robber robs a bank without the mask, he’s not hiding his identity but he’s still robbing a bank.
On the flip side, if I set up, say, 100 fake accounts and use them to say, “Cats are adorable,” this is a factual statement. It’s not misinformation and definitely not biased because cats actually are adorable, but it’s still using fake accounts. It’s inauthentic behavior, inauthentic activity, regardless of what I’m actually saying. I can say, “Cats are adorable, donate to the save-the-cats fund,” and that’s ostensibly a good sentiment, but I’m still using a bunch of fake accounts to mislead people into doing it, and so it’s not permitted. That’s what they mean by inauthentic activity and inauthentic behavior.
[00:05:18] Patrick O’Keefe: Okay, got it.
[00:05:19] Sophie Zhang: My specific job was that I was on the fake engagement team, and by fake, I mean inauthentic accounts; by engagement, I mean likes, comments, shares, et cetera. When the average person hears this, and I don’t know how average your audience is, their minds immediately go to Russia, foreign interference, trolls, et cetera.
The thing to realize is that most people are not politicians; most people have lives that revolve around things that are not politics. Most inauthentic activity is actually more just everyday people like you and me, who go on Facebook, who go on Reddit. They make a post on Reddit, it gets 50 upvotes, and they’re like, “Okay, that’s okay I guess.” Then their friend texts them, “Hey, I made a post that got on the Reddit front page, I have 10,000 upvotes.”
They look at the post: “That’s just as good as my post. Why did their post get 10,000 upvotes and I only got 50?” The promise of social media is that you have this vast audience, you can speak to the entire world, and the curse of social media is that almost no one actually gets that vast audience. The way these platforms are set up, you see what’s popular, and so necessarily, you see people who are more popular than you are, which I’m sure isn’t good for your psyche.
In the same way, if you look at the news, you see people who are more famous than you, but on the news, it isn’t your friends, et cetera. Anyway, they do things like go on Google and search “buy Reddit upvotes,” “buy Facebook likes,” and this will certainly solve the problem for them.
That’s what my job was, and this was essentially considered a spam team at Facebook. By spam team, I mean it was focused on activity that was high volume and not that impactful individually, and focused on using automated remedies, machine learning, et cetera. The political work that has attracted a lot of attention, I was essentially doing in my spare time. Starting in mid-2019, it was essentially a second job I was doing for free.
There are actually people at Facebook who work on this, but they’re under a separate structure. These are, for instance, the intelligence teams under security and integrity investigations. They’re in the same organization as, say, the people who handle malware or espionage, the people you go to when foreign spies are watching your account, that sort of thing.
To the average layperson, this might seem extraordinarily similar, but I’m going to use an analogy now. It’s like the difference between a policeman for a tiny village in Wyoming, sorry Wyoming, and the FBI. These are both law enforcement and the average person may think of them as similar, but the FBI agent might get very annoyed if you compare him to a tiny Wyoming cop. He’s like, “They’re just a Keystone cop, I’m FBI.”
It’s different in terms of scope, prestige, importance, access, training, et cetera. In this analogy, I was the local policewoman who kept finding things that went up to the FBI, which had interesting internal political fallout, because on one hand, the FBI was happy that this person was doing the job for them; on the other hand, they were upset that she was doing the job for them. Are we not good enough? Why is this Keystone cop showing us up? Meanwhile, the local police chief is asking, “Why are you doing this work for the FBI? Why aren’t you focused on local crime?” That sort of thing. I hope this is making sense.
[00:08:33] Patrick O’Keefe: It is. This behavior that you uncovered, with Facebook pages essentially being used as fake profiles. What you discovered was not only that they were engaging in these tactics, but that they were doing it pretty obviously, pretty in your face. You told The Guardian, “What we have seen is that multiple national presidents believe this activity is sufficiently valuable for their autocratic ambitions that they feel the need to do it so blatantly that they aren’t even bothering to hide.”
How obvious did they make it? How easy was it to sort of recognize what they were doing?
[00:09:05] Sophie Zhang: It was exceedingly obvious internally at Facebook. What I mean by that is this. If you have a single person controlling a network of fake accounts, it takes effort to find out who’s behind the fake accounts because you have to look at infrastructure details; I’m not going to get into the specifics of it. Anyway, if you have a single person who is controlling 100 pages using the actual tools, it’s already recorded who’s controlling all these pages.
A smart person would set up fake accounts to do this for them so that they aren’t tied directly to it themselves. I would say a stupid person, but you don’t become a national president by being stupid. A blatant person would have official entities like their own employees or their own political party accounts doing this without any obfuscation. Because if you are a page administrator for a national president, that by itself is fine: you have access, you can make statements on his behalf. But if, in your spare time, you also administer hundreds of pages pretending to be real people, which you use to like and say nice things about the national president on his page, this is exceedingly obvious on Facebook’s back end. It’s not obvious to the public because we don’t show who administers pages, because of privacy considerations, et cetera. Internally at Facebook, it’s not like the argument is really hard; they’re doing all the work for us.
It’s like a criminal who signs their name in blood, saying, “Yes, I did it,” while holding up a giant glowing neon sign or something. It’s that obvious, because all of this is unusual behavior that would never happen otherwise. As for externally, I can’t say how obvious it was. I always thought that it was exceedingly obvious. That’s how I found it: I took a look and realized that something was going on. It might be that as soon as you know something is there, it seems obvious, but the trick is knowing that it’s there.
[00:10:56] Patrick O’Keefe: It’s interesting how obvious it is from Facebook’s side to piece a lot of these things together because when you describe it, obviously, the average small community doesn’t have the bandwidth that Facebook does or the tools or the behind-the-scenes data. We have simple things that allow us to see sometimes when people are posting from the same place or when they use similar language, or when they share the same link, or when they do the same things.
These obvious things would often be something that a smaller community would take action against. In this case, it feels like those folks either don’t think Facebook will act, or they’re almost daring it to happen, which is an interesting thing. There was a video accompanying The Guardian report that showed a Facebook post from an independent Azerbaijani news site, and of the top 301 comments, 98% of them were from Facebook pages meant to look like profiles, meant to look like real people.
As a single post in the breadth of Facebook, that’s not a big thing. But for the news outlet, if they were to dig into their comments, as they might, it would be obvious to them. It’s not even to say it’s a small amount of content, because in a 90-day period in 2019, the Azerbaijan effort created around 2.1 million comments that were posted to the Facebook pages of opposition leaders and independent media outlets, comments that would generally praise the country’s autocratic leader and his party. They would call his critics traitors and really try to manipulate public opinion on these subjects. In the breadth of Facebook, maybe that’s not a massive amount of volume, but on its own, to most websites, communities, and platforms, that’s a pretty meaningful amount of content.
[00:12:30] Sophie Zhang: Absolutely. The sheer scale of the activity in Azerbaijan was really what got me, because it takes effort to write so many individually unique comments. It can be easy to write a single comment on a subject, but to write hundreds and thousands of comments every day on the same thing over and over again, that gets exhausting, that gets tiring, et cetera.
I never quite understood why the press in Azerbaijan didn’t pick this up and report on it. Maybe the domestic opposition got used to it. It wasn’t just the domestic opposition, it was, for instance, BBC Azerbaijan that also faced this treatment. It always surprised me that they didn’t just notice it and have the BBC write a news article on it. As to why that is, I can’t say.
It might be a combination of learned helplessness; maybe they didn’t want to rock the boat too much with the government, maybe they didn’t think that Facebook would do anything. I don’t know. Maybe they didn’t even notice because they got so used to it. It seems very obvious to me, but that’s because I know what to look for. You did say that this is a small problem within Facebook, but I’m going to give you a number that was very shocking to me.
This Azerbaijan network comprised 3% of all comments made by pages on other pages throughout the entire world. The entire world, civic or non-civic, political or apolitical, and Azerbaijan is, of course, a tiny country. Somewhere at Facebook, I’m sure there was a team whose job was to make page activity numbers go up, and they were congratulating themselves on the comment numbers. Then they were panicking after the takedown in October: “Why did our numbers suddenly go down? Oh, it went back up the next day again.”
[00:14:06] Patrick O’Keefe: That is an outsized contribution from Azerbaijan. Our podcast right now, I’m looking at our analytics, not live but over the last week, and we have this spike of Chinese activity. It’s an unnatural level of volume for the podcast, let’s say, for the number of people who usually listen. I can basically segment it out. I see this big thing, and China isn’t usually 90% of our podcast audience, [chuckles] so I noticed it because that’s not usually how much that country’s listening. It’s the same thing here. 3%, that’s a lot of content.
[00:14:37] Sophie Zhang: And that’s just this one specific network, political or not political. Political activity is only like less than 1% of activity on Facebook, and Azerbaijan is, of course, much less than 1% of the world’s population. Sorry to Azerbaijan. [laughs] This was exceedingly disproportionate, and the fact that this was actually percentage points in terms of all activity on Facebook, that always got to me as I realized just how big this was.
[00:15:04] Patrick O’Keefe: Let’s take a break to talk about our generous sponsor Vanilla.
Vanilla provides a one-stop-shop solution that gives community leaders all the tools they need to create a thriving community. Engagement tools like ideation and gamification promote vibrant discussion and powerful moderation tools allow admins to stay on top of conversations and keep things on track. All of these features are available out of the box, and come with best-in-class technical and community support from Vanilla’s Success Team. Vanilla is trusted by King, Acer, Qualtrics, and many more leading brands. Visit vanillaforums.com.
Now, you mentioned earlier spotting people based on them managing the same pages, which is just obvious, in-your-face, blatant. I was just curious. I think you said you don’t want to get into the inner workings too much, but what are some of the other ways, if you could talk about them just at a high level, that you identify networks, right?
You’ve got the commonality of the page managers. On the smaller community side, sometimes we look at IP addresses, but that probably doesn’t make sense for Facebook. Are there certain tools that you have, or comparing posts, comparing language, comparing what people share, that help to identify, or were helpful in identifying, spam or these efforts?
[00:16:08] Sophie Zhang: I’m not going to get into the specific details of what I did for a simple reason, which is that the governments of Honduras, Azerbaijan, et cetera, are perfectly capable of listening to your podcast as well. That’s part of the difficulty in security, and why a lot of this is kept private, because if you openly talk about how you find things, the bad people will not do that anymore.
Commonality finders are something that Facebook definitely uses, but they can also run into problems. For instance, one example that I remember because it was so funny, this was before my time. Like you said, it can be suspicious if everyone is saying the same thing at the same time, but there can also be completely legitimate reasons for them saying the same thing at the same time. For instance, a few years before I joined, Facebook accidentally blocked people when they were saying Happy Thanksgiving. Because, “Oh my God, everyone’s saying Happy Thanksgiving, there has to be something weird going on. Oh, wait.”
This was an automated system, of course, and they looked at it and they were like, “Wait, this doesn’t make sense,” because that’s what happens with automated systems. They won’t be perfect. Humans won’t be perfect either; mistakes always happen. At a company the size of Facebook, most enforcement is automated. That’s a lot of mistakes, just volume-wise, and it’s easy to take them out of context. Facebook was probably relieved that people weren’t like, “Facebook is canceling Thanksgiving now.”
[00:17:31] Patrick O’Keefe: Right. There was a statement that Facebook issued to The Guardian in the piece, and there was something in it that caught my eye. It was pretty much what you’d expect: “We aggressively go after abuse around the world and have specialized teams focused on this work.” The sentence that caught my eye was, “As a result, we’ve taken down more than 100 networks of coordinated inauthentic behavior.” Just as an outsider, 100 networks over Facebook’s history, that doesn’t sound like a lot. What does that number say to you?
[00:17:56] Sophie Zhang: Talking about it more broadly: Facebook has made the decision that when they take down networks for coordinated inauthentic behavior, they publicize it, they’re transparent about it. They make announcements about it in the press. This has a side effect that it’s very hard to take things down for coordinated inauthentic behavior, because Facebook wants to be cautious about it, because Facebook wants to be careful and slow and not accidentally point the finger at the wrong person, because if you say X did it and it was actually Y, that’s not a good result.
The analogy I’m going to use is that it’s like trying to get something prosecuted as a felony versus a misdemeanor charge, because it can create incentives to take things down for reasons that aren’t coordinated inauthentic behavior. At Facebook, I found something like three dozen cases in different countries throughout the world, and most of them were taken down for reasons that weren’t coordinated inauthentic behavior.
They were taken down for inauthentic behavior, for authenticity reasons, et cetera, because it was much easier to do that. A coordinated inauthentic behavior takedown took me shouting at them for almost a year before they started investigating, and then another few months for them to finish the investigation. The 100 number they provide is really the tip of the iceberg as a result, because, I mean, the distinction is often pretty arbitrary.
The analogy I’m going to use is that, for instance, jaywalking is illegal in the United States, but the police don’t arrest everyone for jaywalking. There are plenty of people who technically jaywalk but aren’t arrested; rather, the police are very selective about who they arrest for jaywalking. They probably won’t arrest you unless maybe you jaywalk and cause a giant ten-car pileup in a traffic jam, then they might arrest you.
[00:19:33] Patrick O’Keefe: There might be someone on this podcast right now who technically jaywalks fairly regularly. I’m not going to say who, it might be me.
You just spoke a little bit to how Facebook publicizes some of these cases. To me, that speaks to prioritization, what they choose to do, what they choose to talk about. Another one of the key insights I picked up from what you’ve shared is about how they prioritize what they will take action on. For example, when you flagged the Honduras network of fake pages, according to the Guardian report, “It quickly became clear that no one was interested in taking responsibility for policing the abuses of the president of a poor nation, which has 4.5 million Facebook users.” What motivates Facebook to take action in these cases?
[00:20:15] Sophie Zhang: I think it’s important to remember that at the end of the day, Facebook is a company. Its goal is to make money. It’s not focused on saving the world or fixing the product. I think it’s important to be cynically realistic about the matter. What I mean is that we don’t expect Philip Morris to pay for its consumers’ care when they get lung cancer. We don’t expect Bank of America to keep the world financial system from crashing.
Because at the end of the day, Facebook is just another company. To the extent it cares about countries like Honduras, it’s because, A, this might end up in the press, cause bad publicity, and degrade our ability to make money, and B, because the people who work at Facebook are humans and need to sleep at night. How they’re going to achieve that is, of course, up to them.
One distinction that I want to draw is that most of Facebook’s investigations and work on coordinated inauthentic behavior come in response to outside reports. What I mean by that is NGOs doing investigations, news organizations giving reports, opposition groups complaining, et cetera. When there is an outside figure feeding this to Facebook, that’s someone outside the company who can put pressure on Facebook, who can say, “Well, if you’re not going to do anything about this, we’re going to go to the New York Times and tell them you don’t care about our country. What do you think about that?” Then suddenly, Facebook will decide to get their act together.
In Honduras, in Azerbaijan, in other cases, I was looking proactively for this. I was not looking in response to any outside reports. I was looking throughout the entire world for bad activity, and this is what jumped out, and I looked more into it and it was very bad. My loyalty was theoretically to the company. Because there wasn’t anyone outside the company to put pressure on it, I think that led to a situation in which the company didn’t have strong incentives to act.
The argument that I always used internally was that this was so obvious that people would notice sooner or later, and Facebook has so many leaks that it would get out sooner or later. If it was publicized that Facebook knew about this and sat on it, it would be absolutely awful for Facebook. They would get killed in the press.
Of course, I was the one who leaked it. It became a self-fulfilling prophecy, but I didn’t know that at the time. Ultimately, I think what was going on is that Facebook didn’t have any strong incentives to fix it, except the goodness of their hearts and the prospect of it maybe eventually being noticed. The thing about preventative work is that people often don’t care until it becomes a crisis, by which point it’s too late.
[00:22:48] Patrick O’Keefe: The damage has been done. Perhaps the most interesting backwards thing that I learned from reading the coverage around your experiences is how there is apparently no meaningful separation between the people who decide how to apply these policies and the people who work to maintain good rapport with government officials. Oftentimes, some of these decisions are made with the thought that, “We don’t want to disappoint this person. We don’t want to anger this person, we’re trying to be friendly with them.” Can you talk about that dynamic a little bit?
[00:23:16] Sophie Zhang: Absolutely. At Facebook, I guess this was more just taken for granted, I feel, because it’s just the way of doing things. I do want to highlight that it wasn’t something that was brought up that often; rather, it was more like the elephant in the room that everyone knew about but didn’t mention. Political considerations were at play, but it wasn’t publicized and often not even mentioned in discussions. The examples mentioned in The Guardian were very rare cases in which it was directly mentioned.
I’m going to use an analogy that I hope you don’t find offensive. As a podcast creator, for instance, your goal is ostensibly only to tell the truth to your audience, to tell people what’s going on, et cetera, but you also care about how many people listen to your podcast. You also care about getting attention and having episodes that draw more listeners, et cetera, but you don’t necessarily say that to your audience or the people you try to recruit to your audience. You say, “This is a good way to get the message out. We need to tell the truth,” et cetera. I hope this isn’t too offensive or something.
[00:24:15] Patrick O’Keefe: No, no, it’s not offensive, not offensive. I would love more and more people to listen to the show. Why not?
[00:24:20] Sophie Zhang: That’s the analogy I’d use because, ultimately, people didn’t like to talk about it. There were definitely a lot of cases in which it was implied or clear that that’s what was going on. I want to be clear that good relationships with governments weren’t the only consideration, because, of course, Facebook made enemies of the governments of Honduras and Azerbaijan, essentially on my behalf, which highlights both the influence and power of Facebook, that they’re able to make enemies of two sovereign governments and oppose them, small governments admittedly, and also the independence, that they’re willing to do so. At the same time, larger countries like India were more of a priority to Facebook. India, of course, has a very large population, more than one billion people, and it’s quite important to Facebook and to the world in general. This meant that cases in India got more priority, but the priority was often not what I think people in those countries would prefer.
In India, when I found a network of fake accounts that were supporting a political figure, we had gotten sign-off to take it down and everything, but suddenly we realized that the account was directly tied to, and likely run by, that political figure himself. This was a member of the Indian Parliament; he himself, or someone close to him, was happily running several dozen fake accounts to support himself.
After that, suddenly everything stopped. I asked repeatedly for a decision, even if the answer was no. I asked, “Can we come to a decision? Can we say something about this? Can we do something?” The result was always silence. Even when I was already in a conversation about something else, about a different case, I would say, “By the way, are we doing this? Can we come to a decision here, just to be fair? It would look very bad if we acted here and not there. We would look like we were selectively enforcing and being biased.”
They were happy to talk about the other case, but there was always pointed silence when I brought this up, which I think is a good example of the way political considerations can come into play at Facebook. They didn’t want to say, “No, don’t do this,” because, of course, that would look awful, they would look very bad. At the same time, they didn’t want to say yes, because of, presumably, political considerations or something, and so they had implausible deniability.
I mean, sometimes when you send people a message, they just don’t read it. You send people an email, they don’t respond; maybe they didn’t see it. When this keeps going on, when you’re already in a conversation with them and they talk about A but ignore you when you bring up B, then it’s very clear that something is going on. They still have implausible deniability, that maybe everyone just didn’t hear or something.
Of course, I was very upset about this case. I mean, to me it made no sense that the politician being tied to a network of fake accounts was a reason to stop. For me, it was more reason to take action on this. If he complained, what was he going to do? Complain to the press, “Hey, Facebook took down my fake accounts”? He would get laughed out of the [inaudible].
[00:27:13] Patrick O’Keefe: [laughs] Right.
[00:27:14] Sophie Zhang: Ultimately, I think that regardless of whether he could complain or not, it would damage relationships with that political figure and his political party. I think that’s the consideration that Facebook made. On this specific case, Facebook gave a lot of statements; they changed the story like three or four times, I think. You can see the details in The Guardian article on India.
Eventually, they said that half a year after I raised this, they took it down without telling me. Which I can’t disprove because, I mean, they said that they did it without telling me, so how can I say it didn’t happen? I do want to highlight that even if you believe everything Facebook says, they still waited half a year to do this.
[00:27:52] Patrick O’Keefe: When you find yourself in a position where someone doesn’t want to say yes and they don’t want to say no, the discussion just continues, I assume, and no action gets taken. That’s not an uncommon thing, even with smaller communities and organizations. I’ve moderated and worked in communities, including B2B communities, where they had members that they liked more than others, or they had influential members where, yes, they acknowledged, “This was a policy violation.”
It’s almost like, if you took some action, that’s okay. You can take some action against it, but don’t make a big deal out of it. Also, we don’t really want to know about it, we’re not going to give you an answer. At the end of the day, we don’t want responsibility for this. That’s incredibly frustrating, and it leads to cases where no action gets taken for six months or for years. By the time you do take action, in some cases, I assume it’s moot. I assume a lot of the damage has already been done and those people have already been successful in achieving whatever their ends were for that manipulation.
[00:28:45] Sophie Zhang: Absolutely, and not just that, but in Honduras and Azerbaijan, they came back immediately afterwards and did it again, and Facebook didn’t stop them. I mean, it’s still going on in Azerbaijan. The analogy I’m going to use is this: suppose the punishment for robbing a bank is that you have your bank robbery tools confiscated, and there’s a press release, “Hey, this person robbed the bank, they shouldn’t do it.”
Someone robs a bank; their tools are confiscated, so they use the money to buy more bank robbery tools and rob the bank again. [laughs] This seems like an absurd example, but it’s what’s going on at Facebook. The idea of publicizing this is to embarrass people. But in Honduras, the president sent soldiers into the streets to shoot civilian protesters in 2019, after the police went on strike and refused to.
His brother was sentenced to jail by an American court for helping him smuggle drugs and take bribes from El Chapo. This is a man who’s incapable of embarrassment. In Azerbaijan, in 2013, they accidentally released election results the day before the actual election, true story, which was shocking. Compared to that, what is this going to do to them?
[00:29:54] Patrick O’Keefe: Yes, very true. When we have this conversation about these cases, people are going to think about inaction, not taking action against something. But there are examples where the publicity of taking action is the reason that one might want to take action. One of the benefits of doing so is that they can draw attention to the fact that they took action against this person, and make it look like they’re taking action against both sides, or give that picture.
There was a case in 2019 where some Facebook staff were talking about drawing attention to the fact that pages affiliated with the opposition leader in the Philippines were benefiting from some fake engagement, even though it wasn’t clear if the leader and their people were actually involved. They were considering this because Facebook was being criticized by the president, Rodrigo Duterte, and announcing action taken against his opposition could be a mea culpa.
It’s like, he’s unhappy with Facebook, so if we publicize that we took action against someone he doesn’t like, then obviously that might make things better, which is, suffice to say, not the way to apply policy. You successfully argued at the time that it could just be Duterte or his supporters purchasing fake engagement in order to incriminate this opposing politician.
[00:31:01] Sophie Zhang: Yes. That’s part of the difficulty with this policing: you know who’s benefiting, but you don’t know who’s responsible. I’m going to use an analogy. Suppose that tomorrow, you, Patrick O’Keefe, get a package from Amazon filled with counterfeit money delivered to your doorstep. Obviously, counterfeiting money is illegal. Using counterfeit money knowingly is illegal, but it’s not illegal to accidentally use it or accidentally receive it or something.
The question is, who is responsible for this? The average person might assume that you’re responsible, but there are lots of explanations. Maybe you had a friend who stupidly decided that this would be a great gift for you. Maybe the counterfeiters got the wrong address. Maybe the counterfeiters shipped counterfeit money to a bunch of random addresses to cover their tracks.
So that when they were pinned down, they could say, “No, it’s not us. These people also got it,” and they weren’t responsible for it, they’re just shipping it to random addresses. Maybe you had a rival at a different podcast who did this and then went to the police and said, “Look, this podcaster is a counterfeiting kingpin.” That’s the difficulty I want to draw attention to: a lot of the time, you don’t know who’s responsible. The only people who can know are Amazon, in this analogy, or whoever the intermediary is, and they’re not telling you.
I worked on a lot of cases globally, but as a result, I’ve tried very hard to only talk about the specific details of cases in which we were certain who was involved. Because otherwise, if I say, “Patrick O’Keefe received some counterfeit money and spent it,” this can mean a lot of things. Maybe you got a $1 bill in change from a store, it was counterfeit, you didn’t know, and you spent it. Or maybe you are a counterfeiting kingpin, but people will assume you did something bad if I say that. I hope this is making sense, and I’m sorry for using you as an example.
[00:32:45] Patrick O’Keefe: No no. It’s fine. If I got a box of money, I should turn that in, but if I got it from change then, yes, that’s totally accidental. Jaywalking maybe, no counterfeit bills here.
One of the things people say about Facebook when they’re defending Facebook is, “It’s so big. It’s too big to manage. It’s too big to moderate successfully. It’s just out of control. It’s got billions of people. While we’re talking, there were 15 million pieces of content posted, or whatever, some massive number. How can they do it?”
What I take away from talking to you is that, on one hand, it’s hard, because there is so much data, because you don’t know the intentions or the motivations of everyone. While this person might appear to be benefiting from abuse, it doesn’t mean that they’re perpetrating the abuse. You really have to go the extra mile and dig into the data and make sure, before you assign guilt to someone, that you actually have proof that they are the ones causing this issue, not just benefiting from it.
On the other hand, the other thing I take away is that Facebook could do more in areas where they know people are guilty of something, not in a court of law, but guilty of an action. They just choose not to because of prioritization, PR, or some other controlling factor, where they know this person is violating their policies, or they know that person is exploiting a loophole, but it just doesn’t make sense for them to take action because there’s no immediate threat to them or their business at the time. That’s both sides of it, right? It is hard, but it could be done better. Is that a fair approximation?
[00:34:11] Sophie Zhang: Absolutely. The problem is hard, but the hard part is finding who’s responsible, finding the cases in which people are doing things that are wrong. That’s why I want to highlight my experience, because I did the hard part for them. I found the people who were guilty and responsible, and they weren’t even trying to hide. It still took more than a year for Facebook to act in Azerbaijan and almost a year for them to act in Honduras. That extra time is on them.
It’s like if a police officer says, “Policing is hard. It takes time to make sure we don’t accidentally arrest the wrong person.” If someone signed their name in blood on the victim and the police took a year to arrest them, no matter how hard you say policing is, it’s still very clear that for whatever reason, they just didn’t want to arrest this person, or they didn’t care about arresting him. That’s part of the issue at play here with inauthentic behavior. This is going to seem very obvious as soon as I say it, but it’s an important point: the point of inauthentic behavior is to not be seen. You run fake accounts that are meant to appear to be real accounts. The better you are at not being seen, the fewer people will see you. When the average person goes out and looks for people who do not want to be seen, they find people who are terrible at not being seen, or they find people who are pretending to hide when they actually want to be seen, who wear shoddy disguises because they want to be caught.
Facebook naturally prioritizes based on public attention. For other cases, this might make sense. Like for misinformation, if someone says that the moon is made out of cheese and no one cares, that’s not really that bad. If a politician says the moon is made out of cheese and causes an angry mob to storm Cape Canaveral to fly to the moon and eat the cheese, that would get a lot of press attention and it is very bad.
For inauthentic activity, what gets attention isn’t necessarily what’s actually the worst, and that creates perverse incentives, because Facebook naturally focuses on what gets attention. For inauthentic activity, what gets attention often isn’t bad at all, or it’s people pretending to be fake when they’re actually real.
This happened in Britain, actually. There was the “Boris bots” case, in which everyone was worried about potential inauthentic activity in the British elections. They turned out to be actual Britons who, for some reason, thought it would be funny to pretend to be badly-disguised bots, which does raise philosophical questions: is it fake for you to pretend to be badly disguised as yourself?
I think that’s a good example because of how it played out. I originally investigated it twice at the start and then stopped paying attention, but Facebook urgently investigated it something like six more times because it kept showing up in the press. I suppose Facebook didn’t want to make a statement because no one would believe them.
I think it highlights some of the perverse incentives at play, that Facebook paid so much attention to this when it wasn’t actually bots or fake accounts or foreign interference, as people were worried about, at all. I do also want to highlight the statement that Facebook usually gives in response to this, which is that they don’t answer the question itself. They say, “We do X, Y, Z. We do blah, blah, blah.”
I’m going to use an analogy. Suppose your spouse asks you, “Patrick, did you do the dishes last night?” You respond by saying, “I always prioritize doing the dishes. I work hard on doing the dishes every time so that we can have clean dishes. Food left on dishes is disgusting. I make sure that we don’t [inaudible] someone [inaudible] from not washing our dishes.” That might all be true but you did not actually answer the question which is, “Did you do the dishes last night?”
That’s the typical response that Facebook gives, and if you look at the article, that’s essentially what they’re doing. They’re not denying what I’m saying; they can’t deny what I’m saying because they know I’m telling the truth. At the same time, they want to imply that I’m not. They have to give an answer, but they can’t say, “Yes, this is all right,” and so they give an extraordinarily long answer that doesn’t actually answer the question.
[00:38:03] Patrick O’Keefe: Jaywalking, counterfeit, dishes, [laughs] I’ve got some work to do around here.
[00:38:09] Sophie Zhang: I am sorry for using you as an example.
[00:38:11] Patrick O’Keefe: Not at all. Sophie, it’s been a pleasure to have you on. Thank you for exposing this. Thank you for taking the time to break it down for us on the show.
[00:38:18] Sophie Zhang: Absolutely. I hope you found this helpful and informative. Let me know if you have anymore questions or things I can help you with.
[00:38:26] Patrick O’Keefe: We’ve been talking with Sophie Zhang, former data scientist at Facebook. Follow her on Twitter @szhang_ds. That’s S Z H A N G underscore D S.
For the transcript from this episode plus highlights and links that we mentioned, please visit communitysignal.com. Community Signal is produced by Karn Broad and Carol Benovic-Bradley is our editorial lead. Bye for now.
[music]
If you have any thoughts on this episode that you’d like to share, please leave me a comment, send me an email or a tweet. If you enjoy the show, we would be so grateful if you spread the word and supported Community Signal on Patreon.