But what would happen if you ran your moderation tools against URLs shared in link-in-bio services used in your community? Or what if you learned that folks on your platform were using specific codewords to circumvent word detection? Or posting screenshots of misinformation rather than using plain-text? People are getting creative with how they share all types of information online, misinformation included. Are our moderation strategies keeping up?
So, where do we start? How can we detect misinformation if people are using codewords like pizza or Moana to get around our tools and teams? There may not be precise solutions here just yet, but Rachel and Joseph both offer ideas to help us down the right path, which starts with deciding that the engagement that brews around misinformation is not safe for the long-term health of your community.
If you enjoy our show, please know that it’s only possible with the generous support of our sponsor: Vanilla, a one-stop shop for online community.
[00:00:04] Announcer: You’re listening to Community Signal, the podcast for online community professionals. Sponsored by Vanilla, a one-stop-shop for online community. Tweet with @communitysignal as you listen. Here’s your host, Patrick O’Keefe.
[music]
[00:00:25] Patrick O’Keefe: Hello, and thank you for joining me. Today, we’re talking with Joseph Schafer and Rachel Moran. Joseph and Rachel are part of a team at the University of Washington’s Center for an Informed Public that has been researching how anti-vaccine advocates are circumventing content moderation efforts on Facebook, Instagram, Twitter, and large social networks. They’re going to talk about what they’re seeing so that we can learn what we should be watching for in our own spaces.
A big thanks to our Patreon supporters including Carol Benovic-Bradley, Marjorie Anderson, and Aaron H. If you’d like to join them, please visit communitysignal.com/innercircle.
Joseph Schafer is a fourth-year undergraduate student at the University of Washington studying computer science and ethics. He has also worked as a research assistant for the Center for an Informed Public since January of 2020, studying various forms of online misinformation and disinformation. Joseph hopes to pursue graduate school in information science in order to understand how misinformation takes advantage of recently developed socio-technical systems like social media to influence our society.
Rachel Moran is a postdoctoral fellow at the University of Washington’s Center for an Informed Public. Moran received her doctoral degree from the Annenberg School for Communication and Journalism at the University of Southern California. Her research explores the role of trust in digital information environments and is particularly concerned with how trust is implicated in the spread of mis- and dis-information. Her research has been published in academic journals and been covered by the New York Times, Fox, Vice, and others. She was also an affiliate fellow at George Washington University’s Institute for Data Democracy and Politics and UNC-Chapel Hill’s Center for Information Technology and Public Life.
Joseph, welcome to the show.
[00:01:58] Joseph Schafer: Thank you. It’s nice to be here.
[00:02:00] Patrick O’Keefe: Rachel, welcome.
[00:02:02] Rachel Moran: Thanks for having us.
[00:02:03] Patrick O’Keefe: It’s my pleasure. I want to talk about and walk through the circumvention efforts that you’ve seen. Let’s start at the most basic level, with something that has been seen in online communities since the dawn of time, that you refer to as lexical variation. A really simple example is when a word or term is adjusted by swapping in a different character or multiple characters in order to get around the censoring of that word, like the at symbol in place of the a in the word vaccine.
It’s really gone further than that. It’s gone further than just simple replacement into coded language. Can you provide some examples of what you’ve seen, Rachel?
[00:02:40] Rachel Moran: Sure. You may have read there was an NBC article a few weeks ago that was saying how they found this big group of people who were using dancing or other kinds of verbs to mean getting the vaccine. Complete replacement of the word. You wouldn’t know that that meant vaccination unless you were a member of that community and had that institutional knowledge that comes with being a member of that community. We see it on a spectrum. We see simple variations.
Like you said, we see vaccine where the a is an @ sign, or sometimes now we see people just using that vaccine emoji rather than using the word at all. They believe that if they put that instead of spelling out vaccine as it’s spelled in English, that they’ll avoid being caught up in the algorithmic moderation that happens on platforms.
It goes from the simple to the extreme of just replacing the word entirely, but that comes with a trade-off of having to be in the group to understand that dancing means taking a vaccine, or whatever verb means taking a vaccine.
[00:03:41] Patrick O’Keefe: Yes, the NBC news story mentioned usage of the word pizza in place of Pfizer, Moana in place of Moderna. It’s incredibly, I don’t want to call it clever, but it’s a good choice if you’re trying to get around things because just speaking from the perspective of someone who’s been moderating since ’98, you can’t filter the word pizza. It’s a smart choice to go for the word pizza.
It’s an incredibly difficult thing to detect when, as you say, the word isn’t simply substituted, but it’s replaced with some other form of lexicon. They know in that group they’re not ever going to talk about pizza with pepperoni on it, and they’re always going to talk about pizza that is Pfizer.
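As a rough illustration of the distinction Patrick and Rachel are drawing, here is a minimal sketch, in Python, of a word filter that normalizes common character substitutions before matching. The substitution map and watchlist are hypothetical examples, not anything the platforms or the researchers use; the point is that it catches “v@ccine” but has no way to catch a codeword like “pizza.”

```python
# Minimal sketch: normalize common character swaps before matching a watchlist.
# The substitution map and blocked terms are hypothetical examples.
SUBSTITUTIONS = str.maketrans({"@": "a", "4": "a", "3": "e", "1": "i", "!": "i", "0": "o", "$": "s"})
BLOCKED_TERMS = {"vaccine"}

def matches_watchlist(text: str) -> bool:
    # Lowercase and undo one-for-one character swaps, then look for blocked terms.
    normalized = text.lower().translate(SUBSTITUTIONS)
    return any(term in normalized for term in BLOCKED_TERMS)

print(matches_watchlist("Get the v@ccine facts here"))    # True: simple lexical variation
print(matches_watchlist("Who wants pizza this weekend?")) # False: codewords pass right through
```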
[00:04:21] Rachel Moran: It’s difficult as well because it has this almost secondary impact of building community around it, that you feel like you’re in the in-crowd because you know this language and you know how to use it. That’s something we all look for, we look for a sense of community. It has that unforeseen impact of being able to cement these communities and people’s attachments to one another because they’re all involved in using this coded language.
[00:04:47] Patrick O’Keefe: Many of these larger social networks have blocked direct links to known spreaders of COVID misinformation, but as you pointed out in your response, that doesn’t mean that links aren’t being shared. How are COVID misinformation groups getting around those link blocks, Joseph?
[00:05:04] Joseph Schafer: There’s a variety of ways that you can use to get around that. One might be, for example, using a screenshot of an article or something that is vaccine misinformation. Rather than putting in the text of the misinformation directly, or the text of a link, it could be a screenshot of a link to a problematic article or website.
There are also various websites like URL shorteners or URL compilers, or even just a Word document posted online or something like that, where what gets shared is just a link to a Word document that looks fine, but if you go into that Word document, it’s filled with other links to sites that these major platforms are moderating and blocking, which does present a real challenge. Again, you’re not going to block every URL shortener or every URL compiler, and there are good reasons to use those, but because the platforms have to unwind those things, it’s harder for them to see that it’s actually being used to spread misinformation.
[00:06:06] Rachel Moran: We find it’s amazing how every tool out there can be used to spread misinformation in a way. We’ll find Google docs that are full of alien conspiracy theories or anti-vaccination “evidence”. It’s almost like you can’t necessarily put the blame at the feet of these big companies, although they’re not entirely blameless in this. People are just very creative, and if they want to spread information, they will find whatever way possible to do that, whether it’s through one of these link shorteners or a compiler or even a Google doc.
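To make Joseph’s point about unwinding links concrete, here is a minimal sketch, assuming the Python `requests` library and a hypothetical domain blocklist, of resolving a shortened or nested link to its final destination before checking it. It only handles HTTP redirect chains; links buried inside a Google Doc, a Word file, or a screenshot would still slip past something like this.

```python
# Minimal sketch: follow redirect chains (e.g., bit.ly -> destination) and check
# the final domain against a blocklist. The blocklist entry is a placeholder.
from urllib.parse import urlparse
import requests

BLOCKED_DOMAINS = {"known-misinfo-site.example"}  # hypothetical

def resolve_final_url(url: str, timeout: float = 5.0) -> str:
    # Follow the full redirect chain without downloading page bodies.
    response = requests.head(url, allow_redirects=True, timeout=timeout)
    return response.url

def link_is_blocked(url: str) -> bool:
    domain = urlparse(resolve_final_url(url)).netloc.lower()
    return domain.removeprefix("www.") in BLOCKED_DOMAINS
```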
[00:06:40] Patrick O’Keefe: One of the things I found most fascinating about the document that you put out was the mention of Linktree and of link-in-bio type sites, which is right in front of us, it makes sense, but I hadn’t actually heard that before or seen a live example of it before the screenshot in the rapid response. I’m sure most listeners are familiar, but if you’re not, it’s the kind of website you link to where, when you open it, there’s a set of links within it. When you’re allowed one link on Twitter, they might have a link to a Linktree or a similar service where you go and it’s like, “Here’s my Twitter, here’s my Facebook, here’s my Instagram, here’s my merch.” A lot of influencers use these services.
What you found is that that was actually a pretty popular way in some groups to get around link blocks. Again, Instagram isn’t necessarily going to block Linktree or link-in-bio sites, not yet, because they’re so core to how people use the product, since Instagram doesn’t allow links in posts anyway. It’s actually a pretty popular way to circumvent those direct blocks.
[00:07:32] Rachel Moran: Like Joe said, these link compilers are really useful tools. They’re incredible tools for influencers, and even academics use them all the time to be able to say, “Click on this link and you can see everything that I’ve been doing recently, or it will send you to my Amazon affiliate store.” They are very useful tools, but every tool, as we said, ends up being used for misinformation.
Instagram, particularly, is an interesting space, because they’re a walled garden; they don’t let you link outside unless you have a certain amount of followers. If you have 10,000 followers, you can do that swipe-up feature, but for your everyday person or your smaller-level influencer, you only really have this link in bio to use, so they find a way to put misinformation in there.
Again, it’s not necessarily the fault of these companies, but it’s everyone’s job now to strengthen their product against misinformation. It really means that we all have to be cognizant of these different routes that people are taking to circumvent the walled garden and use your tool as a way to do that.
[00:08:30] Patrick O’Keefe: We can criticize those big companies here on Community Signal, that’s what we do. Search my Twitter feed for Facebook. Goodness, gracious. There’s a reason I turned down a job there; now they wouldn’t want me. One other thing about the Linktree thing, I was curious, have you talked to them, or have you seen any sort of response from them about this issue?
[00:08:47] Rachel Moran: We actually have a meeting with them coming up, so we’re looking forward to talking to them. It’ll be interesting to see, obviously, this is not the totality of their business, but how much of this is affecting them as well, and what they are trying to do about it. You can obviously build community guidelines, and they have community guidelines on their site, so you’re not allowed to spread misinformation, but how you operationalize that is something that we’re all grappling with, so we’re looking forward to speaking to them and speaking about that issue.
[00:09:11] Joseph Schafer: I just want to add that these problems are nested, too. Let’s say that Instagram is blocking links to some known anti-vax content, so somebody could theoretically put a link in their bio to that. They could have that nested, too, where you would have a Linktree linking to a link-in-bio page, linking to a Bitly.
The more steps you put in, yes, it’s going to be a little bit harder for people looking for that content to see it, but it’s also going to be way harder for anyone trying to do that job of moderation. It’s not a simple thing. It’s important work, but it’s also a very difficult problem when there are these ways of masking what that content really is from people who aren’t looking for it.
[00:09:56] Patrick O’Keefe: Let’s pause here to talk about our generous sponsor, Vanilla.
Vanilla provides a one-stop-shop solution that gives community leaders all the tools they need to create a thriving community. Engagement tools like ideation and gamification promote vibrant discussion, and powerful moderation tools allow admins to stay on top of conversations and keep things on track. All of these features are available out of the box, and come with best-in-class technical and community support from Vanilla’s Success Team. Vanilla is trusted by King, Acer, Qualtrics, and many more leading brands. Visit vanillaforums.com.
You mentioned screenshots before. Is there anything particularly unique that you see people doing with images? I think we’ve seen covering up text. I never cover up anything on my Instagram. Any post that mentions vaccinations, like when I got vaccinated, there’s a bubble at the bottom that says learn more about COVID-19, blah, blah, blah. I wonder what the CTR is on that. I feel like it’s fairly low, but who knows? Are people doing anything weird or different with images beyond covering up the keywords?
[00:10:54] Rachel Moran: Actually, Joey and I were talking about this this morning. One of the interesting things that we’re seeing is on Instagram stories. You can post, and maybe you did this when you posted your vaccination selfie or whatever it was, you can put that little sticker that says, “Let’s get vaccinated.” Then Instagram collates all of those of your friends that have put a sticker on, and it goes at the top of your story. It’s the first one that you click on if you were going to view all of your Instagram stories.
What we’re seeing people do, and it’s, again, a very smart technique, is use that sticker so they will get to the front of the queue, so to speak, when you look at it, or potentially overcome any shadow banning that they think they’ve been under. But then they’ll put a sticker over the top of that sticker, or they’re like, “Let’s not get vaccinated.” They’re not spreading the good message; they’re using the tool, or the affordance, that Instagram’s created in order to game the system in a way that elevates their content as being somewhat authoritative when it isn’t.
It’s an extremely clever use of Instagram, which is very frustrating, but we see new techniques pop up every week. We see new ways of using the screenshots or putting different emojis in front of things to block them, and the extent to which it works is negotiable. We don’t really know if it actually does anything, but the fact is, we as researchers can still see this content, so it’s still out there, so there must be something to it, or at least it slows down attempts at getting them moderated.
[00:12:16] Joseph Schafer: Another thing with images and slowing down the chances of getting moderated: if we look at Twitter, quite often what will happen is, if Twitter takes down or deletes a tweet, that same account, or a different account that is also anti-vaccine or spreading misinformation about that, will screenshot that tweet before it’s taken down and then repost it, not as text but as an image of that tweet, because running character recognition on an image is a lot more intensive. It’s not going to come up in the same search results. That tweet’s content has already been decided to be violating Twitter’s policies, and this happens on other platforms too, it’s not like Twitter is the only one, but that was where we noticed it particularly. Even though that content was violating, it then re-circulates as an image of the violating content, but as an entirely separate artifact.
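A minimal sketch of what detecting that kind of re-post might involve, assuming the pytesseract and Pillow libraries and a hypothetical list of already-removed text. As Joseph notes, running this on every uploaded image is far more expensive than matching plain text, which is exactly why screenshots are attractive to people evading takedowns.

```python
# Minimal sketch: OCR an uploaded image and compare the extracted text against
# snippets of content that was already removed. The snippet list is hypothetical.
from PIL import Image
import pytesseract

REMOVED_SNIPPETS = ["text of a previously removed tweet"]  # placeholder

def screenshot_repeats_removed_text(image_path: str) -> bool:
    # Extract whatever text the OCR engine can read, then look for known snippets.
    extracted = pytesseract.image_to_string(Image.open(image_path)).lower()
    return any(snippet.lower() in extracted for snippet in REMOVED_SNIPPETS)
```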
[00:13:06] Rachel Moran: What’s really difficult when that happens is that you will often see them tweet something along the lines of, “Why was this taken down, Twitter?” You have this blowback effect where it’s almost more credible now that it’s been taken down, because it’s evidence that they were onto something, that they were doing something right, and the man wanted to shut them down. It gets this extra level of martyrdom, making it seem more credible than it was in the beginning. There are definitely these backfire effects to moderation attempts that come when you allow that content to pop up in different ways on different platforms.
[00:13:40] Patrick O’Keefe: I’ve been the man for a long time. At 13, when I started moderating independent communities, with no money, I was the man then [chuckles] and I’m still the man now.
You mentioned something there. I think Rachel specifically mentioned this. You mentioned phrases like “they think” or “believe,” and I noticed when I was reading through your rapid response that one of the things you’re careful to say, which I find interesting, is the word believe: “Users believe that automated content moderation features respond primarily to in-post content. Posting more controversial content on the stories feature means it is less likely to be flagged by the platform.”
You refer to the phrase “folk theories,” from research done by another group, to describe these beliefs. The implicit comment there is that just because these people believe that these things circumvent moderation, or at least lead to their content being scrutinized less, it doesn’t actually mean that it’s true. Can you talk a little bit about that phenomenon? Why do these ideas spread, that if we do X, if we throw a vaccine emoji on the word vaccine, we can definitely get around the censors? How do people start to believe that?
[00:14:48] Rachel Moran: There are a couple of different parts to this, and it’s why capturing it in this folk theory framing from academia actually makes it quite useful. The first thing is that it’s not necessarily about whether it’s true, it’s about whether it’s effective. We all have different levels of algorithmic knowledge; even the people working at the platform don’t always know what the algorithm is doing. We can’t have a full idea of whether or not the technique actually works. If you post the vaccine emoji instead of the word vaccine, will your content stay up? We have no definitive evidence either way as to whether that’s true.

Instead, we look at it in terms of effectiveness, which is why folk theories make sense. Folk theories are like folk stories: they’re not necessarily true, but they do something, they’re useful. With these theories of how you avoid content moderation, we don’t know whether or not they’re true, but at the same time, the content is there, these people are still online, and these communities still exist, so it’s effective. These strategies, whatever they’re doing, are working for them.
The other side of it is this sense of community, which is, again, why the folk theory thing is interesting. Sometimes we’ll get some definitive evidence as to the reasoning behind it. For example, a few weeks ago, we were watching the Instagram story of an account that usually shares anti-vaccination content. They were showing a video of them unpacking their Trader Joe’s groceries, and they said at the beginning of the video, “I’m going to show you my Trader Joe’s groceries. I’m going to post some normal content to avoid moderation.” They gave us some particular evidence as to what their thinking was behind this. Then that goes through that telephone-rumor experience where people go, “Oh, so that’s the thing,” so they start doing it, and it ends up building within that community. Other people who share anti-vax content will then say, okay, if I post this picture of where I got my jeans, and then I post this link to this study on why vaccine shedding is a thing, that mixture means I won’t get moderated. There’s a sense of community building as to why these things spread.
[00:16:50] Patrick O’Keefe: Yes. We had Dr. Jennifer Beckett on the show recently from the University of Melbourne, and one thing that we discussed was how we think of the word toxic in one way. Communities of, let’s just throw out an extreme example, Nazis, that’s a toxic community, but who are they toxic to? They’re not toxic to each other. They’re actually quite friendly, and they’re not going to tell on each other, and they’re going to protect each other.
They’re a pretty healthy community to themselves. It’s the people outside the group that they’re toxic to. It’s similar with the way these theories spread: how people share them, who’s seeing these things, who’s propagating them. Will they tell on each other, or are they going to report that they saw this content? Well, if they have the right group, if they’re careful about how they spread that initial seed, depending on how strong that initial group is, maybe not. I’ve heard a lot of folk theories. I once had someone tell me I banned them because they use the Opera web browser.
[laughter]
That was a funny theory to me. It’s such a fascinating thing. There was a story in the New York Times about Joseph Mercola that you were quoted in, Rachel, where he talked about how he’s going to write articles on his website and then delete them after 48 hours. I just don’t know. I get the idea, I get the premise, but it’s still indexed by Google for 48 hours.
It’s still on your website for 48 hours. Your ability to be ranked or de-ranked or shadow-banned, which is such a goofy, overused phrase, is not necessarily going to be impacted by you taking it down after 48 hours. But as you noted, it’s not necessarily even about that. There’s something going on with martyrdom and how he plays to his supporters that can tie into spreading his message a little quicker and faster and with a little larger scope.
[00:18:24] Rachel Moran: Going back to what Joe was saying about screenshots, he’s using that strategy because within that blog post he wrote, saying he was going to take this content down after 48 hours, he asked his followers to download his stories and to re-share them, creating this whack-a-mole moderation effort. Then, rather than coming just from one, we call them at the CIP a superspreader or repeat offender, the same piece of content is coming from a whole host of places.
You can screenshot it and share it somewhere else. Again, very smart strategies, very good ways of building resilience to community guidelines that are ever evolving. It’s a cat-and-mouse game of trying to keep authoritative content up and take down some of this misinformation.
[00:19:08] Patrick O’Keefe: It reminds me of something that I’ve seen too, which is almost like a reverse of this strategy. It’s when someone posts something and then waits until moderators have maybe seen it, or they think it’s passed filtering, and maybe it has passed filtering of some kind, and they come back days, weeks, months, I’ve seen years later, and edit that post to insert some link or some message that may or may not be relevant. In a vaccine conversation, they could come back and add something that’s relevant to that conversation, but not something that would be appropriate within the community guidelines, but by then, most people have moved on.
However, it’s still indexed by search. It’s still being found by people through search, still being discovered as new content, even if no one responds to it or bumps it back up in any way. It’s almost the reverse long game. There are these groups that have these networks of online contributions, on forums, social media, elsewhere, that have all these accounts that make these posts and then wait, because they accrue social capital over a long period of time, and then they come back and decide when they’re going to flip the switch and start slipping things into those contributions.
[00:20:09] Rachel Moran: That’s a new one we’re going to have to start looking out for and add to our list.
[00:20:14] Patrick O’Keefe: Not everyone takes the long game. I bump across this somewhat regularly, but then again, I have managed one community, a smallish community, for 20 years, so I do see things after a couple of years, and I’ve noticed, “Oh, this edit’s here, this goes there.” It makes you build better tools. If someone edits a post, if they’re still able to edit it, because there’s such a thing as timed edits in communities, but if they’re still able to edit it after a period of time, maybe that’s a weird enough behavior that it gets flagged to a queue where you say, “Oh, Frank edited this post after one year. Maybe you should take a peek.”
There are totally legitimate reasons to edit things. People have long-term conversations in forums over years. People document challenges. I have a member who’s going through cancer. He has a thread that’s been going on for a year-plus now about his cancer and his journey through that. Maybe the first post gets updated because, obviously, his journey is changing. People take advantage of that to spread misinformation.
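A minimal sketch of the late-edit flag Patrick describes, with a hypothetical threshold. The idea is to queue the edit for human review rather than block it, since, as he says, plenty of late edits, like an ongoing journal thread, are legitimate.

```python
# Minimal sketch: flag edits made long after the original post for human review.
# The 90-day threshold is a hypothetical value, not a recommendation.
from datetime import datetime, timedelta

LATE_EDIT_THRESHOLD = timedelta(days=90)

def should_queue_for_review(posted_at: datetime, edited_at: datetime) -> bool:
    return edited_at - posted_at > LATE_EDIT_THRESHOLD

# "Frank edited this post after one year. Maybe you should take a peek."
print(should_queue_for_review(datetime(2020, 6, 1), datetime(2021, 6, 2)))  # True
```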
[00:21:08] Rachel Moran: It’s such a hard balance because you want to offer up as many affordances as possible within your product or your site to make it useful for the people that are there, to make it a safe and effective space for them, but you open the door also to people who are going to abuse that to spread misinformation or to engage in toxic behavior.
It’s always going to be a balance: how do we either build in community guidelines or curtail the use of some affordances, if we feel like they tend more towards the abusive rather than the productive, to make sure that we’re always toeing the line of having these places to convene and talk about difficult things without them being flooded with information that will undermine that.
[00:21:50] Patrick O’Keefe: Prior to working on the Virality Project, you both worked on the Election Integrity Partnership, which looked at disinformation and misinformation around the 2020 US election. Are you seeing any overlap there in the groups that are involved in election misinformation and COVID misinformation? Is there any kind of duality there? Is it too early to tell? Is there any crossover between those two groups?
[00:22:12] Joseph Schafer: It’s not exactly the same groups. There are some that were spreading misinformation about the 2020 election that are now spreading anti-vaccine content. There are also going to be people that were only spreading anti-vaccine content but didn’t touch the election, and some people that only touched the election and aren’t spreading anti-vaccine content.
There’s also the chronological issue that, not all by any means, but a decent number of the very prominent accounts that were spreading 2020 election conspiracy theories aren’t active on the platforms anymore. There are a lot that still are, but as a big example, even if @RealDonaldTrump wanted to spread anti-vaccine misinformation on Twitter, that’s not an option on that platform anymore. There are definitely some overlapping communities, but the extent of that isn’t something I would feel confident putting a number on.
[00:23:03] Rachel Moran: My role within the Election Integrity Partnership especially was more on the QAnon side, the more conspiratorial communities who were already plugged into the anti-vaccination conversation, whether through the idea that there are microchips in the vaccine or that the vaccine is all one big ploy to usher in The Great Reset. There’s a lot of conspiratorial content that goes into that anti-vaccine movement.
We do see that crossover, especially with some of the QAnon influencers. In our work we tend to rely on them a lot for examples because they’re incredibly savvy social media users. They were influencers before, or they just know these spaces like the back of their hand. They end up creating these techniques to avoid moderation because they’re such prominent users.
We’ve seen a fairly substantial number of those pop up who were spreading misinformation around the election or QAnon-adjacent content, and are now spreading anti-vaccination misinformation. In all honesty, they were spreading anti-vaccination misinformation before. [chuckles] You only have so many hours in the day and only so many Instagram stories. You focus differently on different crisis events as they pop up.
[00:24:12] Joseph Schafer: Even if the communities aren’t necessarily the same, we do also see that a lot of the techniques are the same, like using hashtags. One thing we saw was vaccine-neutral hashtags being used to spread COVID misinformation. A hashtag like #COVIDShot is neutral on its own, but then these anti-vaccine influencers will post articles or things that are very vaccine-opposed into these vaccine-neutral or pro-vaccine hashtags.
The same sorts of things would occur with QAnon or with the election, too. There was the whole #SaveTheChildren thing. At first it seemed very innocuous, “Let’s fight human trafficking,” but it was co-opted or completely taken over by QAnon influencers who were all talking about how there’s a global conspiracy to traffic children. Even if the users are not the same, a lot of times there are similar ideas behind these false theories, similar avoidance strategies, because they’re all working to get out content that they believe in, as opposed to what the platforms want there, or content that’s against the community guidelines.
[00:25:11] Patrick O’Keefe: @RealDonaldTrump has created his own circumvention tool, which is that he puts out statements, and then others carry the torch for him by screenshotting those statements and putting them into our Twitter feeds, which is its own massive headache. Building on that a little bit, I think the last thing I wanted to discuss was, well, first, obviously, moderation, trust and safety is a thing that is studied deeply on its own. It’s its own field of work with deep challenges, and new ones all the time.
That aside, in your rapid response, you mentioned that, I’m going to quote directly, “Actions taken by platforms to remove COVID-19 vaccine misinformation fail to counter the range of avoidance strategies vaccine-opposed groups deploy. We recommend that moderation efforts to remove vaccine misinformation look beyond content-based strategies to consider the accounts and communities that create and deploy moderation avoidance strategies.” Let’s talk about that. Let’s expand on that. Looking beyond content-based strategies to consider the accounts and communities. Unpack that a bit. What are the content-based strategies, and then what do you think might be helpful in shifting from those?
[00:26:11] Joseph Schafer: There’s a lot of similarity, a lot of the time, I think, to the strategies that you’re using, but with large communities around particular conspiracy theories, or disinformation, or other things that violate community guidelines, there are these norms, like what we were talking about with pizza earlier, or the whole dancing thing for the vaccine. You can’t monitor that effectively by just searching on the word vaccine; that’s not going to be an effective moderation policy.
What you need is a strong enough moderation infrastructure in place so that you can have people embedded in a lot of these different conversations and learn that this group that’s talking about dancing with Moana and pizza is actually spreading a whole bunch of very anti-vaccine content. Trying to embed in those communities and figure out what is and is not violating platform policy would be a lot more effective at actually addressing what their goals are.
[00:27:07] Rachel Moran: It’s a really difficult task. As Joe was saying, this content-based approach means that you often miss so much of the content that is out there, content that is using these techniques really effectively to evade whatever algorithmic moderation or human-driven moderation you have. But everything has a drawback. If we advocate for this more embedded approach, which would be able to capture the pizza language, or dancing, or whatever, it tends towards the more surveillance side of things, which is also difficult. It’s not something we want to advocate for as the way forward for moderation either. There’s always going to be a need for balance in one of these things.
What we found in our election-related projects, but also in this vaccine-related project, is that we track misinformation almost as if it goes across a network. There are certain people within that network where, when that information hits them, it just goes like wildfire. They have a certain amount of followers, they have a certain amount of social media savvy, they have influence within the conversation to really spread misinformation far and wide. Really, the job of moderation is to focus in on those people.
I can’t begin to think of what techniques or strategies would work to curtail that person’s ability to spread misinformation. I think that’s a complete can of worms, but approaching moderation, or putting together a moderation framework, where you identify repeat offenders would help you at least become a little bit more aware of the tactics that we were identifying, but also of how these communities work. What is the purpose of this community?
We find anti-vax influencers are not just spreading anti-vax misinformation or vaccine-opposed information. They’re also spreading stuff that should be online, whether that’s health and wellness tips or just everyday content. It’s really difficult to advocate for a strict moderation recommendation, but with this piece we’re hopefully trying to open up people’s minds to what we don’t know. We feel like we have a grasp on what the anti-vaccination content is, but given that there are all these strategies to avoid moderation, that content also avoids detection by researchers.
There’s plenty of conversation out there that we’re not privy to. I think it’s just trying to make sure we don’t overstate our knowledge when we’re trying to think through what these frameworks or strategies for moderation look like.
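One way to read the account-level framing Rachel describes is as simple bookkeeping: track violations per account over a window so moderation attention can shift from individual posts to repeat offenders. Here is a minimal sketch under hypothetical thresholds, not anything the Center for an Informed Public prescribes.

```python
# Minimal sketch: count recent violations per account so moderators can focus on
# repeat offenders rather than on one post at a time. Thresholds are hypothetical.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(days=30)
REPEAT_OFFENDER_THRESHOLD = 3

violations: dict[str, list[datetime]] = defaultdict(list)

def record_violation(account_id: str, when: datetime) -> None:
    violations[account_id].append(when)

def is_repeat_offender(account_id: str, now: datetime) -> bool:
    # An account becomes a repeat offender after enough violations in the window.
    recent = [t for t in violations[account_id] if now - t <= WINDOW]
    return len(recent) >= REPEAT_OFFENDER_THRESHOLD
```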
[00:29:19] Joseph Schafer: Also, this is a non-exhaustive list, too. We tried to find as much content using various different avoidance strategies as we could, but these are only the avoidance strategies we were able to find. Maybe there are some amazing avoidance strategies that none of us caught, or new ones are being developed. Building these awesome regular expressions to catch different variations of vaccine, or looking at images and building character recognition things, that sort of thing, it’s not going to be sufficient, because there will be other techniques that evolve after that.
[00:29:51] Patrick O’Keefe: I totally get that. I think that’s fair. I think that, as someone who is knee-deep in tools and all those things, there are things you can do. We’re just waiting on the next good idea, or the next implementation of an idea, where we cut down on something in a substantial way. Of course, there is a point at which platforms simply don’t want to.
Jay Rosen, a professor of journalism at New York University, I had him on the show, and he talked about how he feels that their inability to moderate effectively in cases like this is really due to a lack of principles. They just don’t have strong principles, speaking specifically of Facebook, to adhere to, and because they lack those principles, it leaves them open to being taken advantage of. That’s one take.
I hear the surveillance thing. If you told me someone was embedded in my Facebook group, and I don’t like Facebook Groups, terrible product, I don’t use them, I would never recommend them, that’s weird. But what’s not weird is, on an analytical side, looking at groups that have a certain amount of reach or that all of a sudden popped up that day and seeing what those groups are up to. Things that hit your dashboard, numbers that are weird, anomalies in the data, those are the kinds of things to pay attention to and look at.
Mercola has a verified YouTube account. His announcement about the 48 hours is on YouTube. Does Google care? How much does Google care? There are all these sorts of things where you can just cut down on one thing. We will never eradicate it. The goal of moderation, in my view, is never to be perfect, it’s never to be 100%, because we’ll never get there; that’s not a reachable goal. Our goal is to make it a little better every day. One of the ways that we do that is by cutting down on some of the reach of some of these really massive bad actors.
I found Twitter to be a much better place for me personally when Donald Trump was banned. I felt the impact of that. I felt the impact on the information entering my feed. Did it fix the problem to some conclusive degree? No, it’s still out there, but the quality of the feeds got better, in my view, at least from my personal experience. There are just so many things you can do and you can try and you can attempt. I just think they’re often bad at moderation, frankly. I file reports on things that I think are clear violations. Not to say I’m right, but I’m surprised sometimes by how poor their responses are, and how little I get back out of it, and how things stay online.
[00:31:49] Rachel Moran: Reacting to that, I think there are sort of two things. The incentive structure for platforms to actually pursue misinformation and try to end misinformation in a concrete way is just not there. At the end of the day, they’re profit-making institutions who are offering a product, and part of the problem with misinformation is that it’s really engaging. When you’re making money off of engagement, there’s only so far you’re going to go to take down misinformation without cutting too far into your bottom line.
We really do struggle. I feel like there is a tide-turning moment happening where at least the bigger platforms are realizing that misinformation is a vulnerability that degrades the product and can have economic disadvantages. We’re hoping that some of the smaller communities, or the people who graft their tools onto these bigger social media platforms, will also follow suit and realize that dealing with misinformation is now part of their business plan, and they need to address it.
[00:32:45] Patrick O’Keefe: It’s been a pleasure to have you both on. Thank you so much for spending time with us. Joseph, thanks for coming on.
[00:32:49] Joseph Schafer: Thank you.
[00:32:50] Patrick O’Keefe: Rachel, thank you as well.
[00:32:51] Rachel Moran: Thank you for having us.
[00:32:53] Patrick O’Keefe: We’ve been talking with Joseph Schafer, an undergraduate research assistant at the University of Washington’s Center for an Informed Public, find him at josephschafer.net and on Twitter @joey__schafer. Schafer is S-C-H-A-F-E-R and Rachel Moran, a postdoctoral fellow at the University of Washington Center for an Informed Public. Visit their website at cip.uw.edu and follow Rachel on Twitter @rachelemoran, Moran is M-O-R-A-N.
For the transcript from this episode plus highlights and links that we mentioned, please visit communitysignal.com. Community Signal is produced by Karn Broad and Carol Benovic-Bradley is our editorial lead. Stay safe out there.
[music]
If you have any thoughts on this episode that you’d like to share, please leave me a comment, send me an email or a tweet. If you enjoy the show, we would be so grateful if you spread the word and supported Community Signal on Patreon.