How The New York Times is Building Thoughtful Comment Sections in the Trump Era
Over the past few months, Patrick has spoken to several leaders in the world of journalism and for this episode, we’re welcoming back Bassey Etim, community editor at the New York Times. Bassey was originally on Community Signal in December of 2015 and it’s overwhelming to think about how public perception of the media and the Times, in particular, has changed since then. To give you some context, Barack Obama was still in office at the time of that interview and Donald Trump had yet to win a primary.
Patrick brings up an important question during this conversation: How are the moderators at the New York Times doing? And perhaps that question can largely be answered by how Bassey manages his 14-person team. He shares how the team blows off steam, what he does to advance people within his team, and how he views AI as a human-powered moderation tool, not a human-replacing one. Is it Bassey’s emphasis on people and objectivist journalism that powers a positive environment amongst his team and the comment sections at the Times? I think so! Bassey also shares:
- The impact of the midterm elections and politics in general on moderators at the Times
- His own career path at the Times and how he elevates others for growth opportunities
- Getting AI machines to ask humans for help
Our Podcast is Made Possible By…
If you enjoy our show, please know that it’s only possible with the generous support of our sponsor: Higher Logic.
Big Quotes
On positively fostering comment sections: “We want to make comment sections [that] people want to read. There’s two elements to that. One is that the comment sections are interesting and good and reflect modern society. They don’t feel like they’re in some weird cloistered bubble. The other is that you feel safe, you don’t feel dirty when you read them.” –@BasseyE
On using automation to create transparency around moderation decisions: “It’s really interesting to see how we could [use automation to] educate people along the process of submitting content to make sure that they have the highest probability of being approved. Everyone loves that automation. Members don’t want to have their content removed after they spend time on it. Moderators don’t want to remove content and have to talk to the member or deal with whatever the fallout of that is and then have to readjust or re-approve it once they try to submit it again. That’s a perfect example of automation done well that everyone would like.” –@patrickokeefe
On not repeating the industry’s outsourcing mistakes: “The really important thing for us in the industry is probably going to be avoiding that old tech problem which is that there’s this piece of technology and it can solve everything. … The technology is really a tool. Just because you have a good tool you can’t just have one person wield it. … You’re going to need a lot of people using the same tool to truly be effective.” –@BasseyE
On machine learning’s biggest problem: “If you let machine learning models operate in the wild without trained human intervention, what you’re going to be doing is perpetuating a cultural cycle of silencing certain people’s voices, filtering out a critical mass of people from certain communities. Wouldn’t it be quite ironic if the thing that’s supposed to save us all, technology, only winds up taking all the human biases and codifying them into code so that unaccountable executives can say, ‘Oh, well, the model does this. We’ll try to fix the model sometime. We just don’t understand the complexities of the model.'” –@BasseyE
About Bassey Etim
Bassey Etim is the community editor for the New York Times, a novelist, and a musician. He’s currently putting the finishing touches on his second book and planning for his wedding next year. Bassey is a first-generation American with roots in Nigeria. He’s from Milwaukee, Wisconsin and graduated from the University of Wisconsin School of Journalism.
Related Links
- Sponsor: Higher Logic, the community platform for community managers
- Bassey Etim on Twitter
- Bassey Etim on LinkedIn
- The New York Times, where Bassey is the community editor and product manager for community
- Bassey’s first appearance on Community Signal, in December 2015
- “I Was a Paid Internet Shill” Happens and Veteran Community Managers Know It
- Sara Bremen Rabstenek, product director at the New York Times
- Perspective, Google’s comment-screening API
- The Times is Partnering with Jigsaw to Expand Comment Capabilities
- New York Times: Using AI to host better conversations
- Quartz
Transcript
[00:05] Announcer: You’re listening to Community Signal, the podcast for online community professionals. Sponsored by Higher Logic, the community platform for community managers. Tweet with @communitysignal as you listen. Here’s your host, Patrick O’Keefe.
[00:28] Patrick O’Keefe: Hello and thank you for listening to Community Signal. Our guest this episode is Bassey Etim, community editor for the New York Times. Fresh off the first ever US midterm election to cross 100 million voters, Bassey returns to the show to talk about the political climate and how it has impacted his team, the pros and cons of machine learning in moderation, and what comes after the ROI of your community has been realized and accepted.
Thanksgiving is right around the corner and here on the show, we’re thankful for our supporters on Patreon, including Luke Zimmer, Maggie McGary, and Jules Standen. If you’d like to join them, please visit communitysignal.com/innercircle for details.
Bassey Etim is the community editor for the New York Times, a novelist, and a musician. He’s working on quite a few creative projects right now including putting the finishing touches on his second book and getting married next year. Bassey is from Milwaukee, Wisconsin, a first-generation American with roots in Nigeria, and a graduate of the University of Wisconsin Journalism School. Bassey, welcome back.
[00:01:21] Bassey Etim: Thanks. It’s great to be here. I really appreciate it.
[00:01:23] Patrick: Yes, and congrats again on the engagement.
[00:01:25] Bassey: Thank you, sir. Thank you. I am pretty excited about getting this wedding over with.
[laughter]
[00:01:25] Patrick: You’re the first person that I’ve had back on the show for a one-on-one, and you were on the fourth episode of Community Signal which we recorded on December 16th, 2015.
[00:01:43] Bassey: Was it really that long ago?
[00:01:45] Patrick: Yes, almost three years ago, right? We’re about a month short. I don’t know if it feels like it’s been three weeks or 300 years, basically, with what’s happened. I feel like it’s both fast and incredibly slow in a lot of different ways. Barack Obama was still president for another 13 months. The Republicans had not yet even held their primaries. Donald Trump was still three months away from winning his first primary. You weren’t “the enemy of the people,” in the words of our president, and this was before so many attacks on the media.
[00:02:14] Bassey: Perhaps, I was the enemy of the people, but nobody had realized it yet, you never know.
[00:02:18] Patrick: [laughs] Well, yes, I don’t know. I didn’t know it then. And I don’t mean to laugh, because it’s not funny. There are so many attacks on the media: Greg Gianforte attacking a reporter and then winning a House seat, the Capital Gazette shooting, the pipe bombs mailed to CNN. I live a couple of blocks from CNN Los Angeles, where one of those suspicious packages was addressed. The person who sent them had a New York Times editor on his list of targets. I’m sure I’m excluding other things as well. Three years have gone by and so much has changed.
I wanted to have you back on this show to take stock of where we are, what’s changed, and how it’s impacted your work specifically leading the community desk at the New York Times, and I’d like to take that in two ways. The first is how it’s impacted you internally and your team. The other side is how it’s impacted your community of readers and the New York Times comments, in your eyes, if at all. Let’s start with the internal side of that. What have the last few years looked like for you and your team? How has this wave of change, for better or for worse, impacted your team?
[00:03:22] Bassey: I would say that we were pretty fortunate that we’ve spent a lot of time over the years trying to create a document that could account for the way society changes, so that as public discourse changes, we don’t have to rewrite our rules every time for the new normal. That said, we didn’t anticipate this. We really did have to rewrite some of the rules, though it was more a process of addition. I think, for us, there are two main things. The most prevalent thing on our end has certainly been the way people talk to each other. We’ve always had a rule that you can say what the Times says, and you can reflect what’s going on in the news, what public figures say.
If something is said in the article or an article takes a certain tone, we allow the readers to respond to the article in that particular tone. Now, if we’re writing an article and the president is insulting people, either for the way they look or for other things, then you come to a decision: are we going to allow that? If somebody says, “That’s so insulting, Mr. President,” and somebody else says, “You’re only saying that because you’re one of those people whose looks the president wouldn’t like, based on your avatar,” it’s like, “Okay.” That’s literally a conversation that’s happening in the public sphere right now. If we’re talking about the original rule, that’s legit, that’s the public conversation, you’re reflecting it, you’re reflecting something the president might really say.
[00:05:07] Patrick: When the president says something, it becomes okay. That’s basically what the rule is.
[00:05:11] Bassey: Right, but, obviously, from our end, we’re the New York Times, and I think the overriding thing, the purpose for that rule, was that, number one, we want to make comment sections people want to read. There are two elements to that. One of them is that the comment sections are interesting and good and reflect modern society. They don’t feel like they’re in some weird cloistered bubble. The other is that you feel safe, you don’t feel dirty when you read them.
Maybe we’re sacrificing a little bit of the first (how much of the first is a point of argument), but we didn’t want to sacrifice the second thing. We didn’t want to sacrifice the feeling that you could be safe, that your arguments could be respected as arguments, without ad hominem type stuff. We’ve updated our rules from that standpoint to make clear that there’s a line in terms of what we’ll allow people to say. I do think, look, if we were to run a column saying, “All liberals or conservatives are idiotic and you’re all racists and you don’t know how to think, your brains are goo.”
[00:06:18] Patrick: [laughs]
[00:06:18] Bassey: If the commenters came out and said to the author of the piece, “Your brains are goo,” and repeated those things essentially back to them from the opposite ideological direction, we’re still going to allow that. But we’re not saying that, just because public figures are insulting each other in that way in America, just because that’s the discourse you’re seeing on a lot of TV news, we’re going to allow people to talk to each other that way.
That’s the one big thing. The second thing, which is actually a little bit more complicated, has been misinformation. Prior to this administration, we had a pretty simple rule, which is: we don’t fact-check comments, but we don’t allow conspiracy theories. We don’t fact-check everything, but our people are trained on what conspiracy theories are. We’re very well aware of the news, we’re aware of what the Times reports, we’re a team of journalists. So we don’t allow those things.
There’s a lot of room for argument there. If an article is about a conspiracy, then we gave it a pretty wide berth: “Well, you can argue in favor of it if the article is about it,” that kind of thing, but we won’t allow it to infect our pages. We don’t want to delete points of view either, but we’re at a point where, if the president is saying something that is a conspiracy theory, then we can’t say that supporting the president is not allowed in our comments.
You can’t run a comment section that has anything to do with American politics if you can’t allow people to support things the president says. So we do that. We’ve changed that rule and said, “If the president says it, you can say that thing.” The way that we’re cutting it internally right now is: you can say that, sure, but we’re not going to allow people to cite false evidence of fact. We try to define the topic as carefully as we can on an article-by-article basis, so it doesn’t infect everything.
In an article about elections, you can say, “That’s just because four million Mexicans voted illegally. They were brought up from Mexico in vans and buses and they all voted illegally. That’s why the popular vote margin was the way it was in the 2016 election.” If it’s a story about election results, you can say that, but you can’t say, “Here’s the link to the video of the people doing that,” and then it’s a link to Infowars or something like that.
We won’t allow people to cite the evidence of conspiracies, but they can make them as bold assertions, because that’s where our politics are. Now, that’s not a particularly clean way to cut this, and it’s not particularly satisfying, I think, but it’s the only way we’ve been able to figure out. We don’t want to have a comments section that’s just a sanitized hospital ward. That’s not what people come for; they want to debate the issues important to society.
That’s how we’re letting them do it right now.
[00:09:15] Patrick: I don’t know that it’s a thing that can be cut cleanly. I think this is something that a lot of communities are wrestling with, big and small, because, as you alluded to, there was a time when, for the most part, moderation time was spent on conduct: if someone’s being disrespectful, if someone is making someone else feel unsafe or uncomfortable, if they’re being rude or intolerant.
It’s a little easier to find a standard for what that is and then adhere to that standard. Now, because our systems, Facebook and others, but Facebook certainly takes the most press, were used to push information that impacted the election, and because of the resulting media coverage, a lot of communities, smaller ones and bigger ones, are asking new questions. I run a community about martial arts. How much do I really need to spend moderator time on, not just ensuring respect, but also ensuring all of this is factually correct, especially with a team of volunteers?
It’s one of those things where everyone is now wrestling with the quality, not even the quality, the accuracy of the facts that are stated. It’s funny, because to some degree, I think we’ve known about information wars, not the website, but the idea of information wars and how they’re waged. Once upon a time, people would drop pamphlets from planes over other countries, and that’s a form of information warfare. Now, it’s done online.
I don’t think anyone was shocked; it was already going on, and people had been talking about it online. I had written a blog post, I think three or four years ago, about this story that came out about all these people sitting in, basically, cubicles in an office, commenting online to push things in a certain direction. It’s now in the spotlight and now something that everyone is wrestling with. I don’t think there is a way you can cut it cleanly.
[00:10:55] Bassey: I think that’s to Alex Jones’ credit. Focusing his show around that concept of an info war, of a coming info war, was quite prescient, really.
[00:11:05] Patrick: With all of the constant attacks on the media, how do you stay sane? How do you keep your team sane? What do you do, self-care wise, to keep people fresh and keep them from just hating their job or getting too jaded? Because moderation is already a tough, tough, tough job, and now you lay this on top, and it could be any number of things. People who work in mental health communities have a different layer of toughness. People who work in children’s communities have a different layer of toughness. There’s community and then there are higher-risk areas, which I never would have considered the media to be before, say, 2015 or 2016.
It’s challenging. Newspaper comments were already widely regarded in a poor light. Now, there’s a whole new level of hostility that is projected mostly onto journalists, but it drills down to the organizations, I’m sure. Sorry, just to provide context.
[00:11:52] Bassey: Yes, you’re exactly right. I don’t know if I do a particularly great job at that, frankly. It’s something that’s been on my mind more and more, especially talking to folks on the team. Obviously, me, I take shifts and do the work myself. What I try to allow, more and more, is a little bit more venting about things. Even though it’s a team of journalists, I don’t know if it’s smart in this environment to operate in a place where your people feel like they have to moderate the comments as objectively as they can, but then, even in our conversations with each other or in our internal forums, pretend that they’re not bothered by any of it.
I think having the opportunity to vent when you’re dealing with this stuff is so important. It’s different in this way from somebody reporting on the ground, which is that when you’re reporting a story, and I’ve been there before, there are a lot of tough stories you work on, but the story is going to end, and you can step away for a little while. You can take a break between source calls; there are all kinds of breaks.
When you’ve got a thousand comments piling in on something that deeply offends you, like a sexual assault victim being insulted, and just things that really mean a lot to people for a lot of different reasons, it’s important, I think, to allow people to actually talk about them. I’ve tried to encourage the team to do that, and I actually do think that it helps people’s work be more objective. If you feel one way or the other politically, I think there’s this natural mental urge to take some action. That’s what you’re trained to do as a human being when you see something wrong.
When somebody drops a clothespin, you’re going to want to reach toward it, pick it up for them, and give it back. If you’re just alone doing it, and you’re not talking to anybody, venting to anybody about it, I think that’s the thing that would tempt you to be a little bit of an activist in the moderation decisions you make. If you’ve got folks to talk to about the issues, the stories of the day, in real visceral ways, then I think you can feel a little bit more like, “Okay, that was my venting. Now, I’m going back to being an objective researcher here, checking the boxes, and performing this anthropological study.”
Hopefully, that’s the environment we’re fostering. Whether that’s been completely effective or not is actually going to be a pretty major point of the performance review, the yearly review period, that I’m going to embark on soon.
[00:14:33] Patrick: It’s an interesting point that having an outlet makes you less prone to abuse of your role or of your responsibilities, or makes for a more equal application of your responsibilities. Everyone wants to catch someone saying something they shouldn’t, especially in the media. You don’t have to say anything. There’s a guy who has my last name running around, recording people with a camera in a bag, taking something they said and twisting it to mean this or that.
Of course, there are some nefarious characters here and there, but the idea that we are somehow not human or don’t have human feelings, or moments of like, “This person was a jerk. This person is crazy. I had to sit with them for an hour. I had to email them and write an email over two hours. I had to talk to someone else in this department. We had to review this. I had to send it over to legal. This person is ridiculous,” and having those feelings of, “Not everyone’s great. I don’t love everybody.”
It’s so important to be able to share those in a private, secure way, whether it’s your Slack chat, if you’re on a remote team like I am and that’s how I manage my team, or you work in an office, you’re in person, and you have that session. Even if you just make time for it, like blocking out a half hour or an hour every Friday to come in and say, “You know what? What was bad this week? What was crazy? What was the silliest thing you saw?” Then just let people riff on that and laugh and move on.
It totally compartmentalizes those feelings, puts them over here, and says, “You know what, those are real. I expressed them; it’s frustrating. Now I’ve done that.” What you’re saying is, if you don’t have that outlet, you’re going to take it out somewhere. It could be your job. It could be your personal life. It could be driving; you could drive aggressively. It could be any number of things. You need to have that outlet so that you don’t spend it in some way that will be counterproductive to your life or to your work.
[00:16:23] Bassey: I don’t want folks going home just bothering their spouse every day over dinner for hours and hours about comments.
[00:16:32] Patrick: Keep it in-house. Don’t punch people on the subway.
[00:16:35] Bassey: Exactly. One of my responsibilities as an editor though is to make sure that the things that I say in public make clear to people that they can expect fair treatment from me no matter what their point of view is ideologically. I try to do that honestly even in conversations with my friends. I’m not going to say I’m a guy who has no political views at all. I think it’s so important for me especially as the person who’s obviously not the ultimate authority of the New York Times.
[00:17:08] Patrick: The overseer of the comments.
[00:17:10] Bassey: The New York Times is not run by some 31-year-old guy. But as somebody who’s seen as an authority on these things, I’ve found it so important. Obviously, I have political views, but it really is an ideological view of mine that, for us to have a positive impact on society, we have to find fair and ideologically neutral ways to deal with the submissions coming in. Maybe ideologically neutral isn’t the right word; maybe politically neutral is the right word. We certainly have an ideology here at the Times, which is that good conversation looks a certain way: good conversation is high quality, good company, all those kinds of things.
That’s what we try to do. I always encourage the team, though they may not go as far as I do, even talking a certain way with my friends. I do try to make sure of my activities on social media; we try to follow all the newsroom rules here. It’s a tough situation. You’re right when you talk about the people who ambush you and that sort of thing. How can you not be aware that that’s possible? That said, you can’t let that determine what you’re going to do.
If you’ve got a fair heart, then anybody could take a video of you saying something and twist it. I think it’s pretty clear to people who are partisan in that way what somebody means when they’re talking.
[00:18:34] Patrick: I think that’s fair. In the media, we have a mutual friend, David Williams of CNN, and he’s exactly the same way. If you look at his feeds, he’s sharing things, and there’s news and information, but he’s very politically neutral. I think that makes perfect sense for someone in your role. Really, what you’re talking about is something beautiful called avoiding the appearance of bias. It’s not just your actions behind the scenes, where you’re doing a great job, but actually showing people, by the words you say, that you’re someone who could do a good job.
That’s something that is just so dead right now. I’m amazed sometimes at some of the stories that come out, especially about the president, and just the idea of the timing of things and the appearance of things. It used to be something you’d do. You used to want to avoid the appearance of bias, or of something, because you wanted people to trust you. Now, there’s just this level of extreme, I don’t know, transparency isn’t even the right word, but just, “I want this thing, therefore, I will do this thing now.” Whether it’s firing the attorney general the day after the midterms. It’s like, “Just wait a week.”
It’s just so silly to me how there’s this immediate nature to things. Avoiding the appearance is dead, though it’s good to see that it’s not fully dead and that some professions still need to observe it. I was someone, if you check my social media feeds before 2016, you would see no political comments at all, because, honestly, I was involved, but I felt like, “Okay, my job is to build communities.”
I bring people together who are political opposites. I bring people together around martial arts, or sports, or technology, or whatever it is. I used to be like that, and now I’m like, “Oh, man, it’s tough.” I don’t know what it was, maybe the rhetoric, the friends of mine who were impacted by it. It’s tough. I still think it’s very important for people in your role to maintain that balance, though.
[00:20:06] Bassey: You know, you bring up a good point, because it is hard to imagine, outside of an exclusively partisan community or a specifically single-issue-oriented community, what community manager role there is where somebody would be better off posting strident political opinions publicly. Obviously, we’re not being stupid here, right? Look, as a black guy, you’re used to people making generalizations about you a lot of the time.
I’ll say this. If you typed all of my information into some kind of demographics calculator that spits out who you’re likely to be, you could guess somebody’s political opinions from their demographics. I’m no different there, and I would think nobody’s under the illusion that I have one kind of politics versus another or anything like that.
I do think that what you’re talking about, in terms of the appearance of things, is so important, because I think it’s a show of respect to people. It’s to show that I’m not doing this for myself personally; I’m doing this because I believe in this model of news-gathering. That’s my ideological view. I believe in the objectivist school of journalism. I’m not saying that the activist school can’t work. Of course, it produces great journalism all the time, but for me, personally, I really do believe in it. Part of believing in that isn’t to pretend that everything’s just as bad as everything else if I don’t believe it, but part of it is to respect people and to listen to them and not be yelling at them while I’m committing acts of journalism.
[00:22:00] Patrick: Right, right. I do want to get off politics soon, but first, can you talk a little bit about what the midterms were like for you in your role?
[00:22:06] Bassey: Yes. I would say, in the lead-up especially, the biggest thing to combat is the existential panic of so many of our readers, both right and left, infecting you as a person and making you stressed out. There were a few moderators who came to me and said they were having bad dreams about politics, one way or another, in the run-up to the midterms, just because the stridency of the people you’re reading all day infects your mind, along with the things you’re reporting on based on what people said in the comments and the different research about it. It infects you. I think that was really the toughest thing, and it’s really tough to avoid, especially doing so much work prior to the midterms, that existential kind of panic people on both sides have about this.
The comments themselves, though, I thought were pretty well-behaved. I think people were fine. Our readers are amazing, so they were great about everything at the end of the day. I think most of our readers really understand what we’re going for, and I think they can understand that somebody reading 5,000 comments a day is going to come away from the experience feeling a little bit rattled, just because of the intensity of everything.
[00:23:24] Patrick: Yes. We’ll see how it ends up with the recounts, too. The comments aren’t quite over yet. We’re still going.
[00:23:29] Bassey: Right. Yes, exactly.
[00:23:32] Patrick O’Keefe: Let’s take a pause here and talk about our excellent long-term sponsor, Higher Logic.
Higher Logic is the community platform for community managers with over 25 million engaged users in more than 200,000 communities. Organizations worldwide use Higher Logic to bring like-minded people together by giving their community a home where they can meet, share ideas and stay connected. The platform’s granular permissions and powerful tools, including automated workflows and consolidated email digests, empower users to create their own interest-based communities, schedule and manage events, and participate in volunteer and mentoring programs. Tap into the power your community can generate for you. Higher Logic, all together.
[00:24:07] Patrick: Moving on from politics: between now and when you were on the show last, you added another title to your LinkedIn, product manager for community on the NYT story team. Product-wise, what are you excited about right now?
[00:24:20] Bassey: Yes. The way things evolved, I was doing product on the story team and then I moved up to reader experience, which I think was great, because I handled the product for our machine learning experiment with Google. We got that out the door, and then, going into the next phase, we brought in somebody who’s more technical. That was great for me, because this product needed somebody like me to run the partnership, but then, getting into the nitty-gritty afterward, it was great to join reader experience and have their product team come in and show me more of the technical side of the product role.
[00:24:59] Patrick: Broadening your skill set for sure.
[00:25:05] Bassey: I have learned so much. We have just amazing product people here. Sara Rabstenek worked on our post-Perspective project product. She was just absolutely amazing. The whole team here. What am I most excited about in terms of product, is what you asked?
[00:25:16] Patrick: Yes.
[00:25:17] Bassey: I’m excited about figuring out how to scale machine learning. It’s one thing to implement it and have it work; it’s another thing to be able to say to each individual person who’s working with machine learning, “Here’s how you prioritize your work.” At the end of the day, you’re still going from article to article. Some of the comments have been machine learning improved, some of them haven’t, and the work of moderation changes: it becomes a little less, “Let’s ensure civility while getting up as much as possible,” and more, “Where can I, as the human being here, best spend every single minute of my time?”
What we found with the first phase of Moderator, which is the moderation interface connected to the Perspective project, is that we weren’t doing a good enough job displaying a broad dashboard of data that a human being can use to make a decision very quickly: “Okay, what’s machine learning doing well? What’s it not doing well? Where might it need help?” We were still very much focused on this model of, “Okay, here’s a list of all the articles. Go into them and see what’s going on.” That works fine when you’re doing things manually, but if you’re using machine learning, you shouldn’t just work on an article-by-article basis; you should work on a prioritization basis.
That’s been an important thing for me, and that’s something we’re working on with the folks from Google right now; the partnership just goes on. I’m pretty pumped about that. Then the next level of that is: can the machine learning ask the human to check something for it? Rather than going online and thinking, “Okay, what’s the machine learning doing based on this data? Let me dig through this data,” can you just go online as the professional moderator and the machine learning says, “Hey, this has been waiting for too long”?
“We’re having a lot of trouble with these comments. The probability scores for approving these are all over the map; they need to be reviewed. This article has a bunch of flags, and our system thinks they might be bad too, even though it’s not the exact model that you’re employing across the board,” that kind of thing. That’s, like, phase three, and that’s probably where most large-scale content moderation is going in the industry. We’re just coming out of the stone age here.
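As a rough illustration of that “phase three” idea, here is a minimal sketch of a moderation queue that asks the human for help: it ranks comments by model uncertainty (probability scores “all over the map”), time spent waiting, and reader flags. The fields, weights, and function names are invented for illustration; this is not the Times’ or Jigsaw’s actual system.

```python
from dataclasses import dataclass
import time

@dataclass
class Comment:
    text: str
    approve_prob: float  # model's estimated probability of approval
    flags: int           # reader flags received
    submitted_at: float  # Unix timestamp

def review_priority(comment: Comment, now: float) -> float:
    """Higher score = the machine is asking a human for help sooner."""
    # Uncertainty: approval probabilities "all over the map" cluster near 0.5.
    uncertainty = 1.0 - abs(comment.approve_prob - 0.5) * 2
    # Staleness: nothing should wait in the queue too long.
    hours_waiting = (now - comment.submitted_at) / 3600
    # Reader flags are an independent signal the model may have missed.
    return 2.0 * uncertainty + 0.5 * hours_waiting + 1.0 * comment.flags

def triage(queue: list[Comment]) -> list[Comment]:
    """Order the queue so moderators see the neediest comments first."""
    now = time.time()
    return sorted(queue, key=lambda c: review_priority(c, now), reverse=True)
```

The design choice is the inversion Bassey describes: instead of the moderator opening every article to see what is going on, the system pushes forward whatever it is least sure about.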
All this time, you’ve been moderating by hand, putting a chisel to a rock, and that’s your weapon. Now, we’re at the point where it’s like, “I’m just a pilot here controlling my drone remotely, and that’s how I’m fighting.” That’s the way we’re going here. That’s the back-end thing I’m really excited about. Front end, there are just so many things you can do with machine learning. You could list crazy ideas for hours and hours and hours. The most obvious one, and we don’t even have particular timing for it at all, is helping people to understand why we might reject something they’ve already typed, before they submit it, to save them that feeling of rejection and despair.
[00:28:18] Patrick: That’ll be useful. There’s a feature that I’ve talked about for like 12 years that is very basic, doesn’t use machine learning, doesn’t use AI. I always call it censor block. I had someone, 12, 13, 14, 15 years ago, write for my forums an adjustment to the traditional word censor feature. Every word censor feature essentially works the exact same way, which is that it blocks a comment that includes certain words.
What I wanted was for people to be told that their post was blocked for these words, specifically. They’re shown their post, they can edit it, and submit it again. Once I had that written and installed, overnight, we stopped removing posts for profanity. It just didn’t really happen anymore. Yes, there are some people circumventing it, but you’re talking about probably a 95% reduction in posts removed for profanity. There’s a thought that if you educate people like that, their reaction will be to circumvent it, not to adjust and respect it. That depends on the community.
Reddit might take that differently from karateforums.com, which might take it differently from 4chan, which might take it differently from BabyCenter, whatever it is, right? It’s not a one-size-fits-all approach, but it’s really interesting to see how we could educate people along the process of submitting content to make sure that they have the highest probability of being approved.
Everyone loves that automation. Members don’t want to have their content removed after they spend time on it. Moderators don’t want to remove content and have to talk to the member, or deal with whatever the fallout of that is, and then have to readjust it or re-approve it once they try to submit it again, if they do. That’s a perfect example, to me, of automation done well that everyone would like, everyone would enjoy, which is not always the case.
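For readers who want the mechanics, here is a minimal sketch of the censor-block idea Patrick describes above. The one change from a traditional word censor is that the post and the specific tripped words go back to the author instead of the post being silently removed. The word list, function names, and messages are invented for illustration.

```python
import re

BLOCKED_WORDS = {"heck", "darn"}  # placeholder list; real lists are community-specific

def find_blocked_words(text: str) -> list[str]:
    """Return the specific words that tripped the filter."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted(set(words) & BLOCKED_WORDS)

def handle_submission(text: str) -> str:
    hits = find_blocked_words(text)
    if hits:
        # Instead of silently removing the post, hand it back with the reason,
        # so the author can edit and resubmit.
        return "Your post was blocked for these words: " + ", ".join(hits) + ". Please edit and resubmit."
    return "Post published."
```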
[00:29:59] Bassey: If you think about this industry, everything old is kind of new again. We relearn old lessons in community. I think because the value of community is so hard to grasp for higher-ups in most companies, we get new people coming in, or the people working in the industry get bogged down on things. You forget big parts of lessons you learned, and then it comes around again and you’re like, “Yes, that’s right. I have three pages of very detailed thoughts in my head about this exact issue; I just haven’t thought about it in two years.”
I feel like that’s a very common thing for folks all across the industry. I’m very excited about Quartz and the change of emphasis they’re making in their apps to really emphasize people’s submissions, the expertise people are adding to stories, in the article’s design. I just think experiments like that are so important, because I’ve long given up on trying to solve the chicken-and-egg problem with community investment. I think probably everybody listening to the podcast right now knows what I’m talking about: the chicken-and-egg problem of justifying resources being spent on community.
I don’t know if it can be solved without an organization that already exists at full scale making a bet on it and seeing what happens, to a certain extent. I’m excited to see more teams doing that. I’m excited to see Google being really interested in the space, obviously the Times being super interested in the space, and a lot of really cool machine-learning companies that are becoming more advanced at this kind of thing.
I just think the really important thing for us in the industry is probably going to be avoiding that old tech problem, which is, “There’s this piece of technology and it can solve everything.” No, the technology is really a tool. Just because you have a good tool, you can’t just have one person wield it.
[00:31:57] Patrick: That’s a whole other chicken-and-egg problem. [chuckles]
[00:31:59] Bassey: Right, exactly. I hate that I keep using this wartime imagery, I don’t know why, but it’s like the Civil War: if you give one guy an assault rifle, well, that’s kind of like the tech world sometimes. “Okay, we’ve solved the problem. This one guy has an assault rifle.” It’s like, “Well, no, you’re going to need a lot of people using the same tool to truly be effective.” Hopefully, we can move the industry more toward that level, and I think that’s the way you really move the needle in terms of making online societies safer, and not like a lot of the applications abroad.
[00:32:28] Patrick: To back up for a second, just to provide some context for people: there was a lot of media coverage about the partnership with Jigsaw, which is a technology incubator from Google parent Alphabet, which essentially just means Google.
[00:32:40] Bassey: At the time, we just called it Google’s Jigsaw. We don’t even–
[00:32:42] Patrick: Google’s Jigsaw. Alphabet, just drop A to Z, it’s gone. The whole thing was about using machine learning and AI to better moderate comments, and there’s an API out there, the Perspective API, for instance. There are tools that people can use and implement in their own projects, but the media coverage on that was in June 2017. That was sort of the big reveal, but you’ve been at it longer than that behind the scenes, so you’re about a couple of years into using these tools. What is machine learning great at?
[00:33:10] Bassey: I think machine learning is great at understanding when we absolutely do not need to spend time reviewing a comment for publication. It is so rare to see an error where the machine learning thinks a comment is almost certain to be approved and it isn’t. That means we can spend our time really working on difficult articles, rather than going into some business story or something and spending hours and hours filtering through comments just to have 95% of them approved.
You just take a big chunk of things that are probably going to be approved, do a quick sanity check, put all those through, and then go about your business. That’s what it’s really great at. It saves so much time and effort and really helps improve the moderation of everything else that actually needs moderation.
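A minimal sketch of that split, assuming a scoring function from whatever moderation model is in use. The 0.98 cutoff is an invented placeholder, and the human sanity check of the auto-approved chunk Bassey mentions would still apply.

```python
def split_queue(comments, approve_prob, threshold=0.98):
    """Auto-approve only near-certain comments; everything else goes to humans.

    approve_prob is any model scoring function returning a probability of
    approval; the threshold is a placeholder, not a production value.
    """
    auto_approved, needs_human = [], []
    for comment in comments:
        if approve_prob(comment) >= threshold:
            auto_approved.append(comment)
        else:
            needs_human.append(comment)
    return auto_approved, needs_human
```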
[00:34:01] Patrick: You mentioned before the show that one of the challenges you have to think through is how to decide how things get prioritized between humans and AI to get the most out of each. What’s the current answer?
[00:34:14] Bassey: The current answer we have right now is that the machine learning handles approvals and only humans handle rejections. Machine learning generally has an intrinsic bias problem right now which, if folks listening aren’t familiar, basically boils down to this: because references to underprivileged groups online are more likely to be abusive, people who reference their own underrepresented group in their own comments are going to be marked by machine learning as more likely to be rejected.
You can say, and this is a huge simplification, I don’t know if this is exactly true, but it’s a good way to think about it: “I am a man” might be 98% likely to be approved. If you say, “I am a woman,” maybe it’s 90% likely to be approved. If you say, “I am a black woman,” then you might get down to 82%.
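What Bassey describes is essentially an identity-term perturbation test, a common way to probe this kind of bias: hold the sentence fixed, swap only the identity term, and compare the model’s scores. A sketch, with a hypothetical approve_prob function standing in for whatever model is being audited; his percentages are illustrative, not measured output.

```python
TEMPLATE = "I am a {}."
IDENTITY_TERMS = ["man", "woman", "black woman"]

def audit_identity_bias(approve_prob):
    """Score otherwise-identical sentences that differ only in the identity term.

    approve_prob(text) -> probability the comment is approved, from any model.
    A large gap versus the baseline term suggests the model has absorbed bias.
    """
    baseline = approve_prob(TEMPLATE.format(IDENTITY_TERMS[0]))
    for term in IDENTITY_TERMS:
        p = approve_prob(TEMPLATE.format(term))
        print(f"{term!r}: {p:.0%} approved (gap vs. baseline: {baseline - p:+.0%})")
```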
If you just let machine learning models operate in the wild without trained human intervention, what you’re going to be doing is perpetuating a cultural cycle of silencing certain people’s voices, filtering out a critical mass of people from certain communities. Wouldn’t it be quite ironic if the thing that’s supposed to save us all, technology, only winds up taking all the human biases and codifying them into code, so that unaccountable executives can say, “Oh, well, the model does this. We’ll try to fix the model sometime. We just don’t understand the complexities of the model.” That’s the go-to for a lot of companies these days.
With a strong enough ratio of humans to content received, and with good training, you can essentially counteract those effects. That said, it still needs to be fixed. That’s something that, I know, on the Google end, they’re very interested in: training their models to eliminate those effects. They may have actually had some successes there, but I’m not aware of what’s happened behind the scenes since I last talked to them about this. I don’t know.
Then I know on our end, at the Times, we’re doing a lot of talking about positioning the company for the future in terms of how and where we might use machine learning responsibly, and how to go about that at a corporate level so that it doesn’t rely on any one person. That’s something we’re getting involved with as well. Hopefully, we’ll be able to talk about it more in the future.
[00:36:51] Patrick: Is the answer always manual intervention?
[00:36:54] Bassey: The answer is always manual intervention until you have solved the problem.
[00:36:58] Patrick: It’s a good answer. Until it’s fixed, humans are needed; then we’ll trust the machines a little bit. Otherwise, machines unchecked are just going to exacerbate a bias, make it even larger and more pronounced, and amplify it.
[00:37:09] Bassey: Right. Exactly. Let’s say we’re taking a business story at the Times. Our intention is to approve the most benign percentage of comments; we want to approve all the comments that the model says are benign. Let’s say we’re using 10% aggressiveness. We’ll test that for a while. It works, then we’ll go, “Okay, maybe let’s test out 20%.” We’ll review that for a while, see its downstream effects. It works. Go to 25, and it’s like, “Okay. We’re safe. Stop there for a while, and now we’ll do another round and update the model.” The machine learning models will keep going until we get to a hundred. It just feels like, with the advent of a lot of these social companies expanding to cultures they don’t understand, so much of it was, “Hey, we have this thing that works. Let’s just apply it to 100%, and then we’ll see what happens. Then we’ll fix whatever happens after that.” It’s like, “Yes, that’s fine if you’re shipping boxes or something and someone gets the wrong size box. If 10,000 people die, you’re probably going to want to change that formula, possibly?”
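The rollout Bassey describes is a staged ramp: widen the auto-approval slice one step at a time, and only after humans have reviewed the downstream effects of the current setting. A sketch, with invented step sizes and an invented review_passed flag standing in for that human review.

```python
RAMP = [0.10, 0.20, 0.25, 0.50, 1.00]  # fraction of "benign" comments auto-approved

def next_aggressiveness(current: float, review_passed: bool) -> float:
    """Advance one step up the ramp only after the current stage holds up in review."""
    if not review_passed:
        return current  # hold here (and retrain) until this stage checks out
    higher_steps = [step for step in RAMP if step > current]
    return higher_steps[0] if higher_steps else current
```

The contrast with “apply it to 100% and fix whatever happens” is that every widening of the automated slice is gated on an explicit human check.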
[00:38:17] Patrick: Move fast and break things doesn’t work very well. When it’s humans, zero breaking.
[00:38:21] Bassey: Exactly. There’s this fundamental immorality in large parts of the tech world, not an active immorality, but an immorality of, “Because your process has been successful, you must continue to follow it,” as opposed to respecting the impact of your product on humankind. That’s the most dangerous thing for the future of machine learning. I don’t know if this industry has a long-term future unless that’s figured out, because eventually, something is going to go so badly that it’s going to start being legislated in ways that are unpredictable and chaotic.
[00:39:00] Patrick: I feel like we’ve seen plenty of movies, especially people of our age, about what happens when we finally get a chance to make stuff and we make it just like the movies: the robots go crazy, the government has to come in and fix it, then it becomes a project for the defense companies, and things go worse and wrong. We just didn’t pick up on the ’80s movies, for whatever reason.
[00:39:22] Bassey: Some people would ask, in reaction to this, to what extent does capitalism scale? Are we simply at a scaling problem with the economic model of the world right now, or is it something more banal? At some point, governments regulated certain kinds of activity, but they did it in such despotic ways that we threw out the baby with the bathwater to a certain extent. I think that’s the big question that probably needs to be resolved at a societal level for this to be saved.
[00:39:50] Patrick: Let’s stick with economics for just a second. You mentioned the chicken-and-egg problem. When you were on last, you said this: “There’s a pretty discrete value in terms of getting somebody to become a registered user on your site. It’s a step that you can look at financially, every step along the line, and say, ‘Okay, this person is X more likely to subscribe because this happened.’ Once you use techniques to pinpoint where and when comments were involved in that, then that is when you really start to justify spending money to make sure that the comments are of really high quality and they reflect the value that you’re trying to advertise to readers.”
Have you built on that and made that data better?
[00:40:30] Bassey: Yes. I’m really happy to be able to say that to you, and I think data like that is behind the creation of the reader experience team. It’s a bit different for us. Throughout this interview, I have spoken generally about the industry, but it’s a little bit different for us because we’re in transition to being more of a consumer revenue company than an ad company.
Our incentives are a little bit different now, and we really leaned into that these past couple of years. We’re getting there. We’re running a series of tests to figure out what the more fine-grained levers are that we can pull. Honestly, we’re a lot better at this. We understand what we need to do now more than we ever have. We’ve invested a lot more into the development of this, but we’re still at the slow building point.
We’re not at the point where we’ve built this up enough to really test whether our fundamental assumptions about community, the big-idea assumptions about what it could do for an organization, are true in the ways we think they are. The answer is yes, but it turns out that if you had asked me back then, “Where are you going to be in terms of acting on that statement?” I would have thought we’d be further along than we are now. Then again, I look back at everything we’ve done and it’s like, “Whoa, we have gotten a lot of work done toward this.”
We’ve gotten so much done, and it’s been so difficult in so many ways, because there are so many little minor things about the implementation, or how technology stacks work, or how you respect people’s privacy rights, or how you bring reporters into the process, that it’s a slow process. We’re really getting there. I’m really excited about that.
[00:42:16] Patrick: To push on that a little bit, when you say, “We know what we need to do and we’re getting better at understanding what we need to do,” are you talking about the point past that conversation, where you have these numbers and you know that commenters pay at this rate, or are more likely to sign up at this rate, and now you’ve moved to the idea of, “We know that once someone takes this action, or once we get someone to take this action, they are therefore X% more likely to become a subscriber”? Are you talking more about actions that then drive that bottom line, where the bottom line is established?
[00:42:45] Bassey: Exactly. That’s exactly what we’re talking about. We’ve moved down from the first question to a series of intensive tests to figure out where the lines are in the second one. We’re literally in the middle of that process right now. It’s pretty exciting.
[00:43:00] Patrick: That’s very smart. The last thing I want to ask you about. You mentioned something in your questionnaire that I found really interesting because I don’t think anyone else has mentioned it before. It’s something that is really important to me, and how I approach my team and those around me. You mentioned that you regularly spend time on the career development of your team members.
I can’t say enough that I think that is a defining trait of a great boss and a great leader. I was curious to hear how you approach that. How do you approach the career development of the team you lead, which is 14 moderators plus contractors? Talk about that.
[00:43:33] Bassey: Yes. I think the first half of it, going way back, is just figuring out generally what people are interested in. Once you figure that out, once you’ve internalized that, once you understand that, it’s more of a process of: they’re moderating, and I deal with a lot of the newsroom leadership and everything else. I handle all of that, the political stuff, all the staffing, all that kind of stuff.
Really, it’s about keeping my eyes and ears open for opportunities. It’s good because I control the purse strings of the department to a certain degree. Whenever there’s somebody in the newsroom who needs something that seems like it is related to the mission of community in some way, I will immediately be like, “Hey, there’s actually somebody on my team who has a lot of really smart thoughts about that.” Even the little things, like setting up a meeting for them. When I’m being brought into big projects, I’ll ask, “Hey, can I bring in one or two people from my team?” so that they can meet the rest of the people. Honestly, I always feel bad, frankly, because you can never be effective enough at it, especially at an organization like this.
There are set, defined roles, and there are set times when people have to work. We have a mission we have to achieve, and that’s what I’m being paid for, to achieve that mission. It’s a tough thing to prioritize in a lot of ways, but as time goes on, I’m trying to get better and better at finding chances. Whenever it works out that somebody will have a chance to help out in a certain area, I give that person the opportunity.
Then, for the people who I haven’t been able to get the opportunities I know they deserve yet, I just keep pushing on different projects from the newsroom that might yield space for them to participate in a way that makes sense for their work schedule. It’s just an ongoing thing, and it’s tough because you always feel like you’re failing at it. You can never do a good enough job. I just try to keep at it and work on it at least a little bit every single day.
[00:45:31] Patrick: I think that the fact that you’re even thinking about that puts you ahead of so many people because it’s very easy and very comfortable. A lot of people do this, I’m sure we all know people who work under someone who wants them to be the same thing they are now until they die because they’re good at that thing and because they make the person’s life easier that they report to.
I have a thing I say to people when they’re interviewing with me: “I don’t collect people. I’m going to help you on your path. Two years from now, you will be in a different spot. You will be promoted here internally, you will have a job elsewhere that I will help you get, or you will take my job if you want it. If you want to progress in this industry, I can help you. I’m well connected, I will help you. I don’t collect people.”
Part of that might be ego. I believe I can bring in more people. I believe I can train people and I can build the next generation of community leaders. Part of it might be ego, but part of it is just that that’s the person I want to work for because two years from now, I don’t want to be in the same place I’m in right now. I want to be in a different place myself.
I only want for them what I want for myself, but a lot of people don’t look at it that way. A lot of people are very comfortable and want to collect people as much as they can. What drives that in you, that desire to find other roles for people in the company or to help them on their path, even if they leave you or go to a different team, even though they’re strong players? What do you think drives that within you? Because it’s a choice you’re making. I love the New York Times, but I’m going to assume not everyone at the entire company views their underlings the same exact way. I could be wrong. It could be a culture thing.
[00:47:01] Bassey: That’s probably fair.
[00:47:02] Patrick: I think it’s from you internally. There’s something there that says that this is something I want to do for my team.
[00:47:08] Bassey: Yes. On one level, I think the Times does need people throughout the org who are very familiar with community. I think that’s just so important. I’ve seen the effects, even with people who have had to leave, or people who went to work for other teams; I’ve seen the effect they have. A lot of it is just such common sense stuff for folks like us who work in community, but a lot of it is uncommon elsewhere. You don’t realize how much uncommon wisdom you have until you work in community for a while, then you go outside of it but can still apply some of your knowledge, and you realize, “I’m some kind of brain genius.”
I’ve been in charge of this team for a while, but I haven’t always had the ability, organizationally, to help out with anybody’s fate. We were all just rolling together, and I was the guy in charge, and that was it. Now that I am in a position of a little bit more leadership around the newsroom, I don’t want it to be one of those things where we’re all together and then I rise up and out and I’m like, “See you all later.” I want to be able to get people as far along as I can. There are going to be bumps along that road, but I’ve had the opportunity to move up a little bit.
Other people deserve to have it too, because, having worked with them, I know how good they are. I’m not necessarily better than anybody at anything, besides the depth of my dimples or something. I just feel like everybody ought to have that opportunity.
[00:48:32] Patrick: I love that, Bassey. It’s been a really great conversation. Always a pleasure, always enjoy talking with you on the podcast and off the podcast. Thanks for making time.
[00:48:40] Bassey: Great being here.
[00:48:42] Patrick O’Keefe: We have been talking with Bassey Etim, community editor for the New York Times. Follow Bassey on Twitter @basseye. That’s B-A-S-S-E-Y E.
For the transcript from this episode plus highlights and links that we mentioned, please visit communitysignal.com. Community Signal is produced by Karn Broad, and Carol Benovic-Bradley is our editorial lead. If you celebrate it, happy Thanksgiving. We’ll see you next time.
Your Thoughts
If you have any thoughts on this episode that you’d like to share, please leave me a comment, send me an email or a tweet. If you enjoy the show, we would be so grateful if you spread the word and supported Community Signal on Patreon.