Section 230 and the Freedom to Remove Objectionable Content
Section 230 is a vitally important law for online community builders in the U.S. That’s why we’ve consistently talked about it, and the growing threat to its existence, on Community Signal.
The volume of legislation proposed to amend Section 230 is increasing rapidly, with six bills introduced in September and October alone. These bills would impact online communities small and large – not just Big Tech.
Whenever new Section 230-altering legislation is proposed, Jess Miers analyzes it. Jess works as a legal policy specialist at Google while finishing up a law degree, and she joins the show to talk about what’s on the horizon.
Jess and Patrick discuss:
- Why legislators are so focused on Section 230 right now
- Trends from the bills that are on the table
- Regulators’ efforts to stop communities from moderating things that aren’t illegal
Our Podcast is Made Possible By…
If you enjoy our show, please know that it’s only possible with the generous support of our sponsors: Vanilla, a one-stop-shop for online community, and Localist, plan, promote, and measure events for your community.
Big Quotes
Why are there so many Section 230 bills right now? (4:15): “If I have to be really cynical about this, we’re in an election cycle. Doing something about Section 230 looks good for regulators that are trying to get reelected. It seems like everybody has fallen out of love with the internet, both on the regulatory and user side. Section 230 is just one of those things that’s easy to come up with a proposal for and generate a lot of excitement around.” –@jess_miers
More honesty from Big Tech could lead to more understanding from legislators (22:09): “If Big Tech was a little bit more forthcoming about [the moderation] challenges they face, maybe we wouldn’t have such a blind spot in the regulatory process when they’re coming up with content moderation restrictive bills.” –@jess_miers
Are you sure you can’t increase your moderation, trust, and safety budget? (22:27): “There is an issue [when] a company has problems around content moderation and they talk about how they’ve reached a certain limit or they couldn’t hire anyone else and yet, they report record revenue. That’s a problem because sometimes you’re placing artificial restrictions on yourself with staffing because you made this much money and you decided you didn’t want to spend it.” –@patrickokeefe
Should Section 230 be amended? (34:10): “Amending Section 230 isn’t going to fix any of the issues that we have with the internet. It’s just going to make it harder for us to fix those issues in the long run.” –@jess_miers
About Jess Miers
Jess Miers is a third-year law student at Santa Clara University School of Law, where she studies internet law and policy. During law school, Jess was a legal intern for Twitter and TechFreedom, a technology policy think-tank based in Washington, D.C.
Currently, Jess is a research associate for the UCLA Institute for Technology, Law, and Policy, where she speaks and writes about intermediary liability law. Her scholarship primarily covers Section 230 and content moderation. Most recently, Jess has also become a full-time policy specialist at Google.
Related Links
- Sponsor: Localist, plan, promote, and measure events for your community
- Sponsor: Vanilla, a one-stop-shop for online community
- Anette Beebe on Community Signal’s episode about Trump’s Executive Order
- Jess Miers’ website
- Google, where Jess is a legal policy specialist
- UCLA Institute for Technology, Law and Policy, where Jess is a research associate
- Text of Don’t Push My Buttons Act (see Jess’ redline documenting proposed Section 230 changes)
- Text of House version of the EARN IT Act (Jess’ redline)
- Text of See Something, Say Something Act (Jess’ redline)
- Text of Protect Speech Act (Jess’ redline)
- Text of Online Content Policy Modernization Act (Jess’ redline)
- Stop Enabling Sex Traffickers Act (SESTA)/Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA)
- Text of Section 230
- Zeran v. America Online Inc.
- Fair Housing Council of San Fernando Valley v. Roommates.com, LLC
- Text of Stopping Big Tech’s Censorship Act (Jess’ redline)
- Text of Platform Accountability and Consumer Transparency (PACT) Act (Jess’ redline)
- Facebook will pay $52 million in settlement with moderators who developed PTSD on the job, about the Selena Scola, et al. v. Facebook, Inc. case (via The Verge)
- Fair Labor Standards Act
- Children’s Online Privacy Protection Act (COPPA)
- Jess Miers’ Medium blog
- Jess Miers’ Section 230 talk at TEDxSantaClaraUniversity
Transcript
[00:00:04] Announcer: You’re listening to Community Signal, the podcast for online community professionals. Sponsored by Vanilla, a one-stop-shop for online community and Localist, plan, promote, and measure events for your community. Tweet with @communitysignal as you listen. Here’s your host, Patrick O’Keefe.
[music]
[00:00:28] Patrick O’Keefe: Hello, and thanks for making time for Community Signal. Section 230 is an essential law for online community builders in the US. That’s why we’ve continually talked about it on the show. Emboldened by Donald Trump’s executive order in May, lawmakers are ramping up efforts to undercut the law and we’re seeing an incredible volume of legislative activity. To help make sense of it, we’ll be talking with Jess Miers, a policy specialist at Google who has made Section 230 her specialty.
I feel like a Bernie Sanders meme here, but I am once again asking you to care about Section 230. But before we get started, I’d like to thank Maggie McGary, Marjorie Anderson, and Jules Standen for being among our supporters on Patreon. If you’d like to join them, please head over to communitysignal.com/innercircle for more info.
Jess Miers is a third-year law student at Santa Clara University School of Law where she studies internet law and policy. During law school, Jess was a legal intern for Twitter and TechFreedom, a technology policy think tank based in Washington DC. Currently, Jess is a research associate for the UCLA Institute for Technology, Law, and Policy where she speaks and writes about intermediary liability law. Her scholarship primarily covers Section 230 and content moderation. As of recent, Jess is also a full-time policy specialist at Google. Jess’ opinions are her own and do not represent her previous or current employers. Jess, welcome to the show.
[00:01:49] Jess Miers: Hey, thanks for having me.
[00:01:50] Patrick O’Keefe: It’s great to have you on. I don’t think I get any better Section 230 bill analysis from anyone I follow or any outlet than I do from you.
[00:01:59] Jess Miers: [chuckles] I’m honored. That’s awesome. Hopefully, I can be of some help here. There’s a lot to go over.
[00:02:06] Patrick O’Keefe: If you think my takes on Twitter sound smart, I think that Jess probably deserves a little bit of the credit for that because oftentimes, when a new bill comes out, I’ll check and see if she’s done an analysis or at least performed a red line and I’m sure I’m not alone in that.
[00:02:20] Jess Miers: [chuckles] Yes, it’s interesting. I started doing the red lines earlier this summer just because we were getting hit day after day, week after week, and it was getting really hard to track. It just became my role.
[00:02:32] Patrick O’Keefe: You talk about getting hit. In the space of a month we’ve had, I think, six Section 230-altering bills floating around. These include the Don’t Push My Buttons Act from Senator John Kennedy and Representatives Paul Gosar and Tulsi Gabbard, Ann Wagner and Sylvia Garcia’s House version of the EARN IT Act, the See Something, Say Something Act from Senators Joe Manchin and John Cornyn, Representative Jim Jordan’s Protect Speech Act, Lindsey Graham’s Online Content Policy Modernization Act, and the Online Freedom and Viewpoint Diversity Act from Graham alongside Senators Marsha Blackburn and Roger Wicker.
The last two of those are the same thing as far as 230 goes but that’s six bills in a month. I can’t remember the last time that we saw this much legislative volume regarding Section 230. Is this as unique as it seems?
[00:03:26] Jess Miers: I started getting into Section 230 right around FOSTA-SESTA. I want to say, for me, what I’ve tracked, this is definitely unique. I imagine with the FOSTA-SESTA bills, there was a lot of work around that and a lot of press around that but for where I started with Section 230, this is absolutely unique. I have not seen to this extent this much regulatory action on the law.
[00:03:51] Patrick O’Keefe: Why now? Is there a reason why so much is being thrown at the wall right now?
[00:03:56] Jess Miers: Yes, I think there have been discussions about this might be a way to delay the Supreme Court confirmation hearing. There’s been discussions about doing everything but working on the dumpster fire that is our country. Section 230 is also low hanging fruit. If I have to be really cynical about this, we’re in an election cycle, doing something about Section 230 looks good for regulators that are trying to get reelected. It seems like everybody has fallen out of love with the internet both on the regulatory and the user side. Section 230 is just one of those things that’s easy to come up with a proposal and generate a lot of excitement around.
[00:04:37] Patrick O’Keefe: It’s a quick little law. Whenever I see these red lines, it’s like, this will grow the bill by 10%, 20%, 40% just by adding a little bit of text to it, but I want to talk about these laws specifically to discuss trends that we’re seeing in legislation related to 230 and areas where lawmakers are focusing in.
The first one that jumped out at me is how 230 is sometimes being used as a vector to attack issues that aren’t really tied to the purposes of 230, like antitrust, security, and privacy concerns. For example, the Don’t Push My Buttons Act conditions liability protection on data collection, while the EARN IT Act does a similar thing with providers who offer end-to-end encryption. Why are they trying to tack these issues onto 230 as opposed to another existing law or even treating them as a standalone problem?
[00:05:30] Jess Miers: I think mainly because Section 230 is an easy lever for regulators to pull. Section 230, we often describe it as the lifeblood of the internet, the internet’s backbone, it’s the law that created the internet and so regulators know that not only Big Tech, but small internet companies, but I guess mostly Big Tech rely on their Section 230 immunity.
Basically, it’s almost being used as leverage. If you want to keep Section 230, if you want to give, as Nancy Pelosi calls it, if you want to keep this gift or this subsidy that Congress gave you way back in the 90s, then you’re going to do X, Y and Z, regardless of whether it has anything to do with content moderation or the intention behind Section 230 in the first place.
[00:06:13] Patrick O’Keefe: Which is just incredibly frustrating because these are areas where I have concerns. I do have antitrust concerns, I do have data concerns. I think Facebook has mishandled data in some cases and I don’t think that’s a minority position in America if you polled Americans, but just call it the Facebook Modernization Act and leave me the heck out of it.
[00:06:35] Jess Miers: I think a lot of companies would prefer if all the regulators would start targeting Facebook. It’s an interesting point that you make. I feel like a lot of the issues that users and even regulators see when it comes to the internet is it comes down to usually one not so great actor. A lot of these laws almost seem like it’s the Facebook law or the Google law, instead of targeting what the underlying issues are with the internet.
It’s let’s go after the company but unfortunately, what these bills really miss is that when you’re going after the Facebooks and Googles, you’re also targeting the rest of the internet. Just the way the internet works. There’s no way to go after Big Tech without also setting the bar at a level that small tech can’t reach and it eventually closes small tech out of the market entirely.
[00:07:23] Patrick O’Keefe: Let’s take a moment to talk about our generous sponsor, Localist.
Localist is an event marketing platform that aggregates, automates, and analyzes all of your virtual, in-person, and hybrid events so you can grow and engage your community. Their platform allows you to centralize your event calendar, automate email and social media promotion, and measure and analyze the ROI of your events. Localist integrates with your existing tools and you can even predict future event success using their Event Reach and Event Score features. Find out more at localist.com/communitysignal and take your event strategy virtual with Localist.
Another trend I’ve noticed is there is this phrase in 230, “otherwise objectionable.” Communities can remove content that is “otherwise objectionable” and to me, this makes a lot of sense because most online communities, for the most part, deal with issues that don’t qualify as illegal. As an example, I run a martial arts community and we don’t really deal with much that is illegal, we deal with things that are more personality-based or conflict-based, like a big ego or a tough guy who wants to be rude on the internet.
We remove content that is otherwise objectionable and Section 230 provides for that ability, but this phrasing is being targeted by multiple bills, including the Online Freedom and Viewpoint Diversity Act, the Online Content Policy Modernization Act and the Protect Speech Act as well as the DOJ’s proposal from last month. Why has that phrasing become such a focus?
[00:08:57] Jess Miers: “Otherwise objectionable,” it’s almost like a catch-all, as you described it. It’s interesting, it recognizes that the internet has all of these different online communities and what’s objectionable on Facebook or Google or Twitter is maybe not objectionable to some other online communities, whether we feel that way or not. As your listeners know, there’s a lot of communities out there that harbor speech that you and I might not like, but that’s really core to that community. The intent behind it was to allow these services to moderate as they see fit for their audiences.
Unfortunately, where we are right now, I think for regulators and, again, users, and this is probably to the fault of us just not having a lot of good Section 230 education and advocacy out there, “otherwise objectionable” has become this excuse that regulators think internet companies use to moderate viewpoint-based speech or to not be neutral, even though there never was a requirement that these websites should be neutral. They think it’s being used, for example, to moderate conservative viewpoints.
For example, when they’re moderating under this “otherwise objectionable” standard, that means that they can do whatever they want, toss out any kind of speech that they want, unconstitutional, constitutional, legal, illegal, whatever, and it’s all being pinned on this “otherwise objectionable” provision. The irony in all of it is that content moderation is really actually housed under (c)(1), the provision that simply says that websites are not liable for third party content and has nothing to do with that “otherwise objectionable” line found under (c)(2)(A).
[00:10:21] Patrick O’Keefe: Talk about that more. I know you told me before this show that you felt it would be helpful to have a conversation about (c)(2)(A)’s intended purpose, as, in your words, a backfill to (c)(1), since so many regulators and policy wonks get this wrong. Talk about that a bit.
[00:10:38] Jess Miers: The whole idea with Section 230 (c)(1) and Section 230 (c)(2)(A), so Section 230 (c)(1) basically, like I said before, just says websites are not liable for third party content, if you want to just break it down to that.
[00:10:50] Patrick O’Keefe: It’s a short sentence. It’s a full stop.
[00:10:53] Jess Miers: Right, full stop. We had the Zeran v. AOL case. It’s the seminal Section 230 case. Then the conclusion to that or the holding to that was basically websites are not liable for their editorial publishing decisions. I mean, editorial decisions in that it’s not just they’re not liable for the speech that you and I put on services, but they’re also not liable for the decisions that they make per their editorial discretion so that would be whether Twitter wants to put fact-checking on the president’s tweets or they want to remove content or they want to leave content up or they want to leave content up that’s even violating their terms of service.
That’s still protected by their editorial discretion. There’s a lot of different torts that (c)(1) has been applied to and that the courts have applied the (c)(1) provision to. I want to say it’s like 90% of Section 230 cases have actually been resolved under Section 230 (c)(1). Again, that has to do with not just the underlying speech, but with the service’s actual content moderation decisions.
Where (c)(2)(A) actually comes in is in a really rare situation where a website can’t take advantage of Section 230 (c)(1), and that might be because they maybe contributed to or helped create the content in part, like the Roommates case, but they are still actioning or removing the content as well. They’ve had a hand in both. That’s where the (c)(2)(A) comes in and says, “Okay, well, if (c)(1) doesn’t apply in rare situations, then you can use (c)(2)(A) as a backfill to apply to your editorial discretion and apply to these content moderation decisions as well.”
It’s incredibly rare. We don’t have a lot of cases under (c)(2)(A), but for some reason, regulators think that the courts are obsessed with turning cases over under (c)(2)(A) for content moderation. That’s never been the case and that’s never been the intent of (c)(2)(A) in the first place.
[00:12:45] Patrick O’Keefe: There is this push to make the wording so specific, and it’s a classic community problem. I don’t like community guidelines that say, “Don’t be a jerk,” because I don’t think it’s helpful and we all have a different idea of who’s a jerk. You need some detail, you need to describe the tenets of the community, but you don’t want to paint yourself into a corner by being too specific.
So many people manage small online communities that don’t really see content that fits into these categories. For example, terrorism and terroristic threats. I’ve reviewed millions of pieces of content and I don’t think I’ve ever seen anything that I would qualify as terrorism. I’ve seen self-harm, I’ve seen depression, I’ve seen content that suggests someone might become a victim of suicide but I haven’t seen terrorism. I haven’t seen pharmaceutical trafficking outside of maybe some super lame spam. The big platforms do see these things because they have the volume where they attract those folks but it’s a big law that applies to a lot of people and that specificity can be really harmful.
[00:13:52] Jess Miers: Yes, absolutely. You hit it right on the head. Anyone that is trying to break the internet down to specific types of content, in my opinion, really doesn’t understand the internet and doesn’t understand the intent behind Section 230 in the first place. There’s a common discussion about how Section 230 needs to be amended today because when it was written all the way back then in the ’90s, it assumed for a ’90s Internet, and it’s outdated now. It needs to be updated.
I think that kind of viewpoint really takes the stance that we are done innovating on the internet, that there is nothing more that’s left. There’s no new types of content, no new types of communities that are left. We are done and we are ready to decide what content needs to be built into these bills. I think that’s very unfortunate because the authors of the bill when they wrote it back in the ’90s, I think they knew the Internet has this amazing, innovative potential and not just the internet, but the people who are using the internet have really creative potential as well.
I find it absolutely unfortunate when they’re taking out or trying to list or break the internet down to issue types. I’ll point out one more interesting issue type in this, you brought up self-harm. A few of these bills have actually written self-harm into Section 230 (c)(2)(A). There’s some nuance there in that just because it says self-harm doesn’t actually mean that it’s going to apply to content that, for example, promotes self-harm.
You might have the live video or the unfortunate live Facebook stream of somebody who kills themselves on camera. It’s awfully morbid and that would fall under self-harm. Would it fall under self-harm if you’re now filming, which is a trend happening on TikTok right now, if you’re filming kids reacting to the Facebook live suicide, would that really fall under self-harm?
An even more obvious one is telling somebody to kill themselves. Does it cover promoting self-harm or does it only cover the actual act of self-harm itself? These are things that these regulators when they’re writing these bills, I don’t really think they think about.
[00:15:51] Patrick O’Keefe: Something that really jumped out to me about the Protect Speech Act is there’s this whole section on good faith removal. I’ve seen this sort of thing elsewhere, where they’re trying to establish or further establish some standard of good faith. There is a sentence in particular that stuck with me. The Protect Speech Act says, in order to maintain your 230 protection, you must be a provider that “does not restrict access to or availability of material on deceptive grounds or apply terms of service or use to restrict access to or availability of material that is similarly situated to material that the service intentionally declines to restrict.”
“Similarly situated” seems like such a loaded phrase to me. If I want to ban Democrats and their content from my forums but I choose to allow Republicans and their content, I now need to allow the Democrats to post in the forums if I’m a GOP-focused forum, for example. I’m not sure if I’m reading that right, but that phrase, “similarly situated,” it scares me.
[00:16:53] Jess Miers: I think you’re right to be a little worried or frightened by that. I think the Protect Speech Act, it’s got some undertones of the president’s executive order from back in May, I believe. If you’re going to carry one user’s political affiliated speech, then you’re going to have to carry another user’s. It’s got this flavor of an internet fairness doctrine. We saw this with Loeffler’s Stopping Big Tech’s Censorship Act as well. We saw that earlier this summer, and there seems to be this trend of viewpoint neutrality or must-carries for specific political types of speech that keeps cropping up as well.
[00:17:31] Patrick O’Keefe: Let’s pause for a second to talk about our generous sponsor, Vanilla.
Vanilla provides a one-stop-shop solution that gives community leaders all the tools they need to create a thriving community. Engagement tools like ideation and gamification promote vibrant discussion and powerful moderation tools allow admins to stay on top of conversations and keep things on track. All of these features are available out of the box, and come with best-in-class technical and community support from Vanilla’s Success Team. Vanilla is trusted by King, Acer, Qualtrics, and many more leading brands. Visit vanillaforums.com.
Another thing I realized when I was reviewing all of the September and October bills is that not a single one tried to exempt smaller communities and platforms from any of these restrictions. Not even the See Something, Say Something Act, which has this weird, suspicious transmissions thing in it that sounds like it’s coming out of the KGB. I don’t like Senator Josh Hawley’s attempts at Section 230, but at least he included something along those lines. Is there a massive blind spot here with legislators understanding the volume of communities this protects?
I’ve worked with big brands and small ones, companies that had money and didn’t. For the ones that don’t, there is no real option of staffing up. There’s no outsourcing, no 24-7 option. Sometimes it’s just a teenager in his room like I was. Is this something that will be recognized if these go farther down the line or is it simply a blind spot?
[00:18:57] Jess Miers: I think there’s a few good points that you make here. I’m going to push back, though, on exempting smaller communities. At the start, that sounds good, and I believe the PACT Act is one of them that says this is not going to apply to websites that have X amount of revenue and X amount of users. There’s a few major issues with scaling back these laws for smaller companies, and you mentioned that antitrust is something that worries you. One major one is that it incentivizes these smaller companies to actually sell out before they get to a point where they reach the threshold of liability for some of these acts.
Who are they selling out to? Well, they’re selling out to Big Tech. You’ve got Google, Facebook, Amazon. That’s just one really good way to ensure that Big Tech swallows our market entrants, because the market entrants, at the end of the day, want to make money and they don’t want to have the incredible liability risks that these bills put on them. That’s the antitrust, market competition issue with that.
Another issue with trying to carve out smaller communities and smaller services is that it’s real hard to actually write that measurement. For example, Twitter is a small company. You may not think that, but Twitter, by scale of employees, by number of offices, is actually a relatively small company. Another one that’s very similar is Reddit. Really small number of employees, probably not an exorbitant amount of revenue, definitely not making the revenues that Google and Facebook make, but tons and tons of users on both of those.
Do you call Twitter and Reddit a big company because they’ve got tons of users and lots of user generated content or do you call them a small company because they don’t have the employees and the resources to be able to manage the content in the first place?
I think trying to scope out what a big and a small company is is incredibly difficult, and for the bills that have tried to do it, I think it’s raised a lot of questions and issues, and even more blind spots that we hadn’t thought about when we tried to carve them out entirely. Again, those carve-outs really point out kind of what the intent is, and that is to go after Facebook or go after Big Tech.
I want to say yes, there is a blind spot when it comes to regulators, but at the same time, I’ve been giving this a little bit more thought, mostly in the new role that I started at Google and just from what I’ve seen in industry doing content moderation and helping to moderate communities as well. Yes, regulators do have a blind spot, but I think these bigger companies that do content moderation could actually be a little bit more forthcoming about their content moderation practices and processes. Also, not just the way that they moderate content, but their struggles as well.
I think civil society and academia do a really good job of saying, look, there’s so much content on the internet and it’s really hard for these companies, but the talking point continues to be that the smaller companies are going to struggle and the bigger companies, they have the money to do it. In reality, it’s actually really tough for these bigger companies too, especially when you’re talking about trust and safety departments that don’t really serve to make the big companies money.
They might be resource-constrained by the number of people that they’re allowed to employ in trust and safety, or by the types of tooling that they have. There’s a lot of difficulties there that I think, if Big Tech was a little bit more forthcoming about the challenges that they face, maybe we wouldn’t have such a blind spot in the regulatory process when they’re coming up with these kinds of content moderation restrictive bills.
[00:22:22] Patrick O’Keefe: Let’s just say over a billion dollars. I’m just kidding, but there is an issue that occurs where a company has problems around content moderation and trust and safety and they talk about how they’ve reached a certain limit or they couldn’t hire anyone else or there’s this problem or that problem and yet, they report record revenue. That’s a problem because sometimes you’re placing artificial restrictions on yourself, like with staffing, because hey, you made this much money and you decided you didn’t want to spend it, but what’s your business? What does it really relate to?
Facebook has been roundly criticized because of their handling of moderators. There was the Scola v. Facebook case recently around how they handled moderators in relation to mental health. There’s a lot of bad stuff happening there. As far as 230, I would rather it not be adjusted, but there are laws like the Fair Labor Standards Act, which does have a minimum threshold and applies to organizations making over $500,000 a year. There are some other parts of that minimum as well, but it can relate to volunteer forum moderators. Frankly, most communities don’t make $500,000 because it’s a big internet.
Google knitting forums, Google fly fishing forums, there’s a lot of small communities out there, but hey, that fly fishing forum, even if it’s just one person and it’s a small community, fly fishing is a business that makes money and if someone on that forum criticizes someone’s fly fishing lure, that doesn’t stop a fly fishing company that has 10, 20, 30, 40, $50 million from bullying that small fly fishing community. That just underscores the importance of 230 and of the SPEECH Act internationally and other pieces of legislation. That said, other than what we’ve discussed today, are there any other legislative trends that we should keep an eye on?
[00:24:08] Jess Miers: One that I am keeping an eye on specifically, and we saw this with the recent DOJ proposal, we see it with EARN IT and, I believe, See Something, Say Something, there’s so many of them, is the creation of more exceptions in Section 230. Now, as people know, Section 230 already has exceptions for federal criminal law, for intellectual property, and the ECPA. We’re seeing a trend where regulators are really honing in on that Section 230(e) and saying, okay, well, let’s make an exception for child sexual abuse material or let’s make an exception for federal civil law.
I think that’s getting to be incredibly problematic in that what we’re doing is we’re not necessarily amending the major provisions of Section 230, which are Section 230 (c)(1) and (c)(2) on the moderation side, but what we’re doing is we’re Swiss cheesing the law in a way, and I say that meaning, the more exceptions that we add, like an exception for state AGs to prosecute these services, the more exceptions that we carve out of Section 230, the more we’re just generally undermining the law.
The entire point of Section 230 and, again, the holding that we had in the Zeran case, was that it’s supposed to be this national, reliable standard for internet companies to be able to rely upon. With every single exception that keeps getting added, that standard and that shield is really starting to break down. That’s one that I’m specifically following as well.
[00:25:32] Patrick O’Keefe: Talk a little bit more about the implication of that Swiss cheesing. The obvious example that might come to mind for some people is SESTA/FOSTA because there were reports of numerous communities that served sex workers shutting down rather than dealing with that. Child sex trafficking is a little different because we don’t really have communities for that that we would treat as legitimate. I’m sure there are already laws against that and in federal laws, that would come into play before Section 230 would enter the equation. What are the implications of that Swiss cheesing?
[00:26:05] Jess Miers: I’m going to actually, instead of calling it child sex trafficking, I’m going to call it child sexual abuse materials like the internet calls it. I’ll explain why. It’s a difficult explanation to have especially when we’re talking about child safety but it’s important that we’re calling it out. What essentially ends up happening as you keep building in these exceptions is that the internet services are eventually going to just say like, “Okay, well, we’re going to write to the lowest common denominator here.”
Child sexual abuse material, this is an interesting exception because we’re not talking about child sexual abuse imagery, which is more known as child pornography. Child sexual abuse material could really be anything. It’s a broad scope. What I mean by that is, take Omegle, for example, the popular service where you get paired up with a stranger, either over video, sort of a Chatroulette, or over text.
The facilitation or the possibility that a child anonymous user might be matched up with an adult user, could that be considered child sexual abuse material? Any situation in which a child could interact with an adult that might actually fit under the provision of CSAM. Like I said before, what’s eventually going to happen is that these internet services, they’re going to have to come all the way down to whatever the lowest common denominator is and if that’s CSAM, then they’re going to have to harden their services against basically any child being able to use the service at all.
Maybe what that means is that’s an application where we’re doing age verification. If we’re doing age verification on services, then you have a boatload of privacy complaints. There’s always going to be some trade-off. I think the point on the privacy note is that, again, the more that these services are trying to restrict, and doing it to a point where they’re almost trying to identify their users, the more we’re going to be giving up, whether that’s that we’re no longer seeing the creation of material or we’re just giving up our identification information just to use the internet.
[00:27:52] Patrick O’Keefe: As you say, that’s a tough one for sure. I think the SESTA/FOSTA use case is a lot more easily explainable. The child sex trafficking one is a little harder. With children, we do have COPPA, the Children’s Online Privacy Protection Act, and that should theoretically help to some degree, but people lie. People lie about their age all the time. With COPPA, you essentially wouldn’t allow anyone below 13 without the right permission from their parents. 13 to 18 might have different rules but again, people lie all the time.
I don’t know where this would go because it’s just draft legislation, obviously. There are different ways that it could be implemented. You could ban people below a certain age. There could be some sort of federal ID thing. You can get an ID before you can drive. You don’t have to have a driver’s license, you can have a photo ID as a minor and maybe that could tie into some database. I don’t know who would do that. Are there any other issues where you see this Swiss cheesing beyond SESTA/FOSTA and child sex trafficking? I feel like there might have been something around pharmaceuticals. That was a thing a while back, but I might be remembering incorrectly.
[00:28:56] Jess Miers: No, the opioid issue is a common one. There is an act, I want to say it’s the See Something, Say Something Act, where I believe Senator Manchin is trying to combat that unlawful content point. I believe it’s a Section 230(e) exception where websites are required to report the suspicious transmissions that you were talking about. If you actually go look at the bill and not just the Section 230 redline, a suspicious transmission gets defined as anything that could constitute a major crime taking place on the service and, within major crime, they talk about drug trafficking, sex trafficking, et cetera. Again, anything that falls under a major crime.
There is likely a drug exception that’s built in with that See Something, Say Something. Federal civil law is another interesting carve-out. I believe that one came up with the recent DOJ proposal. That one would just, I guess, make it easier for any of our regulatory bodies to be able to bring an action as well. For example, there’s been a lot of disputes between Facebook and Housing and Urban Development over their housing advertising as well.
I think the main carve-outs are the main discussions that we’re really seeing, it’s going to be focusing on that child sexual abuse material lever. We already have SESTA/FOSTA, we’ve got again, federal civil law and then anything else. I think there was actually one for the unconstitutional speech at one point. I want to say that was the Loeffler bill that was saying all constitutional speech should be hosted on the service as well.
[00:30:20] Patrick O’Keefe: I’m also sensitive to the ability to put up content notices around content posted by members. Twitter’s fact-checking of Donald Trump is an example of this. It triggered his executive order earlier this year. There are use cases where it’s helpful to put up notices around content as a way of helping members to better understand something or to better use your platform or community.
When I had Anette Beebe on this show, she mentioned how Twitter’s fact-checking notices weren’t really a 230 issue, but a First Amendment issue, since Twitter was adding speech, not modifying his speech. But this focus on fact-checking and on content notices is something that has caught my attention because, hey, I just want to reserve the right to be able to add context without changing the content, if there’s a good reason for doing so.
[00:31:13] Jess Miers: Yes. I think it’s actually really interesting and it’s always made me think a little bit, whenever these bills come out that are targeting the fact-checking or the notice type of content moderation, if these regulators are ever thinking, like, “What is the alternative?” Because we’ve actually come really far from this binary content moderation where you have two options, right? You can leave the content up or you can take the content down.
When President Trump put out his executive order, my immediate thought was, okay, well, would you have rather they deleted the content entirely or would you have rather they banned your speech or suspended your account or kept you from being able to tweet? The irony here is that we’ve actually come so far in content moderation and in innovations, in content moderation that we’re at a point now where you kind of get to have the best of both worlds.
You get to have your speech still up and at the same time, we’re educating our users about where to get maybe better sourced information, but what they’re not doing is they’re not blocking the tweets, his base could still read them and they can also still see the other side of the coin. I’ve always just found the anti-fact-checking moderation bills really interesting because again, think about what the alternative is. We could just go back to deleting the speech entirely.
[00:32:27] Patrick O’Keefe: Given your personal brand being so strongly tied to Section 230, I’m sure you’ve heard this question before, but is there anything about 230 that you would like to change or at least that would be worth a look if we were playing fantasy lawmaker?
[00:32:41] Jess Miers: Yes, I have gotten that question a lot. My short answer is no because I am staunchly in favor of not looking at Section 230, but really drilling into what are the underlying problems with the internet and with user speech itself? I have never seen, nor do I think there will be an amendment to Section 230 that will somehow curb the way people use the internet.
I really think, when you break it down to the way in which we use the internet, the way we think, the things that we write, and who we are as humans, there’s no amendment that’s ever going to change or curb the way in which humans use the internet. I do think there are better things that we can be doing, though, that are not related to Section 230. I’ll use Facebook as an example in that there’s a lot of things going on with the way that Facebook moderates content that could probably be a lot better.
Maybe it’s in having more mental health research so that your content moderators are able to do more quality moderation, or it’s really thinking about how to respect your users, like making user respect the forefront of the way in which you moderate content.
Maybe not just at Facebook, but to all the Big Tech again, maybe we really do need to start thinking about being a little bit more transparent about our moderation processes and practices just so that people not only understand it and regulators not only understand it but also so that we can collaborate across industry and get better at solving the problems that people actually care about. Amending Section 230 isn’t going to fix any of the issues that we have with the internet. It’s just going to make it harder for us to fix those issues in the long run.
[00:34:20] Patrick O’Keefe: Yes. Thank you so much for spending some time with us today. I’ve really enjoyed the conversation.
[00:34:24] Jess Miers: Yes. Thank you so much for having us. It’s my favorite topic and it’s obviously incredibly relevant right now. This was great. Thank you.
[00:34:31] Patrick O’Keefe: We have been talking with Jess Miers, legal policy specialist at Google. For more on her work, visit cntrlaltdissent.com/cv. That’s C-T-R-L A-L-T dissent.com/cv. Read Jess’ Section 230 blog at medium.com/@JessMiers and we’ll link to her TEDxSantaClaraUniversity Talk on Section 230 in the show notes. For the transcript from this episode, plus highlights and links that we mentioned, please visit communitysignal.com. Community Signal is produced by Karn Broad and Carol Benovic-Bradley is our editorial lead. I’ll see you next time.
[music]
Your Thoughts
If you have any thoughts on this episode that you’d like to share, please leave me a comment, send me an email or a tweet. If you enjoy the show, we would be so grateful if you spread the word and supported Community Signal on Patreon.