The Supreme Court Just Heard 2 Cases That Could Break the Internet

At the heart of the cases is whether social media companies like YouTube and Twitter should be held accountable for their users’ posts.

Humility, thoughtfulness, and restraint are not virtues one can in good faith associate with our current Supreme Court. The past few years have seen a rash of extremist rulings from the conservatives who control the court, with the six archconservatives assuming the power to invalidate human rights, grant new rights to guns and corporations, and win the culture wars for their far-right confederates. The Supreme Court no longer acts as an impartial panel of jurists but as a cabal of rulers eager to foist their worldview upon the rest of us. 

But this week, the nine justices appeared to suddenly remember their limitations. When presented with two cases that could allow the Supreme Court to redefine how social media and the Internet itself function, not only in the United States but all across the globe, the court decided to deliberate more like a court and less like a junta. The upshot was two consecutive arguments during which the justices seemed to drop their ideological armaments and simply wrestle with the cases in front of them. 

That’s refreshing, because the cases themselves are complicated enough without worrying about how Elon Musk might manipulate them to make eight bucks and reelect Donald Trump. At issue were two deadly terrorist attacks carried out by ISIS—and the attempts by families of some of the victims to hold Internet companies like YouTube, Facebook, and Twitter accountable for giving those terrorists a platform to recruit members to their cause. In the first case, Gonzalez v. Google, families argued that algorithms used by social media companies like YouTube, which suggest additional content to view, helped ISIS spread its message and radicalize potential terrorists. In the second case, Twitter v. Taamneh, families argued that the failure of these sites to take down terrorist propaganda also contributed to the violence that killed their loved ones. 

Taken together, the cases challenged Section 230 of the Communications Decency Act of 1996. By acclamation at this point, Section 230 is the law that makes the Internet possible. Put simply, it relieves Internet companies of liability for otherwise defamatory or illegal content posted through the use of their services, and instead places liability with the user who makes or posts the content. It might seem like an obvious rule, but Section 230 treats Internet companies fundamentally differently from other kinds of media outlets, like newspapers or book publishers. If The New York Times published ISIS recruitment videos on its website, The New York Times would be liable (absent any First Amendment protections the paper might have). But if the same video were posted on Twitter, ISIS would be responsible, not Musk. 

The reason for this goes back to the era when the law was written. In 1996, Internet companies could not reasonably review and police every stitch of content posted on their message boards or comments sections. Of course, in 1996, you also couldn’t use the Internet if somebody else in your house was on the telephone. Things are different today. Arguably, Internet companies do have, or will have, or could invent, the algorithms necessary to review everything posted on their websites or applications, and they certainly have the ability to do something as basic as taking down posts from terrorists when asked. Gonzalez v. Google seeks to get around Section 230; if it succeeds, Twitter v. Taamneh seeks to punish these companies for their failure to remove terrorist content.

Piercing Section 230 in the ways the families suggest would do nothing short of changing the entire way the Internet and social media function. The thing is, we don’t know which way it would change. Maybe these companies would abandon any content moderation at all, creating a Wild West situation where there are no rules so nobody can get in trouble for violating them; conservatives who get their rocks off by saying the n-word certainly hope for that. But maybe social media companies would adopt Orwellian levels of surveillance and censorship, excluding a lot of innocent and harmless people from their platforms; progressives worried about fascist-sympathizing tech giants wouldn’t want to give them that kind of power. 

More to the point, maybe unelected judges should not be the ones making this decision for our entire society.

During oral arguments, it sounded like the justices were trying to avoid having to make any grand ruling about Section 230 that would change the way the Internet functions. On the first day of arguments, during the Gonzalez v. Google hearing, Justice Elena Kagan delivered what might have been the sanest line of either hearing. While going back and forth with the lawyer for the families about who should get Section 230 protection, she argued that Congress should be the one to make that decision. She said: “We’re a court. We don’t really know about these things. These are not the nine greatest experts on the Internet.”

When I posted that quip online, many responded that the members of Congress are also not the world’s greatest experts on the Internet. That’s true, but it slightly misses Kagan’s point. We can have a society where social media removes “bad” content (however defined), or one where it doesn’t, but we, through our elected representatives, get to choose which kind of society we want to live in. The law is, or should be, agnostic as to which one is “better.” That’s not for courts to decide. 

What courts can decide is what the law already says. And that proper and restrained view of the justices’ role led the court away from Republican senators’ frothing about Section 230 and toward the actual laws about liability for terrorism. Specifically, the court spent most of its time in both cases arguing about the Justice Against Sponsors of Terrorism Act (JASTA), a 2016 law that allows private US citizens to sue anyone who “aids and abets, by knowingly providing substantial assistance,” anyone who commits acts of international terrorism.

Finding liability for Internet companies under JASTA would be just as disruptive to how the Internet works as rewriting Section 230. That’s because the civil penalties under JASTA call for “treble damages” if a person or company is found to have aided and abetted international terrorism, and those kinds of fines could amount to more than even our tech oligarchs can afford. But it’s far from clear that failing to take down ISIS posts (the Twitter case) or having an algorithm serve up one of those videos in a queue of suggested viewing (the Google case) constitutes “substantial assistance” to ISIS. The arguments put the lawyer representing Twitter, Seth Waxman, in the uncomfortable position of having to say that not everything ISIS posts is terrorism… but it’s true. ISIS and ISIS supporters post a lot of things, including cat pics. But if you think there is a bright line between “terrorist recruitment” and “cat pics,” know that ISIS posts pictures of its fighters with cats to soften its image and make it easier to recruit new members. Does failing to take these things down constitute “substantial assistance” to global terrorism or to specific terrorist attacks?

The justices weren’t so sure. Kagan probably represented one pole when she argued that these were “fact” questions that could be sorted out by a jury, which would mean the lawsuits against the social media companies could continue to move forward in lower courts (at great risk to their businesses). Justice Neil Gorsuch, at the other end, seemed to argue that JASTA requires that the assistance be given to the actual people who committed the terrorist act, not merely that a company allowed the terrorists’ organization to avail itself of common tools. His view would stop these lawsuits from moving forward. But all of the justices seemed to be genuinely wrestling with these questions, because “substantial assistance” is a phrase open to a lot of interpretation.

Which brings us back to Congress and its almost stubborn refusal to do its job. Section 230 is an old law that Congress should have updated multiple times as the Internet and social media developed. JASTA is a new law (passed over the veto of President Barack Obama, by the way) that is maddeningly unclear about its key requirement for liability. It’s really not too much to ask Congress to define with clarity what constitutes aiding and abetting terrorism. It’s really not too much to ask Congress to decide whether social media companies should be responsible for their terrorist users. Before we get into the subtleties of how many Josh Hawley jokes I’m allowed to make before he skips away and tells Mommy Musk to banish me to Mars, can we first pin down the rules about terrorism recruitment videos? 

If only we had some sort of system where we could all vote on what kinds of rules and regulations we want to place on Internet companies, and then have the tech giants conform to the rules for the society we collectively decide to live in. That would probably be best. Could somebody ask Tim Cook if there is an app for such a thing?

While we wait for China to reverse-engineer democratic self-government and sell it back to us through TikTok, I have no idea what the Supreme Court will do. For the first time in a long time, I don’t know how the court will rule on these two cases, because for the first time in a long time the court didn’t sound eager to be in a position to remake society. The justices really don’t want to break the Internet. They also don’t know how to fix it.
