Section 230 is at the center of the political debate over internet regulation

There is an old saying: “How can you tell when a politician is lying? His lips are moving.” Nowadays, one might ask, “How do you know when Section 230 is being misunderstood?” The answer: “A politician is talking about it.”

Adopted in 1996, Section 230 was designed to protect Internet speech against attempts at censorship. Its authors, then-Reps. Chris Cox and Ron Wyden, did this by walking a fine line. Their legislative language promotes the development of parental controls and filtering as alternatives to government censorship, and encourages online platforms to host communication free from liability for speech by third parties. Crucially, Section 230 also ensured the ability of online platforms to moderate posts that violate their terms of service.

Later this month, the Supreme Court will consider how to interpret Section 230 in Gonzalez v. Google, a case brought by family members of a victim of the 2015 Islamic State terrorist attacks in Paris. They claim YouTube (owned by Google) “aided and abetted” the crimes by allowing Islamic State to use the video platform to recruit members and communicate its messages. Because YouTube automatically recommends content based on users’ viewing habits, the plaintiffs allege, it contributed to terrorist activities.

Gonzalez v. Google has drawn widespread attention because it represents the first opportunity for the Supreme Court to consider the law, and because Section 230 has been at the center of a broader political debate over Internet regulation for years.

Bipartisan Enmity Toward Section 230

Strong opinions about Section 230 are common on both sides of the aisle. Just before losing the 2020 election, then-President Donald Trump put it plainly on Twitter: “Repeal Section 230!!!” He also issued an executive order that led to a Federal Communications Commission move to “reinterpret” Section 230, and by the end of 2020 he vetoed the National Defense Authorization Act partly because it did not include repeal provisions.

Such animosity does little to distinguish Trump from President Joe Biden, who told The New York Times editorial board before the 2020 election that “Section 230 should be revoked, immediately.” Not much has changed since he took office. The Biden White House used “listening sessions” to make a similar point last fall, and in January he published an op-ed in The Wall Street Journal in which he argued, among other things, that “we must fundamentally reform Section 230.”

But while progressives and conservatives are united in their hatred of Section 230, they attack the law for different, and equally misguided, reasons. As a Bloomberg report put it: “Democrats say too much hate, election meddling and misinformation pass through, while Republicans claim their ideas and candidates are censored.” In other words, liberals generally attack the part of Section 230 that protects online companies from liability for third-party content they host, while conservatives want to weaken the provision that guarantees online platforms’ ability to enforce their own terms of service.

What they have in common is that both sides want to increase the government’s ability to regulate perhaps the most influential medium of communication, a rare example of bipartisan agreement. Progressives suggest amending or repealing Section 230 to give privately owned platforms incentives to restrict inaccurate or harmful content. Conservatives, on the other hand, favor amending or repealing Section 230 to make companies more vulnerable to lawsuits claiming that content conservatives favor is being “unfairly” moderated.

The monster under the bed is of course “Big Tech”—another convenient political label—and the framing of this issue encourages a variety of narratives as to why Section 230 reform is supposedly needed.

One claim is that Section 230 is an outdated law, passed in the mid-1990s when the Internet was just emerging, and that Congress must update it to keep pace with technology and its then-unimagined uses. Another is that Section 230 is a subsidy Congress enacted to foster fledgling Internet businesses that have since become behemoths and no longer need such support. A more cynical version is that Section 230 is another chip in the great Washington game of carrot and stick, one that lawmakers can manipulate to condition behavior, justified by forcing tech companies to “earn” their legal protections. The political logic is crude but usually effective: if you know we can inflict pain, you will do whatever we want.

Few people in 1996 could have imagined what the Internet would become within a generation. At the time, less than 15 percent of the US population even used the Internet. Search engines were just becoming a thing. The term “social media” was still years away from common parlance; Facebook would not emerge for another eight years. The iPhone was more than a decade away from launch, and almost all the platforms that now keep people’s noses glued to their screens were far over the horizon.

We need Section 230 now more than ever

Rather than rendering Section 230 “antiquated,” this dramatic evolution underscores the necessity of the immunity the law provides.

Even in the early stages of the Internet’s development, the first federal appeals court to consider the scope of Section 230 immunity explained, in the 1997 case Zeran v. America Online, Inc., why it provides essential protections for freedom of expression online. The US Court of Appeals for the 4th Circuit observed that service providers, unable to screen each of the millions of postings they may host, “would be faced with” on-the-spot editorial decisions as to whether they risk liability by permitting continued publication, and that notice-based liability would “provide natural incentives simply to remove messages upon notification, whether the contents were [unlawful] or not.”

Simple math dictates the result: If there’s even the slightest possibility that you could be held legally accountable for what you allow people to post on your platform, you don’t risk it.

Time and technology have not changed this essential calculus, except to make it more compelling. Compared to the millions of postings envisioned by the courts that first interpreted Section 230, online platforms must now assess their potential liability risks in the billions. To take just one example, users upload over 500 hours of third-party content to YouTube every minute. That works out to 30,000 hours of new content every hour and 720,000 hours every day.

Of course, these huge platforms use sophisticated algorithms to help screen what is posted, but this fact does not change the underlying logic of Section 230. The bigger the platform, the greater the risk of liability, and the greater the need for protection.

Politicians cannot abide what they see as beyond their power to control. The Internet caught Congress unawares, and it has been trying to play catch-up ever since. The default position for governments confronting any new medium is to find ways to censor it. Congress first adopted a measure to prohibit “indecent” communication online (ironically, as part of the same law that included Section 230), but the Supreme Court declared that provision unconstitutional in 1997. Congress dusted itself off and tried again the next year with the Child Online Protection Act, but it, too, was struck down in 2008 as a violation of the First Amendment.

Section 230 was an exception to the legislature’s reflexive response to any new communication medium, and it was based on a clear principle that the statute described as promoting freedom of expression by preserving “the vibrant and competitive free market that presently exists for the Internet and other interactive computer services, unfettered by Federal or State regulation.” The most successful federal policy decision for the Internet to date, it has been said, was the decision not to control it.

Given this background, it should raise a few red flags that so many of the current proposals to regulate social media and “reform” Section 230 are billed as mechanisms to protect free speech on the Internet.

The Future of Section 230

It’s not that the Internet doesn’t have problems, or that some Big Tech companies haven’t bungled their efforts to manage the flow of online traffic on their platforms. There is real cause for concern when platforms make moderation decisions about what speech is allowed in those forums.

These decisions can be maddeningly opaque and arbitrary, and unless you’re Donald Trump, you probably don’t have the option of galloping off to start your own social media platform. But facing the reality that someone must make those decisions, the question is how to do so in a system dedicated to protecting freedom of expression.

The problem for free speech is not that countless platforms have different ways of interpreting and applying their house rules. It is that governments at various levels are looking for ways to horn in on the business.

Last fall, Twitter chief Elon Musk began releasing, through a network of journalists, what became known as the Twitter Files, detailing efforts by various federal authorities to dictate or push for bans on speakers on issues such as the January 6 insurrection, Hunter Biden’s laptop, Covid policy and a range of other topics. While it is fair to criticize Musk for selectively making this information available to journalists sympathetic to his position, the problem is a serious one. If unexplained moderation decisions by private businesses are a cause for concern, you should really be concerned when the hand behind the scenes is the government’s.

Dozens of bills introduced to amend or repeal the law seek to do overtly what has usually been done covertly: hand the government control over the rules of what is posted online. In some cases, lawmakers introduce bills mainly as threats against Big Tech companies that won’t play ball, just to show them who’s boss. Either way, the goal is to assert government authority, formally or informally, over the most powerful medium in history.

Given the partisan divide, legislative change is likely remote, meaning the potential for immediate changes to the scope of Section 230 immunity rests with the Supreme Court. When the justices consider Gonzalez v. Google later this month, will they view the automated recommendations that algorithms make as an extension of editorial choices about how information is presented, and thereby protected under Section 230? Or will they see such recommendations as falling outside the law’s immunity shield? If the Court decides Section 230 immunity should be narrowed, it will upend the settled expectations established by hundreds of lower court decisions and transform the way online platforms operate whenever they make recommendations.

However the Court construes Section 230 in Gonzalez v. Google, an even bigger challenge to online free speech is likely to come next term, when the Court may decide whether the First Amendment allows Florida and Texas to regulate political speech on the Internet.

The stakes couldn’t be higher. These cases will test the limits of what the Supreme Court meant in Packingham v. North Carolina in 2017, when it warned that courts must exercise “extreme caution” before approving efforts to regulate online speech. They will also test the underlying assumption that motivated passage of Section 230 in the first place: that the Internet flourished because it was unfettered by federal or state regulation.

The alternative would be to leave the future of free speech in the hands of politicians. I shudder to think.

The author of this piece submitted an amicus brief supporting Google in Gonzalez v. Google on behalf of the Chamber of Progress.