
Section 230

December 4, 2019

The title of this post is identical to the title of one of the fixes for the internet proposed in Richard Stengel’s informative work, Information Wars. New legislation is needed to create an information environment that is more transparent, more consumer-focused, and makes the creators and purveyors of disinformation more accountable. Stengel traces this section to legislation’s original sin, the Communications Decency Act (CDA) of 1996. The CDA was one of the first attempts by Congress to regulate the internet. Section 230 of this act says that online platforms and their users are not considered publishers and have immunity from being sued for the content they post. It reads: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Congress’s motivation back in 1996 was not so much to shield these new platforms as to protect free speech. Congress didn’t want the government to police these platforms and thereby potentially restrict freedom of speech—it wanted the platforms to police themselves. Congress worried that if the platforms were considered publishers, they would be too draconian in policing their content and put a chill on the creation of content by third parties. The courts had suggested that if a platform exercised editorial control by removing offensive language, that made it a publisher and therefore liable for the content on its site. The idea of Section 230 was to give these companies a “safe harbor” to screen harmful content. The reasoning was that if they received a general immunity, they would be freer to remove antisocial content that violated their terms of service without violating constitutional free speech provisions.

Focus on the year this act was passed. This was the era of America Online, CompuServe, Netscape, Yahoo, and Prodigy. That was a different world, and there was no way to anticipate the problems brought by Facebook. Stengel notes that Facebook is not like the old AT&T. Facebook makes money off the content it hosts and distributes; it just calls it “sharing.” Facebook makes the same amount of ad revenue from shared content that is false as from shared content that is true. Note that this problem is not unique to Facebook, but perhaps Facebook is the most prominent example.

Stengel continues, “If Section 230 was meant to encourage platforms to limit content that is false or misleading, it’s failed. No traditional publisher could survive if it put out the false and untrue content that these platforms do. It would be constantly sued. The law must incentivize the platform companies to be proactive and accountable in fighting disinformation. Demonstrably false information needs to be removed from the platforms. And that’s just the beginning.”

Stengel concludes this section as follows: “But let’s be realistic. The companies will fight tooth and nail to keep their immunity. So, revising Section 230 must encourage them to make good-faith efforts to police their content, without making them responsible for every phrase or sentence on their services. It’s unrealistic to expect these platforms to vet every tweet or post. One way to do this is to revise the language of the CDA to say that no platform that makes a good-faith effort to fulfill its responsibility to delete harmful content and provide information to users about that content can be held liable for the damage that it does. It’s a start.”