Mr. Obama, Tear Down This Liability Shield

Online trolls have launched another barrage of attacks in the strange, petty little war over “ethics in journalism” we call GamerGate.

Perennial troll targets Anita Sarkeesian and Zoe Quinn caused the latest escalation by testifying to the UN about the toxic effects of online harassment and the need for something to be done about it.

As a result, we’ve seen a wave of bizarre, hysterical conspiracy theories claiming that very soon the Internet will become a centralized service ruled by UN censors with an iron fist. (Something that is logistically, technologically and legally impossible — and if it were possible, would not suddenly happen overnight because of GamerGate.)

It didn’t help that the United Nations was characteristically less-than-competent at drafting a paper to address the issue of online harassment.

Most of the useful things Sarkeesian and Quinn said at the panel were undermined by a slipshod and poorly-thought-out report presented alongside it.

Again, the idea of giving the US government — or any other government — the authority to serve as a centralized “licensing board” with the power to shut down websites it deems abusive is legally and ethically troubling and, more importantly, utterly unworkable.

It’s nearly impossible to apply the same logic by which the FCC regulates the “public airwaves” to Internet services, which exist on privately owned servers and clients that communicate over privately owned cables, phone lines and fiber optics.

And after revelations about the shady actions the federal government is already taking with our data, I doubt any of us are in a hurry to grant the federal government direct power over which websites stay up or which ones shut down.

But by attacking this overly radical proposal in the article I linked above, Caitlin Dewey at the Washington Post conflates two different issues. Contrary to what free-speech absolutist organizations like Wikileaks claim, we do not face a binary choice between creating a centralized regulatory authority and shrugging our shoulders and saying the laissez-faire cesspit that is the modern Internet is none of our concern.

We have, here in the United States, a system by which wronged parties can seek redress from those who wronged them, and those who willfully enabled that wrong, without proactive control by government bureaucrats. It’s one that even ardent libertarians imagine as being part of how their ideal “small government” would work. And it’s a highly American tradition: one that’s been identified as central to American culture since the days of Alexis de Tocqueville.

I’m talking, of course, about lawsuits. Civil litigation. Bringing in the lawyers.

Right now you can’t sue digital platforms for enabling harassment on their services, even if they enable harassment through flagrant, willful neglect. If your harasser is able to take fairly basic steps to keep himself anonymous — and if the platform he chooses enables and enforces that anonymity — then there is literally nothing you or the government can do, even if his actions rise to the level of major crimes like attempted murder.

Closing this loophole wouldn’t require giving the Internet “special treatment” compared to other forms of communication. Nor would it require a sudden, major deviation from the standards of free speech most of the developed world respects.

It would require the exact opposite — it would require the United States to remove a law that specifically mandates special treatment for Internet service providers and platforms that no other communications medium has.

Far from turning us into China or North Korea, it would bring the United States into line with every other developed country in the world, including our close allies in Canada and the UK. And it would remove the competitive advantage that keeps most social media companies headquartered in the US despite the talent and capital available in other nations: a law that makes us a liability haven.

The law is called Section 230 of the Communications Decency Act. It was passed in 1996, when the Internet was still a novelty rather than an integral element of commerce and daily communication for nearly all Americans. It was passed at a time when “doxing” and “swatting” had in fact already happened, but were not yet known by those names, because those practices were only relevant to a small community of self-identified nerds.

It is long past time it was repealed.

Section 230 of the CDA, paradoxically, initially existed to encourage online platforms to be proactive about filtering, blocking and sanitizing content.

It was drafted in response to a 1995 court case, Stratton Oakmont v. Prodigy, in which the online service Prodigy was found to meet the definition of a “publisher” because it was capable of taking down specific message board posts and had in the past done so; it was therefore held liable for a libelous post that it failed to take down.

Section 230 was added to the CDA as part of the law’s overall goal of “cleaning up” obscene material on the Internet. By making it “safe” for online services to filter or block content without incurring legal liability for everything posted on the site, its drafters hoped to spur the advancement of content-filtering technologies, reasoning that keeping the bad stuff off your site could only be good for business in the long run.

In a great historical irony, most of the CDA was overturned as unconstitutional, and the part that remains, Section 230, has taken on the role of preventing online services from cleaning up their content. Because, it turns out, harassing, destructive content is profitable. With the rapid, massive scaling-up involved in Web 2.0, social media companies have decided that as long as they can’t get sued, the costs of enforcing their terms of service outweigh the benefits.

No one in 1996 predicted the 2000s would see the massive influx of “user generated content” that defines Web 2.0. No one foresaw the incredible profitability of a business model based on creating no content at all of your own, but instead monetizing clicks on your users talking to each other.

I’m sure the judge who decided the Zeran v. AOL case never expected that Mr. Zeran’s story would soon become an endemic feature of life in the 2010s.

A troll provoked mass harassment of a random individual through libel and doxing, and AOL was clearly, willfully negligent in refusing to do anything proactive about the trolling, simply because it would not take the time or energy to track down the anonymous troll and disable his account.

At the time, this probably seemed like one of those weird, wacky “only on the Internet” stories that served as a cautionary tale that people shouldn’t “go on the Internet” unless they’re “Internet-savvy”. At the time, preserving the principle of Section 230 must’ve seemed like the important thing.

Now people get doxed every day, and every day SWAT teams are weaponized to destroy property and put people’s lives at risk.

Now “Don’t go on the Internet” is as ridiculous advice as “Don’t use the telephone” would’ve been in 1996, or “Don’t use the mail” would’ve been in 1916 — to sever oneself from Internet services would mean severing oneself from where most social interaction and economic activity takes place.

Social media companies, however much their marketing departments may instruct them to talk a big game about being anti-abuse, have an active financial disincentive to actually be anti-abuse.

We have companies that, whatever their intentions were at first, found that the way to attract big bucks from investors was to demonstrate exponential user growth as early and as rapidly as possible. This means that, unlike the days when online services made money by directly getting users to pay for things, more posts, more tweets, more clicks — more “engagement” — is directly profitable for social media companies no matter what the nature of the engagement is.

Abuse can never have direct, immediate costs as long as the possibility of lawsuits is off the table, and truly robust anti-abuse initiatives break the myth of tech startups being exponential money-printing machines.

The wonderful “scalability” of writing a little bit of code and getting a whole ton of “engagement” in return breaks down once you have serious anti-abuse measures, because unfortunately policing against abuse can still only be done by real human beings. Hence “community management” becomes a job that’s as understaffed and underpaid as the company feels it can get away with.

And now we clearly see that harassment has what anti-discrimination lawyers call a “disparate impact.”

When content curation is more about keeping up superficial appearances than avoiding genuine liability, it can be as shallow as you want it to be: hence the infamous gap between how quickly a celebrity Facebook or Twitter user’s complaints are addressed and how quickly a mere mortal’s are.

White men, who made up most of the visible userbase in 1996, came into Web 2.0 with a sense of intrinsically belonging there; women and minorities, by contrast, get treated as outsiders, blasted with far worse harassment for speaking out and more likely to be brushed off when they complain about it.

To borrow another phrase from discrimination law, the Internet is a fundamentally “hostile work environment” for women and minorities who spend time online, but there’s no entity that can be held responsible for it.

Dealing with abuse becomes part of the hidden tax that anyone who tries to work in media and tech as a non-white-dude ends up paying in time and energy. To switch back into tech jargon, online abuse has become an unending series of denial-of-service attacks aimed at humans rather than machines, and disproportionately targeting women. (To say nothing of literal denial-of-service attacks.)

I remember watching in 2007 as one of the first high-profile harassment lawsuits against anonymous trolls on the “modern” Internet unfolded. The anonymous troll board AutoAdmit catered to an audience of law students, and its users primarily targeted female law students for harassment.

The board was notorious for being a place for trolls to gather and talk shit about people they chose to target for the explicit purpose of ruining their reputation and their lives. The admins had been specifically informed of and were well aware of the damage the abusive posts were doing, but refused to take them down, and did not cooperate at all in seeking to reveal the real identities behind the abusers’ pseudonyms.

If there had been any possibility of carving out an “exception” in case law to the interpretation of Section 230 as a catch-all liability shield, that would have been the time. But it didn’t happen. Section 230 held firm. The admins were dropped from the suit, and afterwards the lawsuit largely fizzled, because it is exceedingly difficult to take someone to court when all you know about them is that they posted under the handle “HitlerHitlerHitler.”

Since then, people have only gotten bolder.

We now have sites like 8chan — refuges for people who find even the notorious 4chan too censorious for them — that openly provide cover for users who dox federal judges. We have violent terrorist organizations like ISIS openly using the Internet as a recruitment tool. We have major sites like Reddit proudly proclaiming themselves to lack not just any legal but also any moral obligation to not participate in a sex crime.

It can’t go on like this. The EFF, an organization I generally respect, put forward a spirited defense of Section 230 in 2012, saying that without Section 230 those wonderful viral-growth services like Facebook, Twitter, and YouTube couldn’t exist in their current form. It goes on to argue that individual bloggers are protected by Section 230 from liability for their comments sections.

It ignores that Facebook, Twitter and YouTube are excellent tools for stalking, harassment, defamation and all manner of harm, that lives have been lost, careers destroyed, money thrown down the drain because of unaccountable users using unaccountable platforms. It ignores that the whole unquestioned “tradition” of the unmoderated comments section has led to a tradition of trolling, vitriol and lies that make the Internet a worse place and make bloggers who host them worse off.

The EFF warns us how much costlier the Internet would be if startups had to pay for lawyers, content managers and filtering tools from the very beginning of their lifespans to protect against potentially fatal lawsuits. It raises the spectre of the bounteous wonderland of a Web 2.0 filled with “free” content going away, replaced by subscription fees or microtransactions.

I would reply that the Web as it currently exists is already costly, tremendously so. The cost is just mostly borne by people the tech world doesn’t regard as particularly valuable: the teenager bullied into suicide; the activist doxed and forced to flee her home; the lawyer whose professional reputation is ruined by libel; the idealistic tech visionary who abandons her career because the daily emotional grind is eventually too much to take.

Section 230 is nothing more or less than an open declaration by the government that it is unfortunate and vulnerable users who have to bear these costs, and — unlike any other kind of publisher, unlike people who print books or print newspapers or air TV shows — the people who dole out the power to instantly publish anything they want online should bear no responsibility or risk. Even if it’s revenge porn, even if it’s a phone number or address, even if it’s an open death or rape threat.

I’m no libertarian, but I was taught that a core value of libertarianism is “personal responsibility”: empowering individuals to seek redress against people who harm them, through the courts or through the market, rather than relying on government regulators to preemptively keep them safe.

I’m not calling for a new law to be passed or a new agency to be created. I’m calling for a law to be repealed. I’m not calling for Internet users to be singled out. I’m calling for the Internet to not be singled out, for the artificial and stupid shield between the Internet and the “real world” that enables the Internet to be a lawyer-free zone and thus a massive unaccountable sewer of abuse to be torn down.

The year is 2015, and for over a decade now things have been going from bad to worse. How much worse do they have to get before we act?

Mr. Obama: Tear down this shield!

Arthur Chu is a lifelong geek who catapulted into notoriety in 2014 for “hacking” the game show Jeopardy! and has been commenting about nerd culture in the digital age ever since.

Why Social Media Companies Aren’t Liable for Abuse on Their Platforms

Of everything I’ve written–and I’ve covered some pretty heavy, controversial topics–I don’t think I’ve ever gotten as much blowback as when I advocated the amendment or repeal of Section 230 of the Communications Decency Act.

For most people Section 230, sometimes called the Good Samaritan Clause, is an obscure piece of legislation, but for those of us who live much of our personal or professional lives online it’s one of the most significant laws on the books. Nearly every problem we have with finding solutions to online abuse can be traced back to this law or to the spirit that lies behind it.

Section 230 of the CDA is, essentially, a declaration of neutrality for platforms. It states that if a company does not actively participate in the creation of content–if all it does is provide a venue for someone to express themselves–then the company is not liable for that content. It doesn’t matter how actionable that content is–how clearly a given utterance constitutes libel, or harassment, or incitement to violence. You can sue the person who said it, if you can track them down, but the service provider–Facebook, or Twitter, or YouTube–bears no responsibility and has no duty to compensate the victims or to take anything down. The operative language, 47 U.S.C. § 230(c)(1), is a single sentence: “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”

It’s pretty easy to see how that fact, by itself, makes abusive online behavior a nearly unsolvable problem. Tracking down and de-anonymizing individual abusers is difficult work: it’s likely to be unrewarding for any lawyer who takes the case, and it’s a low priority for law enforcement. The very nature of crowdsourced harassment–thousands of people piling on a single person, each one contributing a snowflake to the avalanche of abuse–makes taking action against individuals a Sisyphean task.

We aren’t talking about edge cases here: We’re talking about cases of clear-cut widespread defamation that directly harmed people’s businesses and possibly put their safety in danger, from the libel campaign to smear Kenneth Zeran as a domestic terrorist in Zeran v. AOL, the case that defined the concept of Section 230 protection, to the long defamation career of notorious troll Joshua Goldberg, who was only caught because he made the mistake of impersonating an Islamic terrorist–one of the few groups our government actively spends resources looking for.

The argument is always the same–yes, terrible things happen online, and yes, Section 230 makes platforms’ responsibility to keep those things from happening entirely voluntary, but that’s the price of freedom. The clear lines we can draw between the profits social media companies make by generating as much activity and “engagement” as possible and the way those design choices feed abusive behavior–that’s the price of having social media companies in the first place.

In this, Section 230 of the CDA resembles nothing so much as the 2005 Protection of Lawful Commerce in Arms Act, a law much welcomed by gun manufacturers and one that the normally-stalwart progressive Bernie Sanders is now coming under fire for supporting. The PLCAA absolves gun manufacturers of any and all legal responsibility for the deaths and injuries caused by their products. Gun manufacturers thus have no incentive to limit the number of guns on the street or to take measures to prevent their sale to criminals. In fact, they have an incentive to do the opposite: an environment in which gun violence is common, and in which the only realistic response available to people at risk of being shot is to buy their own gun, is, for the gun sellers, ideal.

Such is the price of the Second Amendment and our freedom to bear arms. Note that we’re not even talking about making gun manufacturers pay full restitution for every single unnecessary death caused by firearms, though those of us who lack a religious devotion to the Second Amendment might ask why that would be so unreasonable.

The problem is PLCAA stands in the way of even more moderate solutions. Creating a legal “safe harbor” for gun manufacturers in return for following basic best practices–like having a vetting process to make sure guns are only sold to reputable dealers, like requiring guns to have safety locks, like having robust systems in place to track and trace firearms after they’re sold–is impossible as long as PLCAA protects manufacturers from all liability in the first place. There’s nothing to threaten the companies with, no incentive for them to voluntarily comply–and there’s plenty of incentive not to, given that by necessity any moves to curb gun violence will cost money and will reduce the total number of guns sold.

The issue of online abuse, defamation and harassment is not as immediately life-or-death as gun violence, true. But it’s a similar dynamic. People have brought up, over and over again, best practices that major social media platforms could adopt to limit the spread of defamatory information, to reduce the risk of large-scale harassment and to curb mob behavior.

But all of these would cost money in terms of hiring personnel or spending time working on software fixes. And all of them would, in the short term, reduce “engagement”–they would increase the effort necessary to get more eyeballs on more content as quickly as possible. Even if nasty incidents on a platform harm that platform’s PR and limit its long-term growth–just as increased gun violence and crime make gun manufacturers look bad–corporate executives will still readily sacrifice those long-term interests for short-term growth. That is, after all, how they get paid.

Even the dynamic in which gun proliferation drives more proliferation, as people enter an arms race to protect themselves, has parallels online. As online communities become increasingly pathological, it becomes common for anyone remotely high-profile to join in the game of combatively calling out other people, discrediting them and siccing abuse on them in perceived self-defense. The platforms’ own moderation teams are understaffed, underpaid and inattentive, and legal threats are toothless and laughed off, so many people see no other option but to fight fire with fire–to make it clear that messing with them will cause an ugly backlash, and thereby encourage harassers to find a softer target.

All of this sucks. It sucks because, as always seems to be the case in laissez-faire, unregulated environments, attack is easier than defense–guns are cheaper and more effective at killing than bulletproof vests are at blocking bullets, and salacious, malicious gossip about your enemies will go viral much faster than any defense of yourself–and so, absent any external governing authority, the environment becomes all attacking, all the time.

Neutral tools aren’t really neutral. Tools are force multipliers–absent any external intervention or bias, they amplify whatever power dynamic already exists. Tools that enable violent impulses–be they physical or social–empower those inclined to violence. Yes, guns don’t kill people, people kill people–but guns disproportionately empower people who are killers. Words can be used to bring truth to light or muddle it, to build up communities or tear them apart–but a media governed by nothing but people’s impulses and the logic of the market will always tear down before it builds up.

No policymaker is consciously on the side of the “trolls,” any more than anyone is consciously on the side of petty criminals or deranged mass shooters. But when policymakers decided to be “neutral” and let the logic of the market and human nature take its course, they inevitably empowered the worst actors at the expense of everyone else.

That’s the harsh truth that people who still have faith in neutrality and invisible hands and whatnot have to face: When you design a system, eventually you must take responsibility for the effects that system has. No one else can.

https://wmcspeechproject.com/2016/03/18/why-social-media-companies-arent-liable-for-abuse-on-their-platforms/

Arthur Chu

Voiceover artist. Stage actor. Freelance writer. Public speaker. Professional pot-stirrer and opinion-haver. Oh yeah, and an 11-time Jeopardy! champion.

 
