Jun. 2, 2022
Twitter could become a safe haven for cyberbullies
If Elon Musk ends up owning Twitter, it could be like the fox guarding the henhouse. Musk has an appalling track record of inappropriate, hostile and injurious posts, and he is unlikely to crack down on the very behavior he models. Lawmakers, policy experts and parents have bemoaned the online victimization of children and other vulnerable individuals, but the person potentially taking Twitter’s helm could be one of the worst cyberbullies on record.
Regulation of social media platforms is contentious at best, but there appears to be remarkable consensus around cyberbullying. When a Facebook whistleblower testified before Congress last October about the horrific impact of this activity on children, legislators were shocked and appalled. If children deserve protection, why not everyone who is subjected to cyberbullying?
Musk has pledged to make Twitter an open and free forum for the exchange of ideas, no matter how controversial or injurious, in effect rolling out the red carpet for cyberbullies. And current law, specifically Section 230 of the Communications Decency Act, provides him with plenty of protection. When Section 230 was enacted in 1996, social media startups needed protection from lawsuits so they could build robust hosting services. Today, the combined market cap of Facebook, Apple, Amazon, Netflix, and Alphabet is an astounding $7 trillion. These companies don’t need extraordinary protections; they need extraordinary accountability.
Twitter, like other social networking platforms, has been in the crosshairs of regulators and legislators for the past two years. The public debate around free expression and censorship ramped up when COVID skeptics, election deniers and conspiracy theorists hijacked the major platforms to further their agendas. As the level of acrimony, misinformation and threats of violence rose during the final months of the 2020 presidential campaign, Twitter and Facebook belatedly took steps to stanch the dangerous expression, including banning certain users.
That debate appears to be coming to a head, with New York’s attorney general launching an investigation into social media companies whose platforms were used to stream, promote or plan the terror attack in Buffalo that killed ten and wounded three. At the same time, the U.S. Court of Appeals for the 5th Circuit reinstated a Texas law, HB 20, that forbids social media websites with more than 50 million monthly U.S.-based users from limiting access to posts made by Texans on the basis of “viewpoint.”
The Texas law was temporarily blocked by the U.S. Supreme Court on May 31, but it will be reviewed once more by the 5th Circuit. If that court upholds the law, it will undo important safety mechanisms that are part of the present system. Despite the high incidence of problematic posts, social media companies do a considerable amount of content moderation. Facebook reports that from October to December 2021, it took action against terrorism content 7.7 million times, bullying and harassment 8.2 million times, and child sexual exploitation material 19.8 million times.
Spam, cyberbullying or posts advocating violence may not meet the legal test for incitement, but that doesn’t mean they must be tolerated or facilitated. Social media platforms like Twitter and Facebook are uniquely positioned to put an end to cyberbullying and hate speech, yet they continue to be protected by a cyberwall that no longer serves any valid purpose.
When he blocked the Texas law that was upheld on appeal, U.S. District Judge Robert Pittman wrote, “HB 20 prohibits virtually all content moderation, the very tool that social media platforms employ to make their platforms safe, useful, and enjoyable for users.”
In contrast to the 5th Circuit’s decision, on May 23 a panel for the 11th Circuit Court of Appeals unanimously upheld a Florida judge’s injunction blocking application of SB 7072, which would have restricted content moderation by social media companies. The panel held that the law violated the companies’ First Amendment rights, but it let stand part of the law that requires them to disclose their content moderation criteria.
In the 1940s, the Supreme Court ruled that a state could not, consistent with the First and Fourteenth Amendments, impose criminal punishment on a person distributing religious literature on the sidewalk of a “company town” (Marsh v. Alabama, 326 U.S. 501 (1946)). Twitter, Facebook and other social media companies are not “company towns”; they are private businesses free to decide with whom they do business. They can choose which voices to amplify and which to suppress.
At the same time, acts that would be criminal if done in person – assault, battery, harassment – should be equally sanctioned if they are done online. It is already unlawful to post child pornography or content that infringes on others’ copyrights; cyberbullying should be a no-brainer. In a 2020 communication to Congress, the Department of Justice outlined proposed changes to Section 230, including immunity carve-outs for certain malicious content. Among the carve-outs was one for “cyberstalking.”
It should not be a stretch to add a carve-out for cyberbullying, which lies along the cyberstalking spectrum. Until that happens, however, it will be up to the Elon Musks of the world to draw a line in the sand, and the likelihood of that happening is slim to none. Social media companies won’t take action against cyberbullying until their immunity shield is pierced.