The Varieties of Counterspeech and Censorship on Social Media
The year 2020 was remarkable and unprecedented on many accounts. Among other things, it was a year in which the major social media platforms experimented extensively with new tools and practices to address grave problems resulting from harmful speech on their platforms, notably the vast amounts of misinformation associated with the COVID-19 pandemic and with the 2020 presidential election and its aftermath. By and large, and consistent with the First Amendment value of combatting bad speech with good speech, the platforms sought to respond to harmful online speech through various forms of flagging, fact-checking, labeling, and other counterspeech. Only when confronting the most egregiously harmful types of speech did the major platforms resort to censorship or removal, or to the most extreme response of deplatforming speakers entirely.

In this Article, I examine the major social media platforms' experimentation with a variety of approaches to address political and election-related misinformation on their platforms, and the extent to which those approaches are consistent with First Amendment values. In particular, I examine what the major platforms have done and are doing to facilitate, develop, and enhance counterspeech mechanisms in the context of major elections; how closely these efforts align with First Amendment values; and the measures that the platforms are taking, and should be taking, to combat the problems posed by filter bubbles in the context of microtargeted political advertisements.