Social media platforms like Twitter, Facebook, and YouTube have, for many, become essential platforms for consuming information, staying connected to the world at large, and providing a source of income. The rules that govern access to such networks profoundly impact users’ ability to remain fully integrated within modern society.
The terms of service of social media platforms, which are largely unaccountable to public oversight, now attempt to answer longstanding questions typically adjudicated by the Supreme Court: where does free speech end, and incitement to violence begin? And who gets to decide what information is acceptable for public consumption?
On January 6, 2021, an angry mob sympathetic to the President, enraged over election fraud claims, stormed the U.S. Capitol building. Twitter responded by deactivating the account of President Donald Trump for violating its terms of service. The violation? Inciting a riot that claimed the lives of five people. The deactivations of social media accounts didn’t stop with the president’s.
The largest tech companies deactivated thousands of social media accounts and went so far as to take smaller social networking platforms offline entirely. The justification was to prevent the further spread of misinformation that could lead to more violence.
Why Parler is disappearing from the internet
Parler - the conservative social media platform that brands itself as a "free speech" alternative to sites like Twitter…
While Parler bore the brunt of the blame for providing a forum for the election fraud disinformation that inspired the rioters, it was in fact Facebook and YouTube that hosted the majority of election fraud conspiracy-related content.
Parler backlash overshadows Facebook's major role in fueling Capitol riot, watchdog groups say
The far-right social media platform Parler has shouldered much of the blame for last week's Capitol riot - and may…
While many right-wing-affiliated accounts and groups were removed from Facebook, YouTube, and Twitter, the crackdown didn’t stop there: groups of differing political ideologies also found their accounts and pages deactivated or removed.
If the very same tech companies now cracking down on misinformation provided the platform for that misinformation, how will they decide which content is or isn’t allowed in the future? And what role should entities unaccountable to public oversight have in deciding which speech the public is allowed to consume?
Given that social media companies are private institutions, no citizen has an inherent “right” to use them. Each company, therefore, may refuse service to anyone it wishes, and doing so should not be considered a violation of free speech.