Donald Trump Jr. recently proclaimed (on Twitter) that freedom of speech was not only dead, but it was ‘controlled by leftist overlords’ after the platform decided to lock his father’s account in the wake of the Capitol Hill invasion.
Facebook went further, removing anything that contained the phrase ‘Stop the Steal’; both acts provoked a fallacious backlash not dissimilar to 2020’s most confusing faux legal tangle: ‘Does wearing a mask infringe on my human rights?’
Since the riot of January 6 (and the consequent actions by Big Tech), some are calling for a repeal of Section 230 of the US Communications Decency Act, which grants tech platforms immunity from civil liability over the publishing, moderation and removal of third-party content.
Others are launching litigation, such as the recent filing by Parler against Amazon regarding the right-wing platform’s removal from the Amazon servers that hosted it.
Aided and abetted by local politicians, the question feeding outrage across the globe is simply, ‘Does a platform have the right to censor its users?’
Personally, I don’t view being banned by Twitter or Facebook (or the removal of Parler) as unwarranted censorship. A great allegory came from David Andreatta this week, writing in the Rochester City Paper:
“(Social media) platforms are no different than a bar, where people convene, exchange ideas, laugh, and in the process, sometimes lie. Bar owners, like social media companies, tend to have a high tolerance for nonsense. But when a blowhard causes trouble in a bar, the bar owner has every right to kick him out, and for good, if need be.”
Enough tweeps liked my Twitter is a bar and Trump is a blowhard barfly analogy that I crafted it into a column. Nothing, though, is as priceless as the analogy of Twitter as a Christian bakery and Trump a gay wedding cake. I was sure to reference that one. https://t.co/PFIqp1S1fV
— David Andreatta (@david_andreatta) January 11, 2021
I believe Trump’s ban (and what has followed) is an issue of perception.
It may not be a political move, but the inconsistencies of policy and enforcement by tech players make it seem that way. It’s also worth noting that we find it hard to trust Silicon Valley, be it because of data breaches, allegedly subverting elections, or the fact that outrage has been good for business.
As the University of Queensland’s Katherine Gelber, Professor of Politics and Public Policy, notes, “The Big Tech companies have staunchly resisted being asked to regulate speech, especially political speech, on their platforms. They have enjoyed the profits of their business model, while specific types of users – typically the marginalised – have borne the costs.”
However, this issue, at its root, is about rampant misinformation and disregarded facts. The boot may feel like it is coming down, but finally these companies seem to be examining how much harm can be inflicted as a result of misinformation.
The visceral nature of the Capitol Hill riot nakedly proved both that relying on users (and indeed, communities) to self-regulate is inadequate, and that false facts can easily lead to real violence.
Back in Trump World, his supporters are claiming that the ban is a wanton violation of free speech, a right enshrined in the First Amendment of the US Constitution.
Unfortunately for Donald’s patriots, the First Amendment does not apply to a private entity determining who can use their service, and how. As Professor Gelber points out: “There is also no free speech argument that guarantees any citizen the right to express their views on a specific platform.”
Hypocritically, these are the same people who want to ensure a bakery can refuse service to an LGBTQ couple, yet claim that Twitter can’t remove Trump on moral grounds. Ostensibly, they want to have their cake and eat it too.
Despite what his followers believe, Trump was not banned from Twitter and Facebook because the platforms believe he single-handedly directed the riot at the Capitol.
He is banned because his divisive, ongoing rhetoric and baseless claims incite fear and mistrust, resulting in violence. Bitterly, the problem continues, as the Capitol Hill riot changed nothing in the minds of those responsible.
Those who believe that the election was stolen continue to believe it.
Similarly, with the rise of QAnon and anti-vaccination propaganda, the suggestion that individuals who have purposefully and repeatedly shared anti-science propaganda via social media, while wheeling out discredited practitioners, are simply ‘asking questions’ is both shameful and dangerous.
The result is a large group in society rejecting COVID protocols (introduced to keep others in their community safe) while simultaneously presenting themselves as (literally) fighting on the people’s behalf.
They’re defending the right to believe disproven information, whatever the cost may be.
These platforms have the right, and duty, to remove damaging individuals. Likewise, they have the obligation to moderate accounts that spread, promote and incite hate speech and propaganda.
This is not censorship. There is a difference between facilitating critical debate and spreading misinformation, and/or engaging in mendacious acts. While I do not believe in censoring opinion, I do believe in halting the spread of disproven and fallacious claims.
So far, Silicon Valley has largely failed, with Joe Biden calling Facebook ‘totally irresponsible’ and Senator Ed Markey, a Democrat, saying the real issue is violence and hate speech, not anti-conservative bias. We know what Donald Trump thinks about that.
A small glimpse into the growing issue: a recent study of collective efficacy and digital polarisation found a 46 per cent increase in online antisemitic incidents in the UK alone. And bear in mind that Facebook investigated only nine per cent of the 366 child exploitation cases reported to it, according to the Tech Transparency Project in 2020.
While these platforms claim to be raising their standards for banning such content and the users behind it, it can be argued their enforcement has been nowhere near as fervent as their approach to Trump.
If these platforms are going to take their obligation seriously and create a safe digital environment for users, then the rules need to be consistent.
Make no mistake, this is a defining moment. It comes with a huge learning curve, but if the platforms manage to properly regulate toxic speech, they have the opportunity to mitigate and rectify the real-world damage caused by digital echo chambers.
Alexandra Tselios is CEO of The Big Smoke Group and commentator on radio and TV