Secondary School Essay Winner 2022/23
Olivia van Buttingha Wichers
Should private companies, such as social media platforms, be made to
respect the right of freedom of expression?
Early in the digital age, cyberlibertarian John Perry Barlow declared that the internet would become 'a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.' However, as the internet has evolved, hate speech, bullying and harassment on social media have meant that the deletion of posts and the suspension of accounts are all too common occurrences.
From the perspective of the private company, this is perfectly acceptable. Private companies, unlike public bodies, are free to develop their guidelines according to their own mission and values. Users agree to the terms and conditions when accessing the platform, and accordingly the company has the right to remove anything it does not want on its platform – and this is not limited to speech directed towards hate or violence. But when a platform is so pervasive – think Facebook – that there is no meaningful alternative, one can argue that its guidelines should evolve to ensure compliance with human rights. These guidelines should prevent speech that threatens or insults groups based on colour, race, religion, national origin or disability (hate speech) – and not only in the country where the company is established, but everywhere.
Arguably, President Trump’s suspension from Instagram and Facebook is a good example of a company’s guidelines justifying its action. In two of his posts about the Capitol riots, Trump praised and supported people involved in an ongoing riot in which individuals were injured and died, lawmakers were at serious risk of harm, and a key democratic process was disrupted. Taking this into account, along with the continuing threat of violence and disruption in Washington DC, the Oversight Board deemed Facebook’s temporary suspension of President Trump necessary and proportionate. It did so as a justified limitation of freedom of expression: his incitement of violence had violated both the company’s policies and its human rights commitments.
Yet this goes fundamentally against the spirit of freedom of expression. Is that acceptable? How does a company determine what qualifies as hateful or damaging, or judge when an opinion becomes a targeted attack on an individual or group?
Companies that try to do this don’t always get it right. Referencing Facebook again, its guidelines on nudity have suppressed legitimate cultural expression, highlighting that what is acceptable in one culture is not tolerated in another. And its ‘real name’ policy – which aims to ensure you always know exactly who you are connecting with – violates the right to privacy of people who rely on anonymity or pseudonyms to express themselves.
Nevertheless, I believe that private companies should be made to respect the right of freedom of expression, albeit within clearly defined, objective guidelines about what is acceptable and what is not. Striking this balance is the challenge. What counts as admissible free speech is often difficult to establish, making it hard for private companies to draw an objective line that is acceptable to all. In English law, it is common practice to ask what the man on the Clapham omnibus would say: he represents the view of the hypothetical, ordinary and reasonable person. Yet what might work in the UK will not necessarily work elsewhere, because national legal and regulatory frameworks differ between countries and are not always consistent with international standards.
I believe that international reform, global standards, and robust processes can help. Today, the rules of social media companies are ambiguous, convoluted, and obscure. Facebook’s community standards are famously hard to find and consist of thousands of words of fine print. This makes it unlikely that users will understand the regulations, follow them and apply them correctly. Given that human rights law requires that speech restrictions be precise and accessible to all who live under them, social media rules and terms of use clearly need improvement. I therefore propose an external, global source of standards: a single body of transparent rules for private companies to adhere to. And rather than pages of text that can be skipped over with the tick of a box, individuals should be required to watch a short, impactful video explaining what is and is not allowed and why, and warning that breaking these rules can lead to suspension or a ban. Only then would access to a social media platform be granted. There should also be a robust process for people to understand what rule they may or may not be violating: they should understand how decisions are made, what the rules are, and what the basis of adjudication is in any individual case.
To protect freedom of expression, it is also vital to try other interventions that persuade people not to post harmful content in the first place, so that there are no more incidents like what happened to Molly Russell. Molly died in 2017 from an act of self-harm while suffering from depression and the negative effects of graphic online content selected and served to her by algorithms. Yet that content was originally posted by someone. If there were more rigorous screening of identities and people were held accountable for their content, this might not have happened. Facebook’s real name policy strives to achieve this, but it requires very little, if any, validation that you are who you say you are.
In conclusion, I firmly believe private companies such as social media platforms should be made to respect the right of freedom of expression. It is a fundamental right: it promotes tolerance, is part of individual autonomy, and is an essential component of a well-functioning democracy. Moreover, it is enshrined in Article 19 of the Universal Declaration of Human Rights. However, companies should be clear and unambiguous about where the limitations lie, because an unregulated system that can harm and damage is entirely unacceptable. By setting clear, authoritative, and widely endorsed parameters, social media platforms could work towards creating a safer online environment.