JF Law | Junior Editorial Board
Social media offers astonishing opportunities for self-expression. In its permeation of modern life, it has revolutionised and redefined the means by which individuals communicate with one another. Anyone with access to the Internet now has the opportunity to disseminate an opinion with greater immediacy and global reach than has ever before been feasible. Social media has thus become the most accessible platform for free speech that the world has seen. The creation of this universal discussion forum has elicited a torrent of conversation, involving anyone with the desire to speak with conviction to social media’s listening ear. Indeed, “letters to the editor” may eventually be relegated to the realm of pre-Internet necessities – those desirous of expressing their opinions to the masses need no longer ponder whether those opinions will be deemed worthy of publication. They may opt instead to exploit the self-publication offered by social media, armed with their right to “free speech” as a bulwark against controversy.
As social media has become a pivotal vehicle of modern-day expression, the restraints that its operators place – or refrain from placing – upon freedom of speech inevitably provoke contention. A corollary of the globalised nature of social media is that it is now easier for content to cross cultural and geographical borders. This has sparked issues pertaining to the jurisdictional legality of the cross-border sharing of certain information. For example, states with stringent laws regarding hate speech may adopt a different approach to offensive social media content than would countries with a more liberally legislated “freedom of expression.” The approach taken by social media giants to accommodate such variations has been to define a global policy for the regulation of user-generated content, which applies in all jurisdictions in which the website is accessed, whilst simultaneously nuancing the availability of content in each jurisdiction in order to comply with domestic legislation. This is exemplified by Facebook’s attitude to blasphemy – blasphemous material does not violate Facebook’s self-governing “Community Standards,” but it is restricted in countries wherein it would violate local legislation. In the Irish context, although the offence of blasphemy was recently given statutory footing in s 36 of the Defamation Act 2009, it is unclear how stringently Facebook adheres to this statutory prohibition. In particular, this author can find no reported cases of blasphemous material being removed from social media in Ireland. It is respectfully submitted that where domestic law is silent, or offers only vague guidance on the nuances of such distasteful content, an onus is placed upon social media operators to strike a balance between their venerated accommodation of free speech and the regulation of undesirable content.
I. Elonis v United States
The limitations of free speech on social media were recently examined in the US Supreme Court case of Elonis v United States. Following separation from his wife, Anthony Elonis, under the pseudonym “Tone Dougie,” began to post misogynistic and chillingly violent “rap lyrics” on Facebook, his ominous tirades pertaining to his wife, co-workers, a pre-school class, and an FBI agent. When his wife succeeded in obtaining a protection-from-abuse order against him, Elonis responded with a Facebook post that read “[f]old up your [protection-from-abuse order] and put it in your pocket/Is it thick enough to stop a bullet?” He also expressed his desire to “make a name” for himself, stating that there were “[e]nough elementary schools in a ten mile radius to initiate the most heinous school shooting ever imagined/ And hell hath no fury like a crazy man in a Kindergarten class.” Subsequent to this post, an FBI agent visited Elonis at his home. In response to her visit, Elonis took to Facebook once more, composing a post entitled “Little Agent Lady” in which he described the self-restraint he had required not to “[p]ull [his] knife, flick [his] wrist, and slit her throat.” These vehement posts were peppered with references to the legality of his actions – in the home of the First Amendment, Elonis steadfastly cited his freedom of expression. He included “disclaimers” that his work was “fictitious,” with no intentional “resemblance to real persons,” and also reasoned to a concerned Facebook commenter that his writing was merely “therapeutic.”
Elonis was initially sentenced to forty-four months of imprisonment and three years of supervised release, on the premise that his posts constituted a “true threat.” It was held that his intention was irrelevant, and that the threatening nature of his posts was to be determined on the basis that “a reasonable person would foresee that the statement would be interpreted… as a serious expression of an intention to inflict bodily injury or take the life of an individual.” The Supreme Court, however, overturned this conviction. It held that the “reasonable person” test for negligence was not sufficient to secure a criminal conviction, and that “awareness of some wrongdoing” was necessary. This requisite mens rea was not adequately proven, and Elonis’s conviction was reversed. The Court deemed it unnecessary to adjudicate on any First Amendment contentions in this instance.
Justice Thomas summarised the difficulties posed by this decision when he stated in his dissent that it “throws everyone from appellate judges to everyday Facebook users into a state of uncertainty.” US law is now unclear as to what mental element a statement must carry before criminal sanctions may be imposed for making a threat. This in turn means that Facebook has only a very blurred precedent to follow in determining which potentially threatening posts should be removed, and which may remain. The website’s “Community Standards” assert that it “remove(s) content and may escalate to law enforcement when (it) perceive(s) a genuine risk of physical harm, or a direct threat to public safety. You may not credibly threaten others, or organize acts of real-world violence.” The Elonis case has arguably set an extremely high standard for this perception of credibility – if the intensity of Elonis’s threatening posts was not sufficiently credible to incriminate him, one recoils at the thought of what level of credibility must be reached to do so. Considering that Facebook prides itself on “defending the principles of freedom of self-expression,” and bearing in mind the reverence with which many US citizens view their First Amendment rights, it is unlikely that Facebook will want to adopt a position against threatening posts that is more restrictive than that manifested by the law. The decision has possibly created a dangerous platform that “would grant a license to anyone who is clever enough to dress up a real threat in the guise of rap lyrics, a parody, or something similar.”
The implication of the Elonis decision may be to give Facebook free rein within the United States in determining whether online content as violent and disturbing as that posted by Anthony Elonis is permissible. The Court’s refusal to pass judgment on the First Amendment issue has left the matter entirely up in the air – and has left it to Facebook to decide, in instances of legal dubiousness, what should or should not be perceived as threatening. Facebook may therefore essentially establish its own online boundaries for free speech in the US, because the domestic law has failed to set a concrete precedent.
A recent arrangement between Germany and a number of social media outlets signals that country’s reluctance to allow social media sites the same discretionary power within its jurisdiction. In an attempt to combat hate speech against refugees, German authorities reached a legal agreement with the social media giants Facebook, Google and Twitter. This agreement expressly requires that domestic law, rather than the outlets’ usage policies, be applied in the review and removal of posts containing hate speech. Critics of the policy have dubbed it an “enforcement of political correctness,” lamenting this new restriction on their freedom of expression. Proponents of the arrangement, however, are arguably reflecting on the country’s history of minority suppression, deeming the policy a welcome prevention of the hateful crowd mentality that could be fostered more easily in the current digital era than it was in the past.
Germany’s position regarding the interaction of social media policies and domestic law is not a novel one. In 2012, Twitter complied with requests from the German authorities to shut down the account of a Neo-Nazi group called Besseres Hannover, the members of which were charged with inciting hatred and forming a criminal organization. However, in an interesting affirmation of its stance on free speech, Twitter did not shut down the account entirely, merely rendering it inaccessible to German IP addresses. The account is still accessible online to those outside of Germany, on the basis that German police did not have the jurisdiction to ban the account overseas. In a tweet clarifying the incident, Twitter’s chief lawyer confirmed the company’s desire to act as a liberal platform for free speech, whilst simultaneously complying with the specific jurisdictional requirements of each country in which it operates. Twitter’s “Abusive Behaviour Policy” asserts that “offensive content is tolerated as long as it does not violate the Twitter Rules or Terms of Service.” If an account is not deemed to violate such terms, it therefore remains accessible in jurisdictions wherein Twitter has not been made explicitly aware of its illegality. One commentator noted, however, that “anyone with a little knowledge can get around (the blocking of the account) with a proxy server.” Thus, in its adamancy that free speech be protected as stringently as possible, social media may continue to facilitate, within certain jurisdictions, access to and dissemination of material that those very jurisdictions deem illegal.
The discretionary power often maintained by social media in determining what constitutes freedom of speech has been brought into focus of late, with reference to the controversial campaign policies of US presidential hopeful Donald Trump. Trump recently posted a video to his Facebook page in which he called for a “total and complete shutdown of Muslims entering the United States.” Many users have demanded the removal of this video, arguing that his discriminatory comments constitute “hate speech.” The issue is of a particularly contentious nature in Britain, wherein the Public Order Act 1986 (as amended by the Racial and Religious Hatred Act 2006) deems it an offence to disseminate material that is “threatening, abusive or insulting,” and is likely to stir up racial hatred. It is on this premise that a petition to ban Donald Trump from entering the UK has received over 570,000 signatures, and has been the subject of parliamentary debate.
Facebook, however, seems to have disregarded such legislation, and indeed its own usage policies, in its insistence that the controversial footage shall not be removed from the social media site. Facebook’s “Community Standards” declare its intention to remove any hate speech based on “race,” “ethnicity,” “national origin” and “religious affiliation.” Its internal guidelines provide even more clarification to moderators regarding what should be considered hate speech, in their specific prohibition of “calling for violence, exclusion, or segregation for a protected category,” “degrading generalizations,” and “dismissing an entire protected category.” Indeed, it has been shown that ordinary Facebook users who express the same sentiments as Trump, using the same language, will have their comments removed on the basis that they violate the Community Standards. A spokesperson for the social media site has justified this contradictory practice by maintaining that “[w]hen we review reports of content that may violate our policies, we take context into consideration. That context can include the value of political discourse.” It would therefore appear that in the absence of any explicit governmental request to comply with jurisdictional legislation, Facebook has seen fit to bend its own rules in order to accommodate the dissemination of what may constitute hate speech in the United Kingdom.
Advocates of a liberal approach to the freedom of speech may rightly contend that Facebook’s facilitation of the potentially criminal material in this instance is in the interest of the public good. Citizens arguably have the right to be made aware of pivotal and contentious policies of an individual who is potentially the next leader of one of the world’s most influential nations. A similar contention was raised with the recent amendment of the “Twitter Rules,” which clarified the social media site’s position on the portrayal of corpses. The delicate issue was raised in response to concerns over the rights of the family and the dignity of the deceased in such instances. Whilst the rules specifically provide for the removal of images of death that are of a “gratuitous” nature, the amendment nevertheless prompts consideration of the harrowing and heartbreaking image of the young Syrian boy washed ashore on a Turkish beach, which became symbolic of the refugee crisis. The use of this graphic image was justified by the notion that “among the often glib words about the ‘ongoing migrant crisis,’ it is all too easy to forget the reality of the desperate situation facing many refugees.” It is undeniable that the image in question is extremely distressing; however, if social media decides to curb the dissemination of such content in an attempt to reduce offensive material, it may prevent such dreadful incidents from receiving the coverage and making the impact that they deserve. It is therefore conceivable that social media needs to push the boundaries of politically correct free speech and “tell people what they do not want to hear,” in order to stimulate their engagement in critical human rights issues and increase their awareness of global concerns.
The aforementioned examples have portrayed the possibility that, in instances where domestic law on free speech is not clearly outlined for social media – or indeed in instances where social media’s disregard for jurisdictional legislation is left legally unchallenged – corporations such as Facebook and Twitter may have the discretion to hugely influence the exercise of the right to free speech in the digital era. This has led to calls by some for the development of an “International Law of the Internet.” Such an argument is advanced on foot of Article 19(2) of the International Covenant on Civil and Political Rights. This provision is said to limit states’ abilities to inhibit the cross-border exchange of information, and to offer an “important normative reorientation on individual rights for both domestic and international Internet governance debates.” Although such international regulation would most likely not be all-encompassing – some jurisdictions will inherently refrain from participating, whilst for others the sheer lack of Internet access would render the law inapplicable – the desirability of unifying the freedom of speech across borders is nonetheless apparent. The coalescence of this area of law could facilitate greater international contribution to issues of international significance, and would allow for broader participation in a “system of culture creation.” As the role of social media in the practice of self-expression becomes increasingly significant, the need for concrete, cross-border regulation of such expression grows accordingly. It remains to be seen how the law will evolve to accommodate and regulate this role.
 Patrick Ford, “Freedom of Expression Through Technological Networks: Accessing the Internet as a Fundamental Human Right”(2014) 32(1) WILJ 142, at 143.
 Jack M. Balkin, “Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society” (2004) 79(1) NYU L Rev 1, at 7.
 Julia Hörnle, Cross Border Internet Dispute Resolution (Cambridge University Press, 2009), at 2.
 See for example “Country-Withheld Content” <support.twitter.com/articles/20169222> (visited 21 March 2016); “Explaining our Community Standards and Approach to Government Requests” <newsroom.fb.com/news/2015/03/explaining-our-community-standards-and-approach-to-government-requests/> (visited 21 March 2016).
 “Explaining our Community Standards and Approach to Government Requests” <newsroom.fb.com/news/2015/03/explaining-our-community-standards-and-approach-to-government-requests/> (visited 21 March 2016).
 575 US (2015).
 575 US (2015), at 6.
 575 US (2015), at 4.
 575 US (2015), at 4-5.
 575 US (2015), at 5.
 US Constitution, Amendment 1: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.”
 575 US (2015), at 2.
 575 US (2015), at 7.
 575 US (2015), at 13.
 575 US (2015), at 16.
 575 US (2015), per Justice Thomas, dissenting, at 2.
 “Community Standards: Helping to Keep You Safe – Direct Threats” <www.facebook.com/ communitystandards> (visited 21 March 2016).
 “Controversial, Harmful and Hateful Speech on Facebook” <www.facebook.com/notes/facebook-safety/controversial-harmful-and-hateful-speech-on-facebook/574430655911054/> (visited 21 March 2016).
 A recent study indicated that 74% of Americans support the First Amendment and the freedoms that it affords. See <www.newseuminstitute.org/wp-content/uploads/2015/07/FAC_SOFA15_report.pdf> (visited 21 March 2016).
 575 US (2015), per Justice Alito, at 6.
 Victor Luckerson, Facebook, Google Agree to Curb Hate Speech in Germany <time.com/4150296/facebook-google-hate-speech-germany/> (visited 21 March 2016).
 Anthony Faiola, Germany Springs to Action Over Hate Speech Against Migrants <www.washingtonpost.com/world/europe/germany-springs-to-action-over-hate-speech-against-migrants/2016/01/06/6031218e-b315-11e5-8abc-d09392edc612_story.html> (visited 21 March 2016).
 Laura Scaife, Handbook of Social Media and the Law (Informa Law from Routledge, 2015), at 159.
 “Abusive Behaviour Policy” <support.twitter.com/articles/20169997> (visited 21 March 2016).
 Nicholas Kulish, Twitter Blocks Germans’ Access to Neo-Nazi Group <http://www.nytimes.com/2012/10/19/world/europe/twitter-blocks-access-to-neo-nazi-group-in-germany.html> (visited 21 March 2016).
 See <www.facebook.com/DonaldTrump/videos/10156387656245725/> (visited 21 March 2016).
 Doug Bolton, This is why Facebook isn’t removing Donald Trump’s “hate speech” from the site
<www.independent.co.uk/life-style/gadgets-and-tech/news/donald-trump-muslim-hate-speech-facebook-a6774676.html> (visited 21 March 2016).
 Public Order Act, 1986, ss 17, 18.
 See <petition.parliament.uk/petitions/114003> (visited 21 March 2016).
 See <hansard.digiminster.com/commons/2016-01-18/debates/1601186000001/DonaldTrump> for the transcript of this debate (visited 21 March 2016).
 “Encouraging Respectful Behaviour: Hate Speech” <www.facebook.com/communitystandards> (visited 21 March 2016).
 Sarah Kessler, Donald Trump Can Post Hate Speech to Facebook but You Can’t <www.fastcompany.com/3054592/donald-trump-can-post-hate-speech-to-facebook-but-you-cant> (visited 21 March 2016).
 “The Twitter Rules: Content Boundaries and Use of Twitter – Graphic Content” <support.twitter.com/articles/18311> (visited 21 March 2016).
 Emily Bell, Twitter Tackles the Free Speech Conundrum <www.theguardian.com/media/2016/jan/10/twitter-free-speech-rules-hostile-behaviour> (visited 21 March 2016).
 Adam Withnall, If these extraordinarily powerful images of a dead Syrian child washed up on a beach don’t change Europe’s attitude to refugees, what will? <www.independent.co.uk/news/world/europe/if-these-extraordinarily-powerful-images-of-a-dead-syrian-child-washed-up-on-a-beach-don-t-change-10482757.html> (visited 21 March 2016).
 George Orwell, “The Freedom of the Press” Times Literary Supplement (1972) <orwell.ru/library/novels/Animal_Farm/english/efp_go> (visited 21 March 2016).
 Molly Land, “Toward an International Law of the Internet” (2013) 54(2) Harv Int’l LJ 393.
 Ibid., at 418.
 Some countries engage in their own stringent jurisdictional gateway filtering and would likely refrain from engaging in any such agreement. One such example is China. Additionally, China is not a party to the ICCPR, which would provide further grounds for its avoidance of any such arrangement. See Molly Land, “Toward an International Law of the Internet” (2013) 54(2) Harv Int’l LJ 393, at 441.
 See Wolfgang Benedek and Matthias Kettemann, Freedom of Expression and the Internet (Council of Europe Publishing, 2013), at 111-115, for discussion of whether Internet access is a human right.
 Jack M. Balkin, “Digital Speech and Democratic Culture: A Theory of Freedom of Expression for the Information Society” (2004) 79(1) NYU L Rev 1, at 6.