This article provides a good overview of the laws governing free expression in the UK: Article 10 of the European Convention on Human Rights (incorporated into domestic law by the Human Rights Act 1998), the Public Order Act 1986, and the recent Online Safety Act 2023.
It explains how the provisions of these three laws have become central to the hate speech debate in the UK.
For a prosecution under the stirring-up-hatred offences in Part III of the Public Order Act to succeed, the prosecution must prove that the language used was threatening, abusive, or insulting, and that the defendant intended to stir up racial hatred or that such hatred was likely to be stirred up; for the later offences covering religious hatred and hatred on grounds of sexual orientation, only threatening language qualifies and intent must always be proved.
This high threshold makes these provisions notoriously difficult to enforce, a point of persistent criticism from advocacy groups.
However, the Public Order Act also contains the more frequently used Section 5, which criminalises threatening or abusive words or behaviour likely to cause harassment, alarm, or distress.
This provision has a much lower legal bar and has been used in a wide array of contexts, from street preachers to online trolls. Free speech advocates argue that it creates a chilling effect, potentially punishing merely offensive expression, while anti-hate campaigners counter that its inconsistent application fails to offer adequate protection to those routinely targeted by vitriolic abuse.
The case of DPP v Collins (2006), in which racist telephone messages left at an MP’s office were prosecuted under section 127 of the Communications Act 2003, confirmed that the likely impact on the targeted group is relevant to whether a message is grossly offensive; even so, the application of these communications offences remains contentious.
The question of how UK law defines hate speech is not straightforward; it’s context-dependent and spread across multiple statutes. There is no single, consolidated “hate speech law,” but rather a collection of offences that address incitement to hatred and other forms of abusive communication.
This fragmented approach creates legal grey areas and inconsistencies, which can be exploited by those seeking to push hateful narratives to the very edge of legality. They can operate in the ambiguous space between what is grossly offensive and what is criminally hateful. The lack of a clear definition often leaves victims without a clear path to recourse.
This legal ambiguity is greatly magnified by the internet, a challenge the new Online Safety Act 2023 attempts to address. The Act represents the most significant legislative effort to regulate digital platforms and hate speech in a generation.
Its core aim is to impose a duty of care on tech companies, forcing them to take more responsibility for the content hosted on their sites. This includes illegal content, such as incitement to violence and harassment, which platforms must remove proactively. It is a direct legislative response to years of platform inaction.
As originally drafted, the Bill imposed specific duties on the largest platforms to address what was termed “legal but harmful” content accessible by adults: material that does not meet the criminal threshold but can still cause significant harm, such as content promoting self-harm or eating disorders.
This category could also cover certain forms of abuse or disinformation which might not amount to criminal hate speech but still cause significant harm.
However, the “legal but harmful” concept was the subject of intense debate throughout the legislation’s passage. Free speech advocates warned it could lead to corporations becoming overzealous censors of controversial or dissenting opinions, effectively privatising speech regulation, and the adult-facing duties were ultimately dropped before the Act passed, replaced by “user empowerment” tools that let adults filter such content for themselves; duties to shield children from harmful material remain. The Act is enforced by Ofcom, the communications regulator.
Existing legal frameworks used to challenge hate speech reflect a society grappling with itself. They are the product of political compromise and ongoing social negotiation about the kind of public square we want.
These laws attempt to draw lines, but those lines are constantly being redrawn by new technologies and shifting political winds.
The legal code provides a set of tools, but it doesn’t resolve the underlying ideological battle. The struggle over speech ethics is fought not just in courtrooms, but in comment sections, on news channels, and in parliament.
This battle is particularly acute where hate speech targets marginalised groups.
Read the full article on Rock and Art.
