UK’s Online Safety Bill falls short on protecting speech and tackling harms, warns committee
Another UK parliamentary committee has weighed in on the government’s controversial plan to regulate Internet content with a broad-brush focus on ‘safety’.
The Digital, Culture, Media and Sport (DCMS) Committee warned in a detailed report today that it has “urgent concerns” that the draft legislation “neither adequately protects freedom of expression nor is clear and robust enough to tackle the various types of illegal and harmful content on user-to-user and search services”.
Among the committee’s myriad worries is how fuzzily the bill defines different types of harm, such as illegal content, and how it designates harms. MPs call out the government’s failure to include more detail in the bill itself, which makes its impact harder to judge, since key components (like Codes of Practice) will follow via secondary legislation and so aren’t yet on the table.
That general vagueness, combined with the complexities of the chosen “duty of care” approach (which the report notes in fact breaks down into several specific duties: vis-a-vis illegal content; content that poses a risk to children; and, for a subset of high-risk user-to-user services, content that poses a risk to adults), means the proposed framework may not be able to achieve the sought-for “comprehensive safety regime”, in the committee’s view.
The bill also creates risks for freedom of expression, per the committee, which has recommended the government incorporate a balancing test for the regulator, Ofcom, to assess whether platforms have “duly balanced their freedom of expression obligations with their decision making”.
The risk that platforms respond to sudden, ill-defined liability around broad swathes of content by over-removing speech, with a resulting chilling effect on freedom of expression in the UK, is one of the many criticisms raised against the bill which the committee appears to be picking up on.
It suggests the government reframe definitions of harmful content and the relevant safety duties to bring the bill in line with international human rights law, in order to safeguard against the risk of over-removal by providing “minimum standards against which a provider’s actions, systems and processes to tackle harm, including automated or algorithmic content moderation, should be judged”.
Even on child safety, a core issue UK ministers have repeatedly pinned to the legislation, the committee flags “weaknesses” in the bill which, it asserts, mean the proposed regime “does not map adequately onto the reality of the problem”.
It has called for the government to go further in this area, urging that the bill be expanded to cover “technically legal” practices such as breadcrumbing (aka “where perpetrators deliberately subvert the thresholds of criminal activity and for content removal by a service provider”), citing witness testimony which suggests the practice, while not in fact illegal, “nonetheless forms part of the sequence for online CSEA [child sexual exploitation and abuse]”.
Similarly, the committee suggests the bill needs to go further to protect women and girls against types of online violence and abuse specifically directed at them (such as “tech-enabled ‘nudifying’ of women and deepfake pornography”).
On Ofcom’s powers to investigate platforms, the committee argues these need to be further strengthened, urging amendments to give the regulator the power to “conduct confidential auditing or vetting of a service’s systems to assess the operation and outputs in practice” and to “request generic information about how ‘content is disseminated by means of a service’”, with MPs further suggesting the bill should set out in more specific detail the types of data Ofcom can request from platforms (presumably to avoid the risk of platforms seeking to evade effective oversight).
However, on enforcement, the committee has concerns in the other direction: it is worried about a lack of clarity over how Ofcom’s (set to be) very substantial powers may be used against platforms.
It has recommended a series of tweaks, such as making clear these powers only apply to in-scope services.
MPs are also calling for a redrafting of the provisions on so-called “technology notices”, which will enable the regulator to mandate the use of new technology following “persistent and prevalent” failings of the duty of care. They say the scope and application of this power should be “more tightly” defined, with more practical information provided on the actions required to bring providers into compliance, and more detail on how Ofcom will test whether the use of such a power is proportionate.
Here the committee flags issues of potential business disruption. It also suggests the government take time to evaluate whether these powers are “appropriately future-proofed given the advent of technology like VPNs and DNS over HTTPS”.
Other recommendations in the report include a call for the bill to provide more clarity on redress and judicial review.
The committee also warns against the government creating a dedicated joint committee to oversee online safety and digital regulation, arguing that parliamentary scrutiny is “best serviced by the existing, independent, cross-party select committees and evidenced by the work we have done and will continue to do in this area”.
It remains to be seen how much notice the government takes of the committee’s recommendations, although the secretary of state for digital, Nadine Dorries, has previously suggested she is open to taking on board parliamentary feedback on the sweeping package of legislation.
The DCMS Committee’s report follows earlier recommendations, made in December by a parliamentary joint committee set up to scrutinize the bill, which also warned that the draft legislation risked falling short of the government’s safety aims.
The government published the draft Online Safety Bill back in May 2021, setting out a long-trailed plan to impose a duty of care on Internet platforms with the aim of protecting users from a swathe of harms, ranging from (already illegal) content such as terrorist propaganda, child sexual abuse material and hate speech, through to more broadly problematic but not necessarily illegal content such as bullying, or content promoting eating disorders or suicide (which may create disproportionate risks for younger users of social media platforms).
Speaking to the joint committee in November, Dorries predicted the legislation would usher in a systemic change to Internet culture, telling MPs and peers it will create “huge, huge” change in how Internet platforms operate.
The bill, which is still making its way through parliament, targets a broad range of Internet platforms and envisages enforcing safety-focused governance standards via regulated Codes of Practice, overseen by Ofcom in an expanded role that includes incoming powers to issue substantial penalties for breaches.
The sweeping scope of the regulation (the intent for the law to target not just illegal content spreading online but material that falls into more of a grey area, where restrictions risk impinging on freedom of expression) means the proposal has attracted huge criticism from civil liberties and digital rights groups, as well as from businesses concerned about liability and the compliance burden.
In parallel, the government has been stepping up attacks on platforms’ use of end-to-end encryption, deploying rhetoric that seeks to imply robust security is a barrier to catching pedophiles (see, for example, the government’s recently unveiled NoPlaceToHide PR campaign, which tries to turn the public against E2E encryption). Critics are thus also concerned that ministers are trying to subvert Internet security and privacy by recasting good security practices as barriers, with the goal of imposing ‘child safety’ through mass digital surveillance.
On that front, in recent months, the Home Office has also been splashing a little taxpayer cash to try to foster the development of technologies that could be applied to E2EE systems to scan for child sexual abuse material, which it claims could offer a middle ground between robust security and law enforcement’s data access requirements.
However, critics of the bill already argue that using a trumped-up claim of child ‘protection’ as a populist lever to push for the removal of the strongest security and privacy protections from all Internet users, while simultaneously encouraging a cottage industry of commercial providers to spring up and tout ‘child protection’ surveillance services for sale, is a lot closer to gaslighting than safeguarding.
Zooming back out, there is also plenty of concern over the risk of the UK over-regulating its digital economy.
There is concern, too, about the bill becoming a parliamentary “hobby horse” for every type of online grievance, as one former minister of state put it, with the potential for complex and poorly defined content regulation to end up as a disproportionate burden on UK startups versus tech giants like Facebook, whose self-serving algorithms and content moderation fuelled calls for Internet regulation in the first place, as well as being hugely harmful to UK Internet users’ human rights.
Source: https://techcrunch.com/2022/01/24/dcms-committee-report-on-online-safety-bill/