For too long, Twitter, Facebook and Google have skated over their legal and moral obligations, playing down their role as disseminators of information and opinions by claiming they are not publishers like traditional media companies, but simply platforms for third-party discussion and debate.
That has allowed them to offer protection to anonymous users, some of whom use social media to abuse and harass others, while largely being shielded themselves from the legal implications of that content. The recent High Court decision in the Dylan Voller case emphasised this, ruling that media organisations which posted their stories on Facebook were deemed to be the publishers of any third-party comments that appeared under their posts, and therefore responsible if those comments defamed another person.
Prime Minister Scott Morrison has announced a crackdown on social media trolls. Credit: Alex Ellinghausen
In that environment, we welcome Prime Minister Scott Morrison’s intention to draft laws that attempt to clean up, in his words, the “wild west” of the internet. At the centre of his proposal is a process to force social media companies to reveal the identity of problematic anonymous users, exposing those individuals to defamation actions.
The Age has long argued that the likes of Facebook and Twitter are no longer virtual public noticeboards but publishers in the same space as traditional media. They control their platforms with a series of powerful but opaque algorithms and curate posts for profit. They have the financial and technical power to fact-check and moderate speech on their platforms and as such bear at least some responsibility for the content that appears there, even though it is posted by their users.
A recent survey conducted for The Age by research company Resolve Strategic indicated two-thirds of Australian adults back the idea of holding social media companies responsible for posts made on their platforms, while 70 per cent support unmasking anonymous trolls.
So, will the federal government’s planned law do what Mr Morrison claims for it? He has proposed a multi-stage mechanism that gives social media companies some protection from defamation actions if they have a local shopfront to hear complaints and agree to seek consent from anonymous trolls to reveal their identity. If a person does not give consent, complainants would be able to apply for a new Federal Court order forcing platforms to disclose identifying details. If the companies refuse, they would become liable to defend the action themselves as publishers of the offending comments. Mr Morrison has threatened to back early test cases with federal funds.
Legal experts and others have concerns. Professor David Rolph, a defamation expert at the University of Sydney, warns “the effect of the new powers … will be to increase defamation cases being brought in the Federal Court”. Legal affairs reporter Michaela Whitbourn reports confusion in the legal fraternity over how the new law would improve on existing legislation, and suggests the proposal could actually benefit the social media giants “because they could not be held liable for defamatory posts where the person responsible for them is readily identifiable”. It is also unclear how the law would affect accounts based in other jurisdictions.
Then there is a broader issue: some people have good reasons to post anonymously, including some who might otherwise be cowed into silence by the threat of defamation action. Some, such as child sexual assault survivors or whistleblowers revealing government corruption, have moral and in some cases legal reasons not to reveal their identities.
“Anonymity,” says Chris Cooper, executive director of the not-for-profit Reset Australia, “is an important tenet of a free and open internet that protects critics of the powerful, which can hold leaders accountable.”
We can’t help feeling that the government’s real intention in introducing these laws is not to take on dozens of defamation actions against social media trolls, but instead to encourage the social media companies themselves to act more decisively to remove gratuitously offensive and violent content as early as possible, under threat of a complex and messy legal regime if they do not. If this is the case, it raises the tantalising prospect that social media could be rendered a better regulated space and, if not entirely free from bigotry and hate, at least a slightly nicer place to be.