Three of New Zealand’s largest telecommunications providers have issued an open letter to social media giants Facebook, Twitter, and Google.
Questions around the ease of uploading and sharing hate-filled and violent content came to the forefront after a white supremacist live-streamed himself on Facebook Live opening fire on a Christchurch mosque last week.
Vodafone NZ chief executive Jason Paris, Spark managing director Simon Moutter, and 2degrees chief executive Stewart Sherriff signed the letter, which asks the platforms to join an urgent discussion at an industry and government level on a solution to the lack of control over content being uploaded and shared on social media platforms.
The letter was titled "A call from the companies providing internet access for the great majority of New Zealanders, to the companies with the greatest influence over social media content".
It was addressed to Facebook CEO Mark Zuckerberg, Twitter CEO Jack Dorsey, and Google CEO Sundar Pichai.
You may be aware that on the afternoon of Friday, March 15, three of New Zealand’s largest broadband providers, Vodafone NZ, Spark and 2degrees, took the unprecedented step of jointly identifying and suspending access to websites that were hosting video footage taken by the gunman related to the horrific terrorist incident in Christchurch.
As key industry players, we believed this extraordinary step was the right thing to do in such extreme and tragic circumstances. Other New Zealand broadband providers have also taken steps to restrict the availability of this content, although they may be taking a different approach technically.
We also accept it is impossible as internet service providers to completely prevent access to this material. But hopefully we have made it more difficult for this content to be viewed and shared, reducing the risk our customers may inadvertently be exposed to it and limiting the publicity the gunman was clearly seeking.
We acknowledge that in some circumstances access to legitimate content may have been prevented and that this raises questions about censorship. For that, we apologise to our customers. This is all the more reason why an urgent and broader discussion is required.
Internet service providers are the ambulance at the bottom of the cliff, with blunt tools involving the blocking of sites after the fact. The greatest challenge is how to prevent this sort of material being uploaded and shared on social media platforms and forums.
We call on Facebook, Twitter and Google, whose platforms carry so much content, to be a part of an urgent discussion at an industry and New Zealand Government level on an enduring solution to this issue.
We appreciate this is a global issue; however, the discussion must start somewhere. We must find the right balance between internet freedom and the need to protect New Zealanders, especially the young and vulnerable, from harmful content.
Social media companies and hosting platforms that enable the sharing of user-generated content with the public have a legal duty of care to protect their users and wider society by preventing the uploading and sharing of content such as this video.
Although we recognise the speed with which social network companies sought to remove Friday’s video once they were made aware of it, this was still a response to material that was rapidly spreading globally and should never have been made available online.
We believe society has the right to expect companies such as yours to take more responsibility for the content on their platforms.
Content sharing platforms have a duty of care to proactively monitor for harmful content, act expeditiously to remove content that is flagged to them as illegal, and ensure that such material – once identified – cannot be re-uploaded.
Technology can be a powerful force for good.
The very same platforms that were used to share the video were also used to mobilise outpourings of support.
But more needs to be done to prevent horrific content being uploaded.
Already there are AI techniques that we believe can be used to identify content such as this video, in the same way that copyright infringements can be identified.
These must be prioritised as a matter of urgency.
For the most serious types of content, such as terrorist content, more onerous requirements should apply, such as those proposed in Europe, including take-down within a specified period, proactive measures, and fines for failure to comply.
Consumers have the right to be protected whether using services funded by money or data.
Now is the time for this conversation to be had, and we call on all of you to join us at the table and be part of the solution.