“It is unacceptable to treat the internet as an ungoverned space,” Morrison wrote in a letter to Japanese Prime Minister Shinzo Abe ahead of the upcoming G20 meeting in Osaka, Japan in June.
I’ve written to Japanese PM @AbeShinzo as G20 President to have the leaders of the world’s biggest economies ensure social media companies implement better safeguards to ensure their platforms can’t be exploited by terrorists or to spread hate speech. pic.twitter.com/LEQacLqSYi
— Scott Morrison (@ScottMorrisonMP) March 18, 2019
“It is imperative that the global community works together to ensure that technology firms meet their moral obligation to protect the communities which they serve and from which they profit,” Morrison added.
In a report on the incident, Facebook said that fewer than 200 people actually watched the carnage unfold live, but the archived video of the attack, in which 50 people were killed and more than 40 injured, was reportedly viewed some 4,000 times before it was eventually taken down. The first user complaint was lodged some 29 minutes after the attack had begun.
Morrison questioned the social media giants’ ability to police their own platforms, especially in light of such graphic and disturbing content being shared so easily across multiple platforms while a terrorist attack was underway.
The company claims it removed 1.5 million copies of the video of the attack in the first 24 hours, 1.2 million of which were “blocked at upload.” YouTube was also heavily criticized for its perceived failure to adequately quarantine and remove copies of the mosque attack video.
“If they can write an algorithm to make sure that the ads they want you to see can appear on your mobile phone, then I’m quite confident they can write an algorithm to screen out hate content on social media platforms,” Morrison told reporters in Adelaide.
However, Morrison has already received criticism, with some dubbing his call to action “collateral censorship” amid fears of overreaction and knee-jerk regulations.
Whilst I agree that the social media giants should clean up their act (it’s frustrating to report hate speech over and over, only to have them say it doesn’t infringe their abysmally-low standards), you and your gov’t and the MSM have to stop fanning the flames of Islamophobia.
— WillowGhost (@TheWillowGhost) March 19, 2019
This is just an excuse for censoring a media thats not controlled by your donors…..you can’t safeguard a live stream without banning all streams.
— Ray (@Rayrobby11) March 19, 2019
I suppose you don’t need to worry as yours is protected under parliamentary privilege.
— ░C░S░M░ (@csmagor) March 18, 2019
Today you take down people making terroristic threats, tomorrow it could be ‘hate speech’ and the day after its anyone that has the “wrong” opinion. Removing content is a narrow line and you should tread very carefully.
— purlescent🇦🇺🇦🇺 (@BettlesEmpire) March 19, 2019
All hail 1984 https://t.co/XZOBeBmyGD
— Nobody Important (@MrHoffalicious) March 19, 2019
“It is a difficult task to moderate live content,” law professor and algorithm expert Frank Pasquale from the University of Maryland said, as quoted by ABC News.
“It’s not as easy as it’s being made out to be in terms of directly applying advertising algorithms to get rid of forbidden content or horrific content.”
In Facebook’s case, any of its roughly two billion users can initiate a livestream at the touch of a button, yet there is currently no known algorithm that polices livestreamed video the way traditional posts are screened. Facebook already employs thousands of content moderators to sift through flagged videos on the platform, many of whom reportedly experience PTSD-like symptoms.
Morrison said the Australian government was already exploring “practical proposals in this area right now.”