Last Friday’s uncontrolled spread of horrific videos of the Christchurch mosque attack, a propaganda coup for those espousing hateful ideologies, raised hard questions about social media. Tech companies scrambled to act in time, given the speed and volume at which the content was uploaded, re-uploaded, and shared by users worldwide.
In Washington and Silicon Valley, the incident crystallized growing concerns about the extent to which government and market forces have failed to check the power of social media.
The failure highlighted social media companies’ struggle to police content on platforms that are massively lucrative yet persistently vulnerable to outside manipulation, despite years of promises to do better.
After the white supremacist live-streamed the attack, the video was uploaded to Facebook, Twitter, YouTube, and other platforms across the internet. These tech companies faced a backlash from the media and internet users worldwide, to the extent that some regarded them as complicit in promoting white supremacism. In response, Google and Facebook provided status reports on what happened when the video was reported, the challenges they faced, and the steps they plan to take to combat such incidents in the future.
In an email to Motherboard, Google said it employs 10,000 people to moderate content across the company’s platforms and products. It also described the process it follows when a user reports a piece of potentially violating content, such as the attack video.
Google further adds that it wants to preserve journalistic or educational coverage of the event, but does not want to allow the video or the manifesto itself to spread through the company’s services without additional context.
At some point, Google took the unusual step of automatically rejecting any footage of violence from the attack video, cutting out the process of a human determining the context of the clip. If, say, a news organization was impacted by this change, the outlet could appeal the decision, Google commented.
“We made the call to basically err on the side of machine intelligence, as opposed to waiting for human review,” YouTube’s Chief Product Officer Neal Mohan told the Washington Post in an article published Monday.
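Neither Google nor YouTube has published how that decision is implemented; as a rough sketch, “erring on the side of machine intelligence” amounts to acting on a classifier’s score immediately instead of queuing borderline cases for a human. Everything below, including the `score_violence` callback, the threshold value, and the queue, is a hypothetical illustration, not YouTube’s actual pipeline.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical threshold: scores at or above it are rejected without
# waiting for a human; everything below still gets human review.
AUTO_REJECT_THRESHOLD = 0.8

@dataclass
class Upload:
    video_id: str
    title: str

def moderate(upload: Upload,
             score_violence: Callable[[Upload], float],
             review_queue: List[Upload]) -> str:
    """Act on the classifier's verdict immediately for high scores,
    accepting some false positives (such as legitimate news coverage)
    that an affected outlet can then appeal."""
    if score_violence(upload) >= AUTO_REJECT_THRESHOLD:
        return "rejected"          # blocked before any human sees it
    review_queue.append(upload)    # low-confidence cases wait for a human
    return "pending_review"

# Toy usage with stand-in scorers; a real system would call a model.
queue: List[Upload] = []
print(moderate(Upload("abc123", "reupload"), lambda u: 0.97, queue))     # rejected
print(moderate(Upload("def456", "gaming clip"), lambda u: 0.12, queue))  # pending_review
```

The trade-off Mohan describes falls out of the threshold: the lower it is set, the more footage is blocked instantly, and the more legitimate coverage ends up in the appeals path rather than the review queue.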
Google also tweaked its search function to show results from authoritative news sources, and it suspended the ability to search for clips by upload date, making it harder for people to find copies of the attack footage.
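Google disclosed only the behavior of those two changes, not the mechanism. As a minimal sketch under stated assumptions, the allowlist of “authoritative” domains, the document fields, and the ranking rule below are all invented for illustration:

```python
# Hypothetical sketch of the two disclosed search changes:
# (1) rank an allowlist of authoritative news sources first, and
# (2) suspend sorting results by upload date, so fresh re-uploads
#     can't be surfaced by recency alone.

AUTHORITATIVE_SOURCES = {"bbc.com", "reuters.com"}  # illustrative only

def search(index, query, sort_by="relevance"):
    if sort_by == "upload_date":
        sort_by = "relevance"  # upload-date sorting temporarily disabled
    results = [doc for doc in index if query in doc["title"].lower()]
    # Stable sort: authoritative sources (False < True) are ranked first.
    return sorted(results, key=lambda d: d["source"] not in AUTHORITATIVE_SOURCES)

docs = [
    {"title": "christchurch attack clip", "source": "example-reupload.net"},
    {"title": "Christchurch attack coverage", "source": "bbc.com"},
]
print(search(docs, "christchurch"))  # the bbc.com result is ranked first
```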
"Since Friday’s horrific tragedy, we’ve removed tens of thousands of videos and terminated hundreds of accounts created to promote or glorify the shooter," a YouTube spokesperson said.
“Our teams are continuing to work around the clock to prevent violent and graphic content from spreading, we know there is much more work to do,” the statement added.
On Wednesday, Facebook also shared an update on how it has been working with the New Zealand Police to support their investigation. It provided additional information on how its products were used to circulate the videos and how it plans to improve them.
Among the questions put to Facebook was why artificial intelligence (AI) didn’t detect the video automatically. Facebook says its AI has made massive progress over the years and now proactively detects the vast majority of the content it removes. But it’s not perfect.
“To achieve that we will need to provide our systems with large volumes of data of this specific kind of content, something which is difficult as these events are thankfully rare,” says Guy Rosen, VP of Product Management at Facebook.
Rosen further adds, “AI is an incredibly important part of our fight against terrorist content on our platforms, and while its effectiveness continues to improve, it is never going to be perfect. People will continue to be part of the equation, whether it’s the people on our team who review content, or people who use our services and report content to us. That’s why last year we more than doubled the number of people working on safety and security to over 30,000 people, including about 15,000 content reviewers.”
According to Motherboard, Google saw an unprecedented number of attempts to post footage from the attack, sometimes as fast as one piece of content per second. Another challenge it faced was blocking access to the killer’s so-called manifesto, a 74-page document spouting racist views and explicit calls to violence.
Google described the difficulties of moderating the manifesto, pointing to its length and the issue of users sharing snippets of it that Google’s content moderators may not immediately recognise.
“The manifesto will be particularly challenging to enforce against given the length of the document and that you may see various segments of various lengths within the content you are reviewing,” says Google.
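Google hasn’t said how it matches excerpts, but a standard way to flag arbitrary snippets of a long known document is shingle fingerprinting: hash every overlapping run of n words in the document once, then measure how many of a post’s shingles appear in that set. The sketch below assumes that approach; the window size and scoring are arbitrary choices for illustration, not Google’s.

```python
import hashlib

def shingles(text, n=8):
    """Yield a hash for every overlapping n-word window in the text."""
    words = text.lower().split()
    for i in range(max(len(words) - n + 1, 1)):
        window = " ".join(words[i:i + n])
        yield hashlib.sha1(window.encode()).hexdigest()

def build_fingerprint(document, n=8):
    """Hash the known document once, ahead of time."""
    return set(shingles(document, n))

def excerpt_score(post, fingerprint, n=8):
    """Fraction of the post's shingles found in the document's set.
    A high score flags a post quoting the document, regardless of
    which segment of its 74 pages the snippet came from."""
    post_hashes = list(shingles(post, n))
    hits = sum(1 for h in post_hashes if h in fingerprint)
    return hits / len(post_hashes) if post_hashes else 0.0

# Usage: score each post against the precomputed fingerprint and send
# anything above a tuned threshold to human review, which preserves
# journalistic or educational context instead of auto-removing it.
```

This also shows why “various segments of various lengths” is hard: a reviewer sees only the snippet in front of them, while the fingerprint set covers the whole document at once.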
A source with knowledge of Google’s strategy for moderating the New Zealand attack material said this can complicate moderation efforts because some outlets did use parts of the video and manifesto. UK newspaper The Daily Mail let readers download the terrorist’s manifesto directly from the paper’s own website, and Sky News Australia aired parts of the attack footage, BuzzFeed News reported.
Facebook, on the other hand, faces the challenge of automatically discerning such content from visually similar but innocuous content. For example, if thousands of videos from live-streamed video games were flagged by its systems, reviewers could miss the important real-world videos where they could alert first responders to get help on the ground.
Another challenge for Facebook, similar to the one Google faces, is that the proliferation of many different variants of the video makes it difficult for its image- and video-matching technology to prevent it from spreading further.
Facebook identified two patterns. First, a core community of bad actors worked together to continually re-upload edited versions of the video in ways designed to defeat its detection. Second, a broader set of people distributed the video and unintentionally made it harder to match copies: websites and pages, eager to get attention from people seeking out the video, re-cut and re-recorded the video into various formats.
In total, Facebook found and blocked over 800 visually-distinct variants of the video that were circulating.
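Facebook didn’t detail its matching technology, but the difficulty it describes is characteristic of perceptual hashing: an edited or re-recorded copy hashes to a value near, but not equal to, the original’s, so matching must tolerate some bit difference. Below is a minimal sketch using an average hash over a downscaled grayscale frame; the hash construction, frame handling, and distance threshold are assumptions for illustration.

```python
def average_hash(pixels):
    """Perceptual hash of a grayscale frame downscaled to, say, 8x8:
    each bit records whether a pixel is brighter than the frame's mean.
    Small edits (re-encoding, overlays, light crops) flip only a few bits."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    h = 0
    for p in flat:
        h = (h << 1) | (1 if p > mean else 0)
    return h

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_variant(frame_hash, known_hashes, max_distance=10):
    """True if a frame is within max_distance bits of any blocked
    variant. An exact-match lookup would miss edited copies; too loose
    a threshold starts flagging visually similar, innocuous footage."""
    return any(hamming(frame_hash, k) <= max_distance for k in known_hashes)

# Toy usage: a lightly edited frame still matches the original's hash.
original = [[10, 200], [220, 30]]
edited = [[12, 198], [221, 29]]  # slight re-encode noise
blocked = {average_hash(original)}
print(matches_known_variant(average_hash(edited), blocked))  # True
```

The 800-plus visually distinct variants Facebook reports would each need their own entry in the blocked set, which is exactly why deliberate re-cutting and re-recording makes matching expensive.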
Both companies appear to be working hard to improve their products and win back users’ trust and confidence.