The Gunman in New Zealand Livestreamed His Killing Spree, And Facebook Could Do Nothing About It
This was not PUBG. Where was all the artificial intelligence (AI) that Silicon Valley keeps harping on about?

When a gunman went on a rampage at two mosques in the New Zealand city of Christchurch, the last thing one expected to emerge from the horrific incident was live footage of the shooting spree. It was not recorded on a phone by a bystander or captured on CCTV cameras at either mosque. Instead, it was livestreamed by the gunman himself, believed to be a 28-year-old Australian, Brenton Tarrant. He shared a manifesto before he drove off, livestreamed the killing of innocent people on Facebook, and the video was then shared by many on Twitter and beyond, all before Silicon Valley could react.

It is reported that he was wearing a GoPro camera, and the footage of him driving up to the mosque, calmly walking in with weapons in hand and then going on a shooting spree is now easily available on the world wide web, including on Facebook, Twitter and YouTube, if you look carefully. At last count, according to reports, 40 people had been pronounced dead in the attack, and many more were injured.

This is where the role of technology is questioned, and rightly so. Questions need to be asked of Facebook, which allowed the 17-minute livestream to be broadcast on its platform for millions to see, millions that would include very young children. Where was all the artificial intelligence (AI) that the boffins in Silicon Valley keep harping on about? The same AI that is expected to take over the world, make everything better and replace humans in almost every profession?

The video has since been taken down by Facebook. But the worst part is that the same video has since been shared on YouTube, is available on Twitter and on various video-sharing platforms for easy viewing and downloading, and, chances are, will remain easily accessible in some corner of the world wide web for years to come.

Incidentally, Facebook has been trumpeting the steps it has taken to clean up the content on its platform. On February 4, in a post titled “What Is Facebook Doing to Address the Challenges It Faces?”, part of its “Hard Questions” series, the company said it now has “over 30,000 people working on safety and security — about half of whom are content reviewers working out of 20 offices around the world. Thanks to their work, along with our artificial intelligence and machine learning tools, we’ve made big strides in finding and removing content that goes against our Community Standards.” Big numbers, and yes, mention AI in anything and everything is expected to be robust, hunky-dory and pristine. But Facebook didn’t leave anything to assumption when it came to self-praise either. The post went on to say, and I quote, “We’re now detecting 99% of terrorist related content before it’s reported, 97% of violence and graphic content, and 96% of nudity.”

Well, okay then. Clearly, these 17 minutes of violent horror were not terrorising enough to be detected by the humans of Facebook and the AI of Facebook. These 17 minutes of violent horror were not violent enough for the humans of Facebook and the AI of Facebook. And these 17 minutes of violent horror were not graphic enough to be noted by the humans of Facebook and the AI of Facebook. No one is saying Facebook is supposed to pre-empt such an attack; that would be outlandish to ask. But if someone, man or machine, working in the colossal empire that is Facebook had cut the cord on this livestream and shut it down, it surely would have been infinitely more responsible behaviour.

Facebook’s own house is not in order. In February, a report by The Verge suggested that content moderators at Facebook are so stressed after viewing graphic and violent content that they are resorting to having sex and doing drugs at work to cope with the rigours of the job. Clearly, this is just not working out for anyone involved. Unfortunately, children can now stumble upon these 17 minutes of horror the next time they open a social media platform, simply because Silicon Valley hasn’t been able to recognise that its much-touted AI is proving useless and that the humans who need to run that AI are themselves struggling to keep up.

This is not new for Facebook. As far back as April 2017, Facebook Live was used by a man to stream how he killed his 11-month-old baby girl, before he killed himself. Since then, there have been numerous livestreams of murders and suicides on the social network’s Live platform.

It is not just Facebook that is to blame. The video has since been tweeted and retweeted many times over on Twitter, another social network that is constantly in the crosshairs of governments and regulators globally for having failed to efficiently police content on its network. Twitter’s usage policy states, “You may not make specific threats of violence or wish for the serious physical harm, death, or disease of an individual or group of people” and also says, “Twitter allows some forms of graphic violence and/or adult content in Tweets marked as containing sensitive media. However, you may not use such content in live video, your profile, or header images. Additionally, Twitter may sometimes require you to remove excessively graphic violence.” This is much like those lengthy terms and conditions documents that none of us bother to read, even though they were simplified in late 2017. Imagine.

In April last year, Twitter claimed that it had banned 274,460 accounts for ‘promoting terrorism’ and that its in-house content moderation systems were doing a great job.

If you thought the retweets were the only thing Twitter is guilty of not checking, wait till you hear this. The user @BrentonTarrant shared a lengthy manifesto in a tweet before going on the crazed rampage. It spewed hatred against a particular religion and talked about revenge for the invasions by Islamic rulers in history, the enslavement of Europeans and the thousands of European lives lost to terrorist attacks.

If you first thought this was a video from the popular video game PUBG while scrolling your Facebook timeline, you probably wouldn’t have been wrong to assume so in a perfect world. But this isn’t a perfect world, and these were real people being shot at point-blank range by a crazed gunman carrying some really sophisticated bullet-spraying hardware. To summarise the 17-minute video: the gunman drives up to a mosque, calmly gets out of the vehicle, picks out his weapon of choice from the boot of his car, strolls inside the mosque and starts shooting at the worshippers inside. In the middle of all this, he even calmly reloads his weapon while standing over his victims. It just didn’t seem real.

What is real, though, is the collective failure of the Silicon Valley biggies to deal with real issues. And this 17-minute video is testament to that fact.
