Questioned over Facebook’s efforts to stop the spread of false information in the aftermath of the 2017 Las Vegas shooting, a senior Facebook official called to testify before the House Committee on Homeland Security on Wednesday said the company has since improved the way it handles viral acts of bloodshed and terrorism.

“Our systems didn’t work the way we wanted them to after Las Vegas,” said Monika Bickert, head of global policy management at Facebook, when she was pressed about the spread of misinformation and extremist content by Rep. Titus.

On Wednesday, Titus recalled how the Las Vegas shooting was followed by a wave of hoaxes, conspiracy theories, and misinformation, including widely shared Facebook posts that misidentified the gunman and his religious affiliation. “They peddled false information claiming the shooter was associated with some kind of anti-Trump army,” she said.

As evidence of that improvement, Bickert pointed to actions taken by Facebook this spring during the Christchurch terrorist attack, a massacre of more than 50 Muslim worshipers in New Zealand that was streamed in real time over the internet by a shooter using Facebook Live.

Seated alongside Google and Twitter officials on Capitol Hill, Bickert testified that Facebook’s crisis response had “gotten better” since the 2017 shooting, pointing specifically to the company’s handling of the Christchurch attack this March. “With Christchurch, you had these companies at the table and others communicating in real time, sharing with one another URLs and new versions of the video of the attack,” she said, noting that Facebook stopped 1.2 million versions of the video from spreading across the platform.

Copies and edited versions of the video subsequently spread across the site and onto Twitter, YouTube, and other platforms. “One of the things we saw after Christchurch that was concerning was people uploading content to prove the event had happened,” said Nick Pickles, senior strategist of public policy at Twitter.