Social Media, the New Megaphone for Violent Perpetrators

A group of men embrace in prayer on Dec. 3, 2015, outside the crime scene in San Bernardino, Calif., where the suspects in the Inland Regional Center shooting were killed. Sean M. Haffey/Getty Images
The Associated Press

PALO ALTO, California—Tashfeen Malik, the woman involved in this week’s Southern California mass shooting, has another claim to notoriety: She’s the latest in a growing line of extremists and disturbed killers who have used social media to punctuate their horrific violence.

A Facebook official said Friday that Malik, using an alias, praised the Islamic State (ISIS) in a Facebook post shortly before or during the attack. Her posting echoes similar bids for attention by other violent perpetrators, including a disgruntled Virginia broadcaster who recorded himself shooting two co-workers and then posted the video online, and a Florida man who killed his wife and shared a photo of her body on social media.

Facebook, Twitter, YouTube and other social media companies do their best to block or remove posts that glorify violence. But experts say it’s an uphill battle, and the advent of new services that let people stream live video from any event will only make the task more challenging.

“Now everyone has the opportunity to talk to a larger audience,” said Karen North, a professor of digital social media at the University of Southern California’s Annenberg School. “If you commit an act and you want people to know about it, you now have a way to promote it.”

Social media didn’t invent extremist violence. But ISIS and similar groups have become deft at using social media to spread their message, both to recruit followers and to threaten their perceived enemies. “They can rapidly and easily identify others who share their beliefs,” said Marcus Thomas, a former assistant director of the FBI’s operational technology division.

Like many young adults, the 27-year-old Malik and her 28-year-old husband, Syed Farook, seemed comfortable with social media. A U.S. intelligence official said Farook had been in contact with known Islamic extremists online. But there is no sign anyone from ISIS communicated with Malik or provided any guidance for the attack on a San Bernardino social service center, which left 14 people dead and 21 wounded.

YouTube, Twitter and other online services use automated software to help detect posts that violate their terms of service, including those that depict or encourage violence. They also encourage users to report such material, so it can be reviewed and removed.

Facebook declined to comment Friday, but the page containing Malik's statements was taken down. She and Farook died hours after the attack in a gun battle with police.

The social network has done “a fairly good job of making sure that users understand” that posts or videos glorifying violence will be taken down, said Stephen Balkam, head of the nonprofit Family Online Safety Institute, which works with Facebook and other sites to promote safe practices for children.

Still, he cautioned: “All the policies in the world won’t help” unless companies also devote staff and resources to enforcing them. Even then, he said, it’s not always easy to determine whether taking something down is the right thing to do.

Two years ago, Balkam publicly criticized Facebook when the giant social network reversed its own decision to take down a graphic video of a masked man beheading a woman. In that case, Facebook said it decided to allow the video because users were sharing it as a way of condemning the violence attributed to Mexican drug gangs. But the company eventually concluded the post was too offensive and removed it again.

Another problem: Violent posts can resurface even after they are taken down. When a fired TV reporter with a grudge killed two former co-workers in Virginia over the summer, he videotaped his own actions and then uploaded the clip to Facebook. The company took it down, but not before someone else had copied it and re-posted it on other sites, North said.

Facebook explicitly bans content shared by "dangerous organizations" engaged in terrorist activity or organized crime. But even that requires a judgment call, because not everyone around the world defines terrorism in the same way, said David Greene, civil liberties director for the Electronic Frontier Foundation, a digital rights group.

“Most of these areas are more gray than black or white, and that can put these companies in a very difficult position,” Greene said.

Lawmakers in the U.S. Senate recently considered a bill that would require social media companies to report any "terrorist activity" they found on their sites to government authorities. Opponents questioned whether private companies were qualified to decide what constitutes terrorist activity. Tech representatives also warned that the bill would result in excessive reports to law enforcement and an overload of unhelpful data. The provision was later dropped.

Given the pervasiveness of social media, it’s perhaps no surprise that some criminals have posted evidence of their own acts. Authorities say teenagers in Illinois, Michigan and California have posted clips of themselves committing rape and assault—apparently to brag to their friends. Law enforcement officials say Florida resident Derek Medina posted a photo of his wife’s body on Facebook with a note accusing her of abusing him. He was convicted of second-degree murder this year.

These problems are inherent to any social network, said Brian Blau, a tech analyst with Gartner. "They are in the business of connecting people and, unfortunately, there are a lot of terrible people in the world."

And with the advent of live-streaming apps like Meerkat and Twitter's Periscope service, safety advocates such as Balkam worry that someone will use them to broadcast violence as it occurs. Facebook is also testing a similar service, which lets anyone broadcast live smartphone video to the world.

That will up the ante for social media companies, which will need to expand their systems for users to report violent content as it’s streaming, as well as their ability to respond.

“We’re talking in real time, stuff that you broadcast will have to be reported and taken down in a matter of seconds or minutes,” Balkam said.