Hiawatha Bray | Tech Lab

Crime time on Facebook Live?


Perhaps this time, it will be different.

In its brief existence, Facebook Live has hosted millions of amusing and innocent live videos, but also dozens of awful crimes, including assaults and murders broadcast as they happened. As of early March, at least 50 crimes from around the world had been broadcast this way, according to a report in the Wall Street Journal. Facebook acknowledges the problem and vows to do better, but nearly every week has delivered another atrocity.

Now comes the horrific case of the Cleveland man — I refuse to type his name — who on Easter Sunday posted a video of himself murdering a helpless elderly man, followed by a Facebook Live broadcast of his confession. This time the disgust and outrage seem more intense, more heartfelt. Maybe this time, Facebook will do something about it.


But what, exactly? Our options are sadly limited, but we’re not completely helpless. It’ll take a mix of better laws, better technology, and better behavior to clean up this mess.


Am I calling for a law against the publication of violent Internet content? Hardly. I’m a big fan of a federal law that does exactly the opposite. Section 230 of the 1996 Communications Decency Act declares that Internet companies can’t be held liable for the documents or images posted by their users. Internet sites can be ordered to take down illegal material, such as pirated music and movies. But the companies are generally protected from liability, even if their users publish all manner of awful stuff — fraudulent business scams, libelous e-mails, even child pornography. The law can go after the actual bad guys, but not the Internet company that hosted their activities.

So what’s needed is a law that increases penalties for crimes displayed on Internet video. If people are convicted of crimes they broadcast live, or if they post an after-the-fact video of the deed, they would get extra years in prison, just as hate-crime laws add time for a crime motivated by racism. A few high-profile sentences against such perpetrators might inspire second thoughts in other criminals.

Next, Facebook needs better ways to identify and pounce on this kind of hateful online material before it spreads too far. In a statement Monday, Facebook said it’s using artificial intelligence techniques to identify and limit the sharing of violent videos. The company didn’t respond to my request for more details on how its systems work.

Instead, I spoke to David Luan, former computer vision specialist at Bedford-based iRobot Corp., and founder of Dextro, a Seattle company that uses artificial intelligence to recognize images that appear inside live video streams. Dextro was recently acquired by Axon, which makes body cameras for police departments, and has no connection to Facebook. But its software, or something like it, could make it easier to clean up Facebook Live.


Luan said Dextro software can identify the contents of live video streams in real time. For example, it knows instantly if it’s being fed images of a football game, a golf outing, or a boxing match.

It can also recognize if someone in a video is holding a gun. Whenever that happens, “it would give you a warning,” said Luan. “Then you can just send this pool of potentially violent videos to a curation team.” The human curators would make the final call on whether to shut down the video stream.
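In rough outline, the pipeline Luan describes — an automated classifier raises a warning, and human curators make the final call — might look like the sketch below. The label names, confidence threshold, and function signature are all invented for illustration; this is not Dextro’s actual API.

```python
# Hypothetical sketch of the flag-and-review pipeline: a vision model
# scores each stream, and anything that looks like a weapon goes into
# a queue for human curators. Labels and threshold are assumptions.

FLAG_LABELS = {"gun", "knife"}     # labels that trigger review (assumed)
CONFIDENCE_THRESHOLD = 0.8         # assumed cutoff for flagging

def route_stream(stream_id, detections, review_queue):
    """detections: list of (label, confidence) pairs from a vision model."""
    for label, confidence in detections:
        if label in FLAG_LABELS and confidence >= CONFIDENCE_THRESHOLD:
            # Automation only warns; humans decide whether to shut it down.
            review_queue.append((stream_id, label, confidence))
            return True   # flagged for human curation
    return False          # stream keeps playing unreviewed

queue = []
route_stream("live-123", [("football", 0.95), ("gun", 0.91)], queue)
print(queue)  # [('live-123', 'gun', 0.91)]
```

The key design point is the division of labor: software narrows millions of streams down to a small pool of candidates, and people review only that pool.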

Even with an assist from artificial intelligence, monitoring thousands of Facebook Live videos seems like an overwhelming task. But Ben Edelman, a computer security researcher and associate professor at Harvard Business School, believes it is a manageable problem.

“There are ways to do it smarter that could make a real difference,” said Edelman.


For instance, Facebook could ignore videos posted by frequent users who’ve never made trouble, while prioritizing videos from newcomers, or ones with tags or titles suggesting violent content, or ones with a spike in popularity.
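The triage Edelman suggests could be sketched as a simple scoring function: rather than watching every stream, rank them by risk and review the riskiest first. The specific signals and weights below are invented for illustration, not anything Facebook has disclosed.

```python
# Hypothetical risk-triage sketch: score streams on the signals the
# column mentions (new accounts, violent tags, popularity spikes) and
# review the highest-scoring ones first. Weights are assumptions.

def risk_score(account_age_days, clean_history, violent_tags, viewer_spike):
    """Higher score = review sooner."""
    score = 0.0
    if account_age_days < 30:   # newcomers get more scrutiny
        score += 2.0
    if not clean_history:       # prior trouble raises priority
        score += 3.0
    if violent_tags:            # title or tags suggest violence
        score += 4.0
    if viewer_spike:            # sudden surge in popularity
        score += 1.5
    return score

streams = [
    {"id": "a", "score": risk_score(5, True, True, False)},    # new, violent tags
    {"id": "b", "score": risk_score(900, True, False, False)}, # trusted regular
]
streams.sort(key=lambda s: s["score"], reverse=True)
print([s["id"] for s in streams])  # ['a', 'b']
```

A longtime user with a clean history scores near zero and is effectively ignored, which is exactly the economy Edelman is pointing at: the monitoring budget goes where the risk is.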


“Facebook has more money than God,” said Edelman. “Arguments about what can’t be done ring a little bit hollow.”

The rest of us also bear some responsibility. By definition, if you see a crime on Facebook Live, you’re an eyewitness. Act like one. Shoot photos or videos of the screen with your cellphone. Copy the Internet address of the Facebook page that’s running the video. Don’t share the video with others, and don’t warn the perpetrator. Instead, contact Facebook immediately, and then call 911.

Facebook Live has done far more good than harm, and for every toxic video it delivers, there are millions of innocent ones.

But that’s not much comfort to the victims of broadcast crime. For them, the best Facebook and the rest of us can offer is a vow to do better.

Follow Hiawatha Bray on Twitter @GlobeTechLab.