Facebook screens posts for suicide risk, and health experts have concerns

CNN/Stylemagazine.com Newswire | 2/12/2019, 8:48 a.m.

By Jacqueline Howard, CNN

(CNN) -- A pair of public health experts has called for Facebook to be more transparent in the way it screens posts for suicide risk and to follow certain ethical guidelines, including informed consent among users.

The social media giant details its suicide prevention efforts online and says it has helped first responders conduct thousands of wellness checks globally, based on reports received through its efforts. The authors of a new opinion paper said Facebook's trial to reduce death by suicide is "innovative" and that the company deserves "commendation for its ambitious goal of using data science to advance public health."

But the question remains: Should Facebook change the way it monitors users for suicide risk?

'People need to be aware that ... they may be experimented on'

Since 2006, Facebook has worked with experts in suicide prevention and safety, according to the company.

In 2011, Facebook partnered with the National Suicide Prevention Lifeline to launch suicide prevention efforts, including enabling users to report suicidal content they may see posted by a friend on Facebook. The person who posted the content would receive an email from Facebook encouraging them to call the National Suicide Prevention Lifeline or chat with a crisis worker.

In 2017, Facebook expanded those suicide prevention efforts to include artificial intelligence that can identify posts, videos and Facebook Live streams containing suicidal thoughts or content. That year, the National Suicide Prevention Lifeline said it was proud to partner with Facebook and that the social media company's innovations allow people to reach out for and access support more easily.

"It's important that community members, whether they're online or offline, don't feel that they are helpless bystanders when dangerous behavior is occurring," John Draper, director of the National Suicide Prevention Lifeline, said in a press release in 2017. "Facebook's approach is unique. Their tools enable their community members to actively care, provide support, and report concerns when necessary."

When AI tools flag potential self-harm, those posts go through the same human analysis as posts reported by Facebook users directly.

The move to use AI was part of an effort to further support at-risk users. The company had faced criticism over its Facebook Live feature, which some users have used to live-stream graphic events, including suicide.

In a blog post, Facebook detailed how its AI looks for patterns in posts and comments that may contain references to suicide or self-harm. According to Facebook, comments such as "Are you OK?" and "Can I help?" can be indicators of suicidal thoughts.

If AI or another Facebook user flags a post, the company reviews it. If the post is determined to need immediate intervention, Facebook may work with first responders, such as police departments, to send help.

Yet the opinion paper, published Monday in the journal Annals of Internal Medicine, claims that Facebook lacks transparency and ethics in its efforts to screen users' posts, identify those who appear at risk for suicide and alert emergency services to that risk.