Facebook announced Monday that it will use technology to try to prevent suicide.
The social media giant built artificial intelligence that can detect patterns in posts and videos containing suicidal thoughts and feelings, a move the Idaho Suicide Prevention Hotline said is, on the surface, a good thing.
"They have such a big reach. So many people are interacting within that space and they would be really remiss in their responsibility if they were not doing something to address suicide in the social media space," said Idaho Suicide Prevention Hotline Director John Reusser.
While the intentions are good, Reusser said the way they are going about it is cause for some concern.
The artificial intelligence can detect certain comments and posts, such as "Are you OK?" or "Can I help?", and flag them as potential indicators.
"I'm concerned that an artificial intelligence may lack the nuance to distinguish between somebody venting, somebody processing some thoughts that may include the word suicide versus someone who is really having thoughts of suicide," said Reusser.
The flagged posts will be reviewed by Facebook employees, who can choose to send resources to the user or alert local authorities. Reusser said that is good, but he hopes those employees have proper training.
"I firmly believe it should be a real-life human being making those final decisions, making those really clinical decisions around somebody's level of risk," said Reusser.
In the past, Facebook relied on users to alert it to troubling content, but the company hopes the new feature will speed up and prioritize how it handles incidents.