Facebook has developed artificial intelligence software aimed at preventing suicide, which it calls “proactive detection.” It works by identifying patterns in posts and live streams that express suicidal thoughts, though Facebook admits the software still has much room to grow. Signals from friends’ comments, such as “Are you ok?” and “Can I help?”, are also factored into the algorithm.
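Facebook has not published the details of its model, but the description above suggests a text classifier that scores a post together with signals from friends’ comments. A minimal sketch of that idea, using scikit-learn and entirely hypothetical training phrases (not Facebook’s actual data or method), might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: post text plus concatenated friend comments,
# labeled 1 if the post was escalated for review, 0 otherwise.
examples = [
    ("I can't do this anymore", "are you ok? can I help?", 1),
    ("Feeling really hopeless tonight", "please call me", 1),
    ("Great hike this weekend", "looks fun!", 0),
    ("New job starts Monday", "congrats!", 0),
]

# Combine each post and its comments into a single document.
texts = [f"{post} {comments}" for post, comments, _ in examples]
labels = [label for _, _, label in examples]

# TF-IDF features plus logistic regression: a simple stand-in for whatever
# model Facebook actually uses in production.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(texts, labels)

# Score a new post; a high probability would prompt human review,
# not an automatic call to first responders.
new_post = "I don't see the point anymore are you ok?"
print(model.predict_proba([new_post])[0][1])
```

In practice, a system like this would only surface candidates for trained human reviewers rather than acting on its own.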
Proactive detection has been in use over the past several months as Facebook has rolled out tools to resolve conflicts online, help friends who express suicidal thoughts, and connect people with crisis hotlines. If the system flags a post as concerning, Facebook may contact first responders who have specific training in suicide and self-harm to handle the situation.
Facebook claims that over the past month it worked with first responders on more than 100 “wellness checks” prompted by reports from proactive detection. In some cases, first responders arrived while the person at risk was still broadcasting. That is a remarkable response time, considering how many people have taken their lives on Facebook Live.
Facebook Vice President Guy Rosen had this to say: “Facebook is a place where friends and family are already connected and we are able to help connect a person in distress with people who can support them. It’s part of our ongoing effort to help build a safe community on and off Facebook.”
This AI technology will now be rolled out worldwide, except in the EU, where laws protect individuals’ data from being used for profiling. Obviously, it’s great that lives will be saved, but this decision does raise questions about the software’s all-encompassing nature and its potential for abuse. Still, nobody wants to be the person who comes out against suicide prevention, so we have to take Facebook’s word that it won’t abuse its power.