




The incidence of suicide is rising in the United States among people aged 15–34. CREDIT: Justin Sullivan/Getty

A growing number of researchers and tech companies are beginning to mine social media for warning signs of suicidal thoughts. Their efforts build on emerging evidence that the language patterns of a person's social-media posts, as well as the subconscious ways they interact with their smartphones, can hint at psychiatric trouble.

Businesses are just starting to test programs that automatically detect such signals. Mindstrong, an app developer in Palo Alto, California, for instance, is developing and testing machine-learning algorithms that correlate the language people use and their behaviour — such as scrolling speed on smartphones — with symptoms of depression and other mental disorders. Next year, the company will expand its research to focus on behaviour associated with suicidal thoughts, which could eventually help health-care providers more quickly detect patients’ intentions to harm themselves. And in late November, Facebook announced that it was rolling out its own automated suicide-prevention tools in much of the world. Tech giants Apple and Google are pursuing similar ventures.

Some mental-health professionals hope such tools could help reduce the number of people who attempt suicide, which is rising in the United States, where it is the second leading cause of death among people between the ages of 15 and 34. And young people are more likely to reach out for help on social media than to see a therapist or call a crisis hotline, according to social-work researcher Scottye Cash at Ohio State University in Columbus.

But Cash and other experts are concerned about privacy and the companies’ limited transparency, especially given the lack of evidence that digital interventions work. Facebook users deserve to know how their information is being used and to what effect, says Megan Moreno, a paediatrician at the University of Wisconsin in Madison. “How well are these [tools] working? Are they causing harm, are they saving lives?”

Those at risk of suicide are difficult to identify, at least in the short term, and so suicide is difficult to prevent, says Matthew Nock, a psychologist at Harvard University in Cambridge, Massachusetts. According to Nock, most people who attempt suicide deny considering it when talking to mental-health professionals [1]. Social media, however, provides a window into their emotions in real time. “We can bring the lab to the person,” Nock says.

Machine-learning algorithms can help researchers and counsellors discern when an emotional social-media post is a joke, an expression of normal angst, or a real suicide threat by identifying patterns a human might miss, says Bob Filbin, chief data scientist at the Crisis Text Line in New York City.

By analysing 54 million messages on Crisis Text Line, which enables people to talk to counsellors by text message, Filbin and his colleagues have seen that people who are contemplating ending their lives rarely use the word ‘suicide’; words like ‘ibuprofen’ or ‘bridge’ are better indicators of suicidal thoughts. With the help of such insights, Filbin claims that Crisis Text Line’s counsellors can usually determine within three messages whether they should alert emergency responders to an imminent threat.
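The idea Filbin describes — that specific words such as ‘ibuprofen’ or ‘bridge’ carry more signal than the word ‘suicide’ itself, and that risk accumulates across the first few messages — can be sketched as a toy keyword-weighting scorer. Everything here (the word list, the weights, the threshold, and the function names) is invented for illustration; Crisis Text Line’s actual models are proprietary and far more sophisticated.

```python
# Toy sketch of message risk scoring. The weights below are hypothetical;
# a real system would learn them from labelled data with a trained classifier.

RISK_WEIGHTS = {
    "ibuprofen": 0.9,  # a specific means is a stronger indicator, per Filbin
    "bridge": 0.8,
    "hopeless": 0.5,
    "suicide": 0.3,    # counter-intuitively weaker: rarely used by those at risk
}

def risk_score(message: str) -> float:
    """Sum the weights of any risk-indicator words found in one message."""
    return sum(RISK_WEIGHTS.get(word, 0.0) for word in message.lower().split())

def should_escalate(messages: list[str], threshold: float = 1.0) -> bool:
    """Escalate if cumulative risk across the first few messages crosses
    a threshold -- mirroring the claim that counsellors can usually decide
    within three messages."""
    return sum(risk_score(m) for m in messages) >= threshold
```

A bag-of-words scorer like this illustrates why pattern-matching at scale can catch signals a tired human reader might miss, but it also shows where false positives come from: the word ‘bridge’ scores the same in any context.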

Mindstrong’s president, Thomas Insel, says that collecting “passive” data from a person’s devices may be more informative than having them answer a questionnaire, for instance. Mindstrong’s app would be installed on a patient’s phone by a mental-health-care provider and would run in the background, collecting data. After it develops a profile of an individual’s typical digital behaviour, the app would be able to detect worrying changes, Insel says. The company has partnered with health-care companies to help users access medical care when the app spots trouble.
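The profile-then-detect approach Insel describes resembles simple anomaly detection: learn a baseline for some behavioural metric, such as scrolling speed, then flag readings that deviate sharply from it. The sketch below is a generic illustration under that assumption, not Mindstrong’s algorithm; the metric, cutoff, and function names are all hypothetical.

```python
# Generic sketch of baseline-and-deviation anomaly detection on a single
# behavioural metric (e.g. average daily scrolling speed).
import statistics

def baseline(samples: list[float]) -> tuple[float, float]:
    """Build a simple behavioural profile: the mean and standard
    deviation of a user's typical daily readings."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_worrying(value: float, mean: float, sd: float, z_cutoff: float = 2.0) -> bool:
    """Flag a new reading that deviates from the profile by more than
    z_cutoff standard deviations."""
    if sd == 0:
        return False  # no variation in the profile; cannot judge deviation
    return abs(value - mean) / sd > z_cutoff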

“I'm not too confident that anything requiring someone to open an app will be that useful in a moment of crisis,” Insel says.

The larger question may be how — and when — to intervene. False positives are likely to remain high, Nock says, and so doctors or companies using technology to detect suicide risk will have to decide what level of certainty is required before sending help.

There is also little evidence that resources such as suicide hotlines save lives, and interventions can backfire by making people feel more vulnerable, Moreno says. For instance, she and others have found that people sometimes block friends who report social-media posts containing possible suicide threats, making friends less likely to report them in the future [2].

Facebook’s new suicide-prevention programme relies heavily on user reports, as well as on proprietary algorithms that scan posts for ‘red flags’. The system then contacts the user or alerts a human moderator, who decides whether to provide links to resources such as Crisis Text Line or to notify emergency responders.

But the company would not provide details on how the algorithms or moderators do their work, nor would it specify whether it will follow up with users to validate the algorithms or evaluate the efficacy of its interventions. In a statement, a Facebook spokesperson said that the tools were developed “in collaboration with experts” and that users cannot opt out of the service.

The company's tight-lipped approach has left some researchers concerned. “They have a responsibility to base all their decisions on evidence,” Cash says. Yet the company is providing little information that outside experts can use to judge its programme.

Nevertheless, Insel is glad that Facebook is trying. “You have to put this in the context of what we do now,” he says, “which is not working.”


Correction 18 December 2017: An earlier version of this story stated incorrectly that in Facebook’s system, moderators may contact other people in a user’s network in response to signs of suicidal thoughts.

1. Busch, K. A. et al. J. Clin. Psychiatry 64, 14–19 (2003).

2. Gritton, J. et al. Am. Indian Alsk. Native Ment. Health Res. 24, 63–87 (2017).


Nature ISSN 1476-4687 (online) ISSN 0028-0836 (print)