Sixteen years have passed since Facebook launched, and the platform now has 2.7 billion monthly active users, making it the largest social network in the world. Over the years, Facebook users' information has been used for advertising by companies and even US presidential campaigns without their permission, and in the wake of numerous scandals even the platform's young, wealthy, and ambitious founder, Mark Zuckerberg, has been summoned to answer to the US Congress.
If series like Mr. Robot have taught us one lesson (besides the fact that our plans rarely go the way we want, because so much happens behind the scenes), it is how extremely vulnerable we are on social media: the most detailed information about our homes and jobs, our personal relationships, our photos, and our messages is easily accessible, and it can just as easily be misused for someone else's benefit.
But from time to time, people put this wealth of stored information to positive and useful ends. One example is a group of researchers who recently announced that, using Facebook data and an analysis of messages that volunteer users had sent on the platform up to 18 months before receiving a diagnosis from a specialist, they were able to predict those users' mental illnesses.
How an artificial intelligence algorithm works to diagnose mental illness
For this study, 223 volunteers allowed the research team to access their personal messages on Facebook. By extracting specific features from those messages, as well as from the photos the volunteers had posted, the team's artificial intelligence algorithms were able to predict whether each individual had a mood disorder (such as bipolar disorder or depression), a schizophrenia spectrum disorder, or no mental illness at all.
According to the results of this study, the use of profanity and obscene words in messages was a marker of mental illness in general, while perceptual words (such as see, feel, and hear) and words related to negative emotions were markers of schizophrenia. In analyzing the images, the team found that colors in the blue spectrum were associated with mood disorders.
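The study does not publish its feature pipeline, but word-category features like the ones described above are commonly computed as simple per-message rates. The sketch below is a minimal illustration; the category word lists are invented for the example and are not the study's actual lexicons.

```python
import re

# Illustrative word categories; the study's real lexicons are not public.
CATEGORIES = {
    "swear": {"damn", "hell", "crap"},
    "perceptual": {"see", "feel", "hear", "seeing", "feeling", "hearing"},
    "negative_emotion": {"sad", "afraid", "angry", "hopeless"},
}

def category_rates(message: str) -> dict:
    """Return each category's share of the message's tokens."""
    tokens = re.findall(r"[a-z']+", message.lower())
    total = len(tokens) or 1  # avoid division by zero for empty messages
    return {
        name: sum(t in words for t in tokens) / total
        for name, words in CATEGORIES.items()
    }

rates = category_rates("I feel hopeless and afraid when I hear that")
```

Feature vectors like `rates` would then be fed to a classifier; the point is only that the "signal" the researchers describe is, at its core, word-usage statistics.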
To evaluate the algorithm's success, the researchers used a common machine-learning metric that captures the trade-off between false positives and false negatives. A false positive is an error in which the test indicates the presence of a condition that does not actually exist; a false negative is when the test indicates the absence of a condition that actually exists.
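As a concrete illustration of these two error types, here is a minimal sketch that counts false positives and false negatives for a toy set of labels and predictions (both invented for the example):

```python
# Toy ground truth (1 = has the condition) and toy model predictions.
y_true = [1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 1, 0]

# False positive: predicted 1 when the truth is 0.
# False negative: predicted 0 when the truth is 1.
false_positives = sum(p == 1 and t == 0 for p, t in zip(y_pred, y_true))
false_negatives = sum(p == 0 and t == 1 for p, t in zip(y_pred, y_true))
```

Lowering one error type usually raises the other, which is exactly the trade-off the researchers' metric summarizes.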
The use of vulgar words in messages was a marker of mental illness
The more people the algorithm classifies as positive (for example, as falling within the schizophrenia spectrum), the less likely it is to overlook people who actually have schizophrenia (a low false-negative rate); but it also mistakenly places some healthy people in the schizophrenia category (a high false-positive rate). A flawless algorithm produces neither false positives nor false negatives, and such an algorithm is assigned a score of one. An algorithm that guesses at random gets a score of 0.5.
The algorithm the research team used to analyze the Facebook data scored between 0.65 and 0.77, depending on which prediction it was asked to make. Even when the team limited the analysis to messages sent during the year before the official diagnosis, the algorithm performed significantly better than chance.
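The scoring scheme described here (one for a perfect classifier, 0.5 for random guessing) matches the area under the ROC curve (AUC), which can be computed as the probability that a randomly chosen positive example receives a higher score than a randomly chosen negative one. A minimal sketch with invented scores:

```python
def auc(y_true, scores):
    """AUC via the rank statistic: the probability that a random
    positive example scores higher than a random negative one
    (ties count as half a win)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A scorer that perfectly separates the classes gets 1.0;
# one that gives everyone the same score behaves like chance (0.5).
perfect = auc([1, 1, 0, 0], [0.9, 0.8, 0.2, 0.1])
chance = auc([1, 1, 0, 0], [0.5, 0.5, 0.5, 0.5])
```

On this scale, the study's 0.65 to 0.77 sits well above chance but well below a perfect separator, which is why the authors frame it as a screening aid rather than a diagnostic tool.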
Andrew Schwartz, an assistant professor of computer science at Stony Brook University in New York who was not involved in the study, compared these scores with those of the Patient Health Questionnaire (PHQ-9), a standard nine-item survey used to screen for depression. The results of this study suggest that it may be possible to use Facebook data to screen for mental illness, possibly well before a formal diagnosis.
Michael Birnbaum, an assistant professor at the Feinstein Institute for Medical Research in New York and the project's lead, believes this kind of AI tool could make a big difference in the treatment of mental illness. In his words:
We now know that cancer has many different stages of development. If a cancer is caught early, the course of treatment is very different from when it has metastasized and spread through the body. In psychiatry, it is common practice to start treatment only once the disease has, so to speak, fully metastasized. But it is now possible to catch the disease in its early stages.
Birnbaum is not the first researcher to use social media data to predict mental disorders. Others before him have used Facebook statuses, tweets, and Reddit posts to try to diagnose a range of mental illnesses, from depression to attention deficit hyperactivity disorder. But the work of Birnbaum and his team stands out because it worked directly with people with a documented history of mental disorders.
Other researchers were generally unable to study people whose illness had been formally diagnosed; they had to rely on participants' own accounts, ask them to self-diagnose, or administer questionnaires such as the PHQ-9. In contrast, everyone in the Birnbaum study had received a diagnosis from a psychiatrist, and because the researchers had access to the exact dates of those diagnoses, they could focus their predictions on messages each person had sent before learning of their illness.
Sharath Guntuku, an assistant professor of computer science at the University of Pennsylvania who was not involved in the study, warns that even if these algorithms achieve significant results, they still cannot replace a specialist in diagnosing mental disorders. "I do not think there will be a time, at least in my lifetime, when social media data alone is enough to diagnose illness," he said. "That is impossible."
But an algorithm like the one Birnbaum and his team designed could still play an important role in mental health care. "What we're aiming for is to use this additional data to identify people at risk," he says. "We want to see whether these people need more care or contact with a specialist."
Social media data may provide a more accurate picture of the patient’s mood
Schwartz points out that diagnosing mental illness is not an exact science, but it can be improved by adding more data. "Mental health cannot be assessed with just one tool," he said.
Because social media provides a continuous record of a person's thoughts and actions over a significant period of time, it can effectively complement the roughly one-hour clinical interviews typically used to diagnose illness. According to Schwartz, in such interviews "it is still up to the patient to remember everything about themselves, and the psychologist must work out when that account is shaped by desirability bias"; that is, the psychologist must recognize when the patient is saying something because they think it is what the psychologist wants to hear. Data from social media may provide a more accurate picture of a patient's mood.
Mental Illness Risk Alert Plugin
Munmun De Choudhury, a professor of interactive computing at the Georgia Institute of Technology who has previously worked with Birnbaum but was not involved in the present study, envisions a social media plugin that could alert users to the risk of mental illness. Of course, such a plugin immediately raises privacy concerns: if information about a person's mental state is disclosed, it could be misused by insurance companies or employers, or force the person to reveal a mental illness before they are ready to.
For such a plugin to be viable, its creators would have to be completely transparent about how user information is accessed and protected. But if such an algorithm can detect the symptoms of mental illness a year and a half early, it could make a huge difference in a person's life.
"If we can catch these symptoms at an early stage, we can respond with other mechanisms that do not necessarily require a visit to the doctor," says De Choudhury.
The first result of a Google search for suicide-related terms is a suicide prevention hotline number
According to Guntuku, "Facebook and Google are already doing this in some form." If a user searches Google for suicide-related terms, the National Suicide Prevention Lifeline's phone number is displayed above any other links. Facebook uses artificial intelligence to flag posts that suggest a risk of suicide and sends them to human reviewers. If the reviewers confirm that the risk is real, Facebook can send the user suicide-prevention resources and information, or even notify the police.
The difference here is that suicide is a clear and imminent danger, while a diagnosis of a mental disorder often does not carry the same sense of urgency for the user. Users are far more willing to share personal information to prevent a suicide than to receive a schizophrenia diagnosis slightly earlier.
Integration of digital data and mental health in the future
Birnbaum, however, sees the outcome of this research in more modest but still very meaningful terms. A psychiatrist himself, he believes that social media data can not only sharpen the diagnostic process toward a more accurate assessment, but also help monitor patients undergoing long-term treatment.
Thoughts, feelings, actions: these are all dynamic and constantly changing. Unfortunately, in psychiatry we can access this information at best once a month. This kind of data allows us to build a much more comprehensive picture of people's lives.
Researchers still have a long way to go to refine such algorithms and to find an ethical way to deploy them. But Birnbaum hopes that within the next five to ten years, information gleaned from social media will become a routine part of psychiatric diagnosis.
"The day will come when digital data and mental health care truly merge," says Birnbaum. "It would be like taking an X-ray of people's minds, or like a blood test that helps inform diagnosis and the recommended intervention."
What do you think about using artificial intelligence and social media data for the early detection of mental illness? Is such a method really effective, or do the privacy issues render it impractical?