Twitter profiles can be screened for mental illness: Bad news in the wrong hands


In a clinical setting, a machine-based mental health screening tool would be invaluable. Photo: Getty

US researchers have used artificial intelligence to develop a formula that could assess Twitter users for mental illnesses.

However, where the scientists see a positive opportunity in using social media to screen for depression and anxiety in a closed, clinical setting, critics see a potential new avenue for abuse and exploitation by employers, divorcing spouses, insurance companies and money lenders.

Algorithms deconstructing profile pictures

According to a statement from the University of Pennsylvania, researchers there found that Twitter users with depression and anxiety were more likely to post pictures with lower aesthetic values and less vivid colours, particularly images in greyscale.

The researchers used algorithms to extract features such as colours, facial expressions, and different aesthetic measures (such as depth of field, symmetry, and lighting) from images posted by more than 4000 Twitter users who agreed to be a part of the study.
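The statement does not specify which colour measures were used, but a standard way to quantify how "vivid" an image is — and to flag near-greyscale pictures — is the Hasler–Süsstrunk colourfulness metric. The sketch below is an illustration of that kind of feature extractor, not the study's actual code; the function name and sample images are invented:

```python
import numpy as np

def colourfulness(img):
    """Hasler & Suesstrunk (2003) colourfulness metric for an RGB image
    given as an H x W x 3 float array with values in [0, 255]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    rg = r - g                    # red-green opponent channel
    yb = 0.5 * (r + g) - b        # yellow-blue opponent channel
    # Combine the spread and the magnitude of the opponent channels.
    return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

# A pure greyscale image (r == g == b everywhere) scores zero;
# a saturated solid-red image scores much higher.
grey = np.full((32, 32, 3), 128.0)
vivid = np.zeros((32, 32, 3))
vivid[..., 0] = 255.0
print(colourfulness(grey), colourfulness(vivid))
```

Scores like this, computed per image, give each user a numeric colour profile that can then be compared against their mental-health scores.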

To estimate depression and anxiety scores quickly, the researchers analysed each person’s most recent 3200 tweets; 887 of the users also completed a traditional survey to provide validated scores. The image features were then correlated with users’ depression and anxiety scores, and several significant relationships emerged.
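The correlation step described above can be sketched with synthetic data. The numbers below are invented to mimic the reported direction of the relationship (duller images, higher depression scores) and are not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200

# Hypothetical survey-based depression scores for n users.
depression = rng.normal(50, 10, n)

# Hypothetical per-user image feature (e.g. mean colourfulness), simulated
# so that higher depression scores go with duller images, plus noise.
feature = 100 - 0.8 * depression + rng.normal(0, 5, n)

# Pearson correlation between the image feature and the survey scores.
r = np.corrcoef(feature, depression)[0, 1]
print(f"Pearson r = {r:.2f}")  # expect a clearly negative correlation
```

A strongly negative r here is purely an artefact of how the toy data were generated; in the real study, the strength and significance of each feature's correlation is what determined whether a relationship was reported.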

Your tweets could lead to the detection of a mental illness. Photo: Getty/TND

Useful for monitoring emotional ups and downs

They found that “users tended to suppress positive emotions rather than outwardly display more negative emotions,” such as keeping a straight face instead of outright frowning, in their profile pictures.

Depressed users often posted photos only of their own faces with no family, friends, or other people appearing in them.

“This tool is far from perfect to be used as a diagnostic tool,” the study’s lead author Dr Sharath Guntuku, a research scientist with Penn Medicine’s Center for Digital Health, said.

“However, an automated machine learning tool could be a low-cost method for clinicians, with permission from their patients, to monitor their accounts and potentially detect elevated depression or anxiety levels,” Dr Guntuku said.

“The clinicians could then refer patients who were flagged by the tool for more formal screening methods.”

Could be extended to Instagram and texting

The study’s senior author, Lyle Ungar, PhD, a professor of Genomics and Computational Biology and Psychology, said: “Something like this could be applied to Instagram and text messaging, too. We hope this may give some insights into the different facets of depression. And we’re also looking at a variety of other conditions, from loneliness to ADHD.”

Kimberlee Weatherall is a Professor of Law at the University of Sydney Law School, and a member of the Australian Computer Society’s Technical Advisory Board on Artificial Intelligence Ethics.

In an email, Professor Weatherall detailed some of the trouble that could be caused if machine-based mental health screening fell into the wrong hands, including:

  • Educational institutions, when considering applications or offering scholarships;
  • Banks, when deciding whether to offer a loan and how to price it;
  • Employers, in the context of performance reviews or assessing promotion opportunities.

On the other hand, she suggested employers could use “information of this kind” to ensure appropriate support for workers.

Firms seeking to hire new employees could use it to filter applicants, or even to decide “who will see the advertisement of the position”.

Insurance companies, betting sites

Life and health insurance companies could use the information when considering how to price or whether to offer coverage.

“Or what about health insurance companies or life insurance companies seeking to argue that you have a health condition you haven’t disclosed to them – and possibly, one you don’t even know about?” she said.

Retail companies might target depressed individuals with tailored offers.

Gambling companies could target potential addicts.

Internet trolls could taunt people online for amusement.

Ex-partners could use it as evidence in a custody fight.

“The dystopias aren’t hard to imagine. What to do about them is harder,” Professor Weatherall said.

“How information like this is used, and where we want to draw lines as a society … are all becoming much more urgent to resolve.”

The Penn research will be presented at the International AAAI Conference on Web and Social Media on June 11 to 14 in Munich.

  • Lifeline 13 11 14, beyondblue 1300 224 636
