Charities are contributing to the growing mistrust of texting mental health crisis lines — here’s why


Like many sectors of society, mental health care has changed drastically as a result of the pandemic. Forced to adapt to a growing demand for counseling and crisis services, mental health charities have had to rapidly expand their digital services to meet the needs of their users.

Unfortunately, some charities are experiencing growing pains as they transition into an unfamiliar environment that increasingly involves the use of data-driven technologies like machine learning — a type of artificial intelligence.

Recently, two charities faced public backlash when they used machine learning to process data from users who had contacted their mental health support services during a crisis.

When word got out that US-based Crisis Text Line had shared user data with Loris AI – a company specializing in the development of machine learning technologies – critical reactions poured in on social media, denouncing the commercialization of sensitive data as a shocking breach of trust. In response, Crisis Text Line ended its data-sharing relationship with Loris AI and asked the company to delete the data it had received.

A few weeks later, it emerged that Shout, the UK’s biggest crisis text line, had shared similarly anonymized data with researchers at Imperial College London, who used machine learning to analyze patterns in the data. Again, this data came from the deeply personal and sensitive conversations between those in need and the charity’s volunteer advisors.

One of the main reasons for this partnership was to see what could be learned from the anonymized conversations between Shout users and volunteers. To study this, the research team used machine learning techniques to infer personal details about users from the conversation text, including age and non-binary gender.

The information derived from the machine learning algorithms is not sufficient to personally identify individual users. Even so, many users were outraged when they learned how their data was being used. With the social media spotlight focused on it, Shout responded:

We take the privacy of our texters incredibly seriously and work to the highest standards of data security. … We have always been fully transparent that we will use anonymized data and insights from Shout both to improve the service so we can better serve their needs and to improve mental health in the UK.

Undoubtedly, Shout was transparent to a degree – it pointed users to its permissive privacy policy before they accessed the service. But as we all know, such policies are rarely read, and they should not be relied upon as meaningful forms of user consent in times of crisis.

So it’s a shame to see charities like Shout and Crisis Text Line fail to recognize how their actions can contribute to a growing culture of distrust, especially as they provide essential support in a climate where mental illness is on the rise and public services are stretched by underfunding.

Editor’s Note: This article is part of a partnership The Chronicle has formed with The Conversation to expand coverage of philanthropy and nonprofits. These organizations are supported in this work by the Lilly Foundation. This article is republished from The Conversation under a Creative Commons license.
