When we book a session with a mental health professional, there is a high degree of trust built into the interaction. We’re paying someone to listen to some of our deepest and darkest secrets, and privacy is expected as a matter of ethics and legality. Whatever you say in that room should stay there, with only rare exceptions.
That’s one of the fundamental principles that underpin the profession. But does this extend to online realms too?
Apps that offer guidance or put users in touch with professionals help democratize access to treatment for depression, addiction, anxiety, and much more. We’re no longer restricted to professionals—and the fees they dictate—in our immediate physical location.
Individuals who are uncomfortable discussing their problems in person might be more willing to turn to counselors over apps. In some ways, online services provide a greater sense of privacy, but the reality can be quite different.
Interest in mental health has exploded this year with the emergence of Covid-19, which has brought a struggling economy, long stretches of time spent indoors, and far fewer chances to see friends and family. And apps offering help have reported increased sales.
But can we trust them with our privacy and data?
Not so confidential: Mental health apps track you
An in-depth analysis of the prominent mental-health app BetterHelp, which had over half a million downloads in 2019 alone, exposed just how murky this industry can be. BetterHelp has an attractive model: For a flat rate of 40 USD per week, users can text, call, or video chat with a licensed counselor. It has run flashy marketing campaigns, signing on NBA players and YouTube influencers as spokespeople who call for destigmatizing mental-health treatment.
BetterHelp claims to encrypt the content of communications between patient and counselor, but it tracks a wealth of other data. Its signup process, for example, asks for users’ gender, age, and sexual orientation, as well as whether they’ve had suicidal thoughts.
The analysis found that the app relayed metadata to “dozens of third parties,” including Facebook, Google, Snapchat, and Pinterest. While the specific contents of patient-counselor communications were not disclosed, Facebook was informed every time a user opened the app, how often they attended counselor sessions, what time of day those sessions took place, their location, and how much time they spent in the app.
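To make “metadata” concrete, here’s a minimal sketch, using invented field names and values rather than any app’s actual payload, of the kind of event record an embedded analytics SDK typically assembles when a user opens an app:

```python
import json
import time

# Hypothetical example of an analytics event a third-party tracking
# SDK might build on app launch. All field names and values are
# invented for illustration; no message content is included, yet the
# record is still revealing.
event = {
    "event_name": "app_opened",
    "timestamp": int(time.time()),      # when the session happened
    "session_length_sec": 1800,         # time spent in the app
    "coarse_location": "US/New_York",   # approximate location
    "sessions_this_week": 3,            # counseling frequency
    "advertising_id": "a1b2c3d4-...",   # ties events to one device
}

# A real SDK would transmit this to the tracker's servers in the
# background; here we simply print what would leave the device.
print(json.dumps(event, indent=2))
```

None of these fields contain message content, but combined with a device’s advertising ID, they’re enough to profile when, where, and how often someone seeks counseling.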
Some of the data was anonymized, but anonymized data sets aren’t really anonymous: they can be linked back to individual profiles and used to keep tabs on user behavior.
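To see why, here’s a minimal sketch of a classic linkage attack, using entirely made-up records: an “anonymized” log keeps no names, but quasi-identifiers like ZIP code, age, and gender survive, and any second dataset pairing those same fields with real identities is enough to undo the anonymization.

```python
# Hypothetical "anonymized" usage log: no names or user IDs,
# but quasi-identifiers remain.
anonymized_usage_log = [
    {"zip": "90210", "age": 34, "gender": "F", "sessions_per_week": 3},
    {"zip": "10001", "age": 52, "gender": "M", "sessions_per_week": 1},
]

# A second, identified dataset (e.g., a public voter roll or a
# leaked profile dump) sharing the same quasi-identifiers.
public_profiles = [
    {"name": "Alice Example", "zip": "90210", "age": 34, "gender": "F"},
    {"name": "Bob Example", "zip": "10001", "age": 52, "gender": "M"},
]

def reidentify(log, profiles):
    """Join the two datasets on the shared quasi-identifiers."""
    index = {(p["zip"], p["age"], p["gender"]): p["name"] for p in profiles}
    matches = []
    for row in log:
        key = (row["zip"], row["age"], row["gender"])
        if key in index:
            matches.append({"name": index[key], **row})
    return matches

for match in reidentify(anonymized_usage_log, public_profiles):
    print(match)
# {'name': 'Alice Example', 'zip': '90210', 'age': 34, 'gender': 'F',
#  'sessions_per_week': 3} ...
```

With just three overlapping fields, every “anonymous” record here is tied back to a name, and the same logic scales to real-world data sets with millions of rows.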
Mental health app usage has soared during Covid-19
Forty percent of American adults reported depression or anxiety as a result of the coronavirus pandemic, with many of them turning to online wellness and mental health tools to cope. The enforcement of social-distancing measures and a general increase in technology use both contributed to this trend.
By some estimates, the top 20 mental health apps saw over 4 million downloads in April 2020 alone. Several app developers reported that their user base had doubled during lockdowns. Meanwhile, it is estimated that only 5% of mental health apps follow a research-based approach rooted in the principles of psychiatry.
What’s more, the FDA does not have any formal processes in place to vet mental health apps for the veracity of their advice or other safeguards. With 8 in 10 mental health apps susceptible to hacking and data theft, it’s clear that more work needs to be done to gain the confidence of end users and protect their right to privacy.
What’s the solution?
Formalized healthcare and psychiatric services are subject to strict regulatory standards. The processing of patient data in hospitals and primary care centers is also governed by regulations such as HIPAA. But while the FDA and other government agencies hold sway over doctors, clinics, and hospitals, these safeguards barely extend to online app marketplaces.
The app industry is perversely incentivized to build in fewer privacy protections. The more data points an app can skim and feed to third-party tracking platforms, the better its targeting and advertising. And free apps often have no way to monetize other than invasive advertising and data tracking.
It’s not just data tracking that’s a problem with mental-health apps. To surge to the top of app stores, developers often make outsized claims about their effectiveness and scientific backing. A 2019 study revealed that only half of the apps claiming to follow scientific methods actually did, while approximately one-third referred to techniques for which no evidence could be found.
App developers aren’t going to be the ones driving more attention to privacy and security issues. Change has to come from the gatekeepers of the platforms these apps are built on, or from wider industry regulators.
Unless tech companies rapidly improve how they enforce user privacy, it stands to reason that the status quo will not change. Healthcare regulation is robust, but it simply lacks influence over online spaces.