If you were seeking online therapy from 2017 to 2021—and a lot of people were—chances are good that you found your way to BetterHelp, which today describes itself as the world’s largest online-therapy purveyor, with more than 2 million users. Once you were there, after a few clicks, you would have completed a form—an intake questionnaire, not unlike the paper one you’d fill out at any therapist’s office: Are you new to therapy? Are you taking any medications? Having problems with intimacy? Experiencing overwhelming sadness? Thinking of hurting yourself? BetterHelp would have asked you if you were religious, if you were LGBTQ, if you were a teenager. These questions were just meant to match you with the best counselor for your needs, small text would have assured you. Your information would remain private.
Except BetterHelp isn’t exactly a therapist’s office, and your information may not have been completely private. In fact, according to a complaint brought by federal regulators, for years, BetterHelp was sharing user data—including email addresses, IP addresses, and questionnaire answers—with third parties, including Facebook and Snapchat, for the purposes of targeting ads for its services. It was also, according to the Federal Trade Commission, poorly regulating what those third parties did with users’ data once they got them. In July, the company finalized a settlement with the FTC and agreed to refund $7.8 million to consumers whose privacy, regulators claimed, had been compromised. (In a statement, BetterHelp admitted no wrongdoing and described the alleged sharing of user information as an “industry-standard practice.”)
We leave digital traces about our health everywhere we go: by completing forms like BetterHelp’s. By requesting a prescription refill online. By clicking on a link. By asking a search engine about dosages or directions to a clinic or pain in chest dying???? By shopping, online or off. By participating in consumer genetic testing. By stepping on a smart scale or using a smart thermometer. By joining a Facebook group or a Discord server for people with a certain medical condition. By using internet-connected exercise equipment. By using an app or a service to count your steps or track your menstrual cycle or log your workouts. Even demographic and financial data unrelated to health can be aggregated and analyzed to reveal or infer sensitive information about people’s physical or mental-health conditions.
All of this information is valuable to advertisers and to the tech companies that sell ad space and targeting to them. It’s valuable precisely because it’s intimate: More than perhaps anything else, our health guides our behavior. And the more these companies know, the more easily they can influence us. Over the past year or so, reporting has found evidence of a Meta tracking tool collecting patient information from hospital websites, and apps from Drugs.com and WebMD sharing search terms such as herpes and depression, plus identifying information about users, with advertisers. (Meta has denied receiving and using data from the tool, and Drugs.com has said that it was not sharing data that qualified as “sensitive personal information.”) In 2021, the FTC settled with the period and ovulation app Flo, which has reported having more than 100 million users, after alleging that it had disclosed information about users’ reproductive health to third-party marketing and analytics services, even though its privacy policies explicitly said that it wouldn’t do so. (Flo, like BetterHelp, said that its agreement with the FTC wasn’t an admission of wrongdoing and that it didn’t share users’ names, addresses, or birthdays.)
Of course, not all of our health information ends up in the hands of those looking to exploit it. But when it does, the stakes are high. If an advertiser or a social-media algorithm infers that people have specific medical conditions or disabilities and subsequently excludes them from receiving information on housing, employment, or other important resources, this limits people’s life opportunities. If our intimate information gets into the wrong hands, we are at increased risk of fraud or identity theft: People might use our data to open lines of credit, or to impersonate us to get medical services and obtain drugs illegally, which can lead not just to a damaged credit rating, but also to canceled insurance policies and denial of care. Our sensitive personal information could even be made public, leading to harassment and discrimination.
Many people believe that their health information is private under the federal Health Insurance Portability and Accountability Act, which protects medical records and other personal health information. That’s not quite true. HIPAA only protects information collected by “covered entities” and their “business associates”: Health-insurance companies, doctors, hospitals, and some companies that do business with them are limited in how they collect, use, and share information. A whole host of companies that handle our health information—including social-media companies, advertisers, and the majority of health tools marketed directly to consumers—aren’t covered at all.
“When somebody downloads an app on their phone and starts inputting health data in it, or data that might be health indicative, there are definitely no protections for that data other than what the app has promised,” Deven McGraw, a former deputy director of health-information privacy in the Office for Civil Rights at the Department of Health and Human Services, told me. (McGraw currently works as the lead for data stewardship and data sharing at the genetic-testing company Invitae.) And even then, consumers have no way of knowing if an app is following its stated policies. (In the case of BetterHelp, the FTC complaint points out that from September 2013 to December 2020, the company displayed seals saying HIPAA on its website—despite the fact that “no government agency or other third party reviewed [its] information practices for compliance with HIPAA, let alone determined that the practices met the requirements of HIPAA.”)
Companies that sell ads are often quick to point out that information is aggregated: Tech companies use our data to target swaths of people based on demographics and behavior, rather than individuals. But those categories can be quite narrow: Ashkenazi Jewish women of childbearing age, say, or men living in a specific zip code, or people whose online activity may have signaled interest in a specific disease, according to recent reporting. Those groups can then be served hyper-targeted pharmaceutical ads at best, and unscientific “cures” and medical disinformation at worst. They can also be discriminated against: Last year, the Department of Justice settled with Meta over allegations that the latter had violated the Fair Housing Act in part by allowing advertisers to not show housing ads to users who Facebook’s data-collection machine had inferred were interested in topics including “service animal” and “accessibility.”
Recent settlements have demonstrated an increased interest on the part of the FTC in regulating health privacy. But those and most of its other actions are carried out via a consent order, a settlement approved by the commissioners whereby the two parties resolve a dispute without an admission of wrongdoing (as happened with both Flo and BetterHelp). If a company appears to have violated the terms of a consent order, a federal court can then investigate. But the agency has limited enforcement resources. In 2022, a coalition of privacy and consumer advocates wrote a letter to the chairs and ranking members of the House and Senate appropriations committees, urging them to increase funding for the FTC. The commission requested $490 million for fiscal year 2023, up from the $376.5 million it received in 2022, pointing to stark increases in consumer complaints and reported consumer fraud. It ultimately received $430 million.
For its part, the FTC has created an interactive tool to help app creators comply with the law as they build and market their products. And HHS’s Office for Civil Rights has provided guidance on the use of online tracking technologies by HIPAA-covered entities and business associates. This may head off privacy issues before apps cause harm.
The nonprofit Center for Democracy & Technology has also put together its own proposed consumer-privacy framework in response to the fact that “extraordinary amounts of information reflecting mental and physical well-being are created and held by entities that are not bound by HIPAA obligations.” The framework emphasizes appropriate limits on the collection, disclosure, and use of health data as well as information that can be used to make inferences about a person’s physical or mental health. It moves the burden off consumers, patients, and users—who, it notes, may already be burdened with their health condition—and places it on the entities collecting, sharing, and using the information. It also limits data use to purposes that people anticipate and want, not ones they don’t know about or aren’t comfortable with.
But that framework is, for the time being, just a suggestion. In the absence of comprehensive federal data-privacy legislation that accounts for all the new technologies that now have access to our health information, our most intimate data are governed by a ragged patchwork of laws and regulations that are no match for the enormous companies that benefit from having access to those data—or for the very real needs that drive patients to use these tools in the first place. Patients enter their symptoms into search engines or fill out online questionnaires or download apps not because they don’t care, or aren’t thinking, about their privacy. They do these things because they want help, and the internet is the easiest or fastest or cheapest or most natural place to go for it. Tech-enabled health products provide an undeniable service, especially in a country plagued by health disparities. They’re unlikely to get less popular. It’s time the laws designed to protect our health information caught up.