Big Tech Is Coming For Your Health Data. Here’s How To Protect Your Information.

Opinion: In the hands of a hostile government, what begins as ‘neutral’ medical data can quickly become a weapon.

A new national health data system launched by the Trump administration will dramatically expand corporations’ access to Americans’ medical records.

The system, announced by the White House on July 30, is a digital “ecosystem” that allows patients to opt in to store their health data and make it accessible across different apps and providers. This would allow not only doctors but also apps like Apple Health to directly pull in information from prescriptions, test results, and even fitness data to store in one place.

The system will be maintained and led by the Centers for Medicare and Medicaid Services (CMS). More than 60 companies—ranging from tech giants like Google and Amazon to health-care firms like CVS Health and UnitedHealth Group—have already committed to participate by developing the infrastructure and apps for the initiative or by providing patient data. OpenAI, the artificial intelligence company behind ChatGPT, is also slated to contribute, though it’s unclear how, exactly, each company is participating in the new system.

The administration asserts that this initiative will improve patients’ access to health records and foster innovation.

“We’re tearing down digital walls, returning power to patients, and rebuilding a health system that serves the people,” Health and Human Services Secretary Robert F. Kennedy Jr. said in a July 30 statement.

But the administration’s simultaneous gutting of federal health research funding will undermine those goals. The cuts limit scientists’ ability to conduct large-scale clinical trials, slow the development of new medical treatments, and reduce support for basic public health initiatives like vaccines.

There’s another risk to this tech-forward approach to health, too: data privacy. As an activist and privacy researcher who advocates for consumer-centric privacy policies at the American Civil Liberties Union (ACLU), I see several risks in the Trump administration’s new health data system. Here’s what they are, along with some concrete ways to keep health information secure.

Big Tech monetizes consumer health data

Big Tech has a financial interest in getting a hold of consumer health data because that information fuels the advertising that drives these companies’ business models.

Already, insights into users’ reproductive health, medical conditions, or mental health are being drawn from their browsing habits, location data, and app usage, then used to target people with ads for costly medications or fertility treatments. Some advertisers even peddle unproven wellness products to chronically ill people.

In 2021, users of the period tracking app Flo sued the company after Wall Street Journal reporting revealed that Flo had embedded Facebook software that shared sensitive personal information—like when a user was on their period or marked their intention to become pregnant—with Facebook. The Federal Trade Commission also filed a complaint against Flo, which the company settled in 2021.

In August 2025, a California jury found Meta, Facebook’s parent company, liable for violating state privacy laws because it surreptitiously and intentionally collected people’s sensitive reproductive health information from Flo without their consent.

Similarly, Amazon’s 2023 acquisition of the primary care chain One Medical has raised new risks regarding Big Tech’s access to sensitive personal information.

Amazon One Medical pushes patients to sign an agreement giving Amazon access to their complete patient file. And it’s not clear that information is always kept private. A 2024 wrongful death lawsuit alleged that nine Amazon/One Medical employees unlawfully viewed a deceased patient’s medical records after his passing drew media attention.

A legal gap

The Flo and Amazon examples underscore the dangers of trusting corporations with sensitive health information—especially when their core business models depend on monetizing consumer data.

The White House’s new health-care data program would give tech companies unprecedented access to personal health data without providing meaningful new protections for consumers. And HIPAA—the federal Health Insurance Portability and Accountability Act that protects health information shared with doctors, hospitals, and insurers—doesn’t apply once that same information is collected by third-party apps, tech companies, or other entities who are not considered health-care providers.

That means if you enter sensitive data into a fertility tracker or buy vitamins through Amazon, those records aren’t covered by HIPAA protections, even though they can reveal intimate details about your health.

This legal gap gives technology companies an opening to monetize sensitive medical information without sufficient oversight, whether by using it to target advertising or by feeding it into artificial intelligence systems that develop new health tools.

Companies could analyze health information to target ads, for instance, promoting weight-loss products to someone with obesity-related conditions, exploiting people’s health struggles and reinforcing stigma for monetary gain. Or companies might sell health-related insights to insurers, who could adjust premiums or coverage decisions for those patients based on predicted risks—penalizing the very patients most in need of care.

Widening existing disparities

Beyond the risks to any one individual, the creation of a massive health data-sharing “ecosystem” threatens to deepen systemic harms. For example, data from the program could further normalize and perpetuate discrimination against communities that have historically been the targets of surveillance, such as Black and brown women.

Here’s how that could happen: If tech companies use the data gathered on these communities to train their artificial intelligence systems, these datasets would be skewed by disparities in access to care, quality of treatment, and historical medical bias—because the same communities that have been disproportionately surveilled have also been historically mistreated by the medical system.

As a result, predictive algorithms trained on biased data can misdiagnose, deprioritize, or overlook marginalized patients. This effectively amplifies longstanding inequities.

Users are already seeing targeted ads based on their personal medical information. In 2022, a WIRED report found that third-party pregnancy tracking apps quickly turned a writer’s data into targeted advertising and disinformation. Within minutes of signing up, she got emails from WebMD and Pottery Barn Kids, as well as ads for expensive, discretionary post-birth services like cord blood banking.

Similarly, the Federal Trade Commission fined a telehealth firm called Cerebral more than $7 million in April 2024 for using its patients’ sensitive personal health information for third-party advertising purposes.

The Trump administration’s initiative threatens to accelerate these harms dramatically, scaling up risks that will disproportionately hit the most marginalized.

And consumer health information may not just stay within corporations—it could make its way into the courts.

Big Tech companies have complied with subpoenas and government demands for information about users under criminal or civil investigation, often with little regard for consumer rights or civil liberties. In states where abortion is illegal, law enforcement could use menstrual tracking data, pharmacy records, or digital communications between patients and providers to build criminal cases.

For example, Meta’s Facebook messages were key in prosecuting a Nebraska mother for helping her daughter seek an abortion in 2022.

Even when data remains anonymous, collecting so much information on so many people’s reproductive health and making it broadly available opens the door to abuse.

In the aggregate, that data could be used as evidence to support restrictive political policies, like bans on medication abortion. It could also bolster challenges to contraceptive access by giving lawmakers and interest groups statistical justification to argue that these services are overused, unsafe, or morally objectionable.

LGBTQ+ people could face devastating consequences if sensitive health details, such as their HIV status or history of receiving gender-affirming care, were exposed, too. That information has historically led to discrimination in employment, housing, and even access to care.

The point is: In the hands of a hostile government, what begins as “neutral” medical data can quickly become a weapon.

6 steps to protect your information

Here are some practical steps to protect your health data:

  • Avoid sharing private health info online: Social media direct messages often collect details that can be shared with law enforcement. Don’t say anything on Facebook or TikTok that you wouldn’t want shared widely.
  • Use encrypted communication: If sensitive medical discussions must happen online, they should take place on secure, encrypted platforms. Signal, an open-source platform, is safer than WhatsApp, which is owned by Meta. iMessage is end-to-end encrypted only if both users have iPhones.
  • Exercise your existing data rights: Twenty states have comprehensive consumer privacy laws that let individuals access, delete, or transfer their browsing history, Social Security numbers, and other personal data. In six other states, users have narrower privacy rights, such as opting out of having their data sold to data brokers or advertisers. Consumers living in states with privacy laws can exercise them by submitting data subject access requests.
  • Scrutinize terms of service: Patients should ask how health-care providers and pharmacies may share their data and avoid indiscriminately agreeing to app permissions. Opt out of targeted advertising or sales of data when possible, and read the fine print from any third-party apps or providers.
  • Limit use of third-party apps: Conduct a digital hygiene assessment of your commonly used health apps. This can include reviewing all app permissions on your phone, using services like DeleteMe to remove your information from data brokers, reading through apps’ privacy policies (privacy policy checklists online can help), and seeking alternatives once you understand where your data might go.
  • Consider offline health tracking: When tracking menstrual cycles or taking pregnancy notes, traditional journals or encrypted digital files might provide greater protection.

Individual caution isn’t a sufficient substitute for systemic shields. But until legislators pass comprehensive federal privacy safeguards and restrict the commercial sale of health information, we’re all responsible for keeping our data safe.
