The Cyber Security News

OpenAI Launches ChatGPT Health with Isolated, Encrypted Health Data Controls

January 8, 2026

Artificial intelligence (AI) company OpenAI on Wednesday announced the launch of ChatGPT Health, a dedicated space that allows users to have conversations with the chatbot about their health.

To that end, the sandboxed experience offers users the optional ability to securely connect medical records and wellness apps, including Apple Health, Function, MyFitnessPal, Weight Watchers, AllTrails, Instacart, and Peloton, to get tailored responses, lab test insights, nutrition advice, personalized meal ideas, and suggested workout classes.

The new feature is rolling out for users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the U.K.



“ChatGPT Health builds on the strong privacy, security, and data controls across ChatGPT with additional, layered protections designed specifically for health — including purpose-built encryption and isolation to keep health conversations protected and compartmentalized,” OpenAI said in a statement.

Noting that more than 230 million people worldwide ask health and wellness-related questions on the platform every week, OpenAI emphasized that the tool is designed to support medical care, not to replace it or serve as a substitute for diagnosis or treatment.

The company also highlighted several privacy and security features built into the Health experience:

  • Health operates in its own silo with enhanced privacy and dedicated memory, safeguarding sensitive data with “purpose-built” encryption and isolation
  • Conversations in Health are not used to train OpenAI’s foundation models
  • Users who attempt to have a health-related conversation in ChatGPT are prompted to switch over to Health for additional protections
  • Health information and memories are not used to contextualize non-Health chats
  • Conversations outside of Health cannot access files, conversations, or memories created within Health
  • Apps can only connect with users’ health data with their explicit permission, even if they’re already connected to ChatGPT for conversations outside of Health
  • All apps available in Health are required to meet OpenAI’s privacy and security requirements, such as collecting only the minimum data needed, and must undergo an additional security review before being included in Health

Furthermore, OpenAI pointed out that it has evaluated the model that powers Health against clinical standards using HealthBench, a benchmark the company introduced in May 2025 to better measure the health capabilities of AI systems, with a focus on safety, clarity, and escalation of care.

“This evaluation-driven approach helps ensure the model performs well on the tasks people actually need help with, including explaining lab results in accessible language, preparing questions for an appointment, interpreting data from wearables and wellness apps, and summarizing care instructions,” it added.

OpenAI’s announcement follows an investigation by The Guardian that found Google AI Overviews providing false and misleading health information. OpenAI and Character.AI are also facing several lawsuits alleging that their tools drove users to suicide and harmful delusions after they confided in the chatbots. A report published by SFGate earlier this week detailed how a 19-year-old died of a drug overdose after trusting ChatGPT for medical advice.



Some parts of this article are sourced from:
thehackernews.com

Copyright © TheCyberSecurity.News, All Rights Reserved.