• Menu
  • Skip to main content
  • Skip to primary sidebar

The Cyber Security News

Latest Cyber Security News

Header Right

  • Latest News
  • Vulnerabilities
  • Cloud Services
chatgpt macos flaw could've enabled long term spyware via memory function

ChatGPT macOS Flaw Could’ve Enabled Long-Term Spyware via Memory Function

You are here: Home / General Cyber Security News / ChatGPT macOS Flaw Could’ve Enabled Long-Term Spyware via Memory Function
September 25, 2024

A now-patched security vulnerability in OpenAI’s ChatGPT app for macOS could have made it possible for attackers to plant long-term persistent spyware into the artificial intelligence (AI) tool’s memory.

The technique, dubbed SpAIware, could be abused to facilitate “continuous data exfiltration of any information the user typed or responses received by ChatGPT, including any future chat sessions,” security researcher Johann Rehberger said.

The issue, at its core, abuses a feature called memory, which OpenAI introduced earlier this February before rolling it out to ChatGPT Free, Plus, Team, and Enterprise users at the start of the month.

✔ Approved From Our Partners
AOMEI Backupper Lifetime

Protect and backup your data using AOMEI Backupper. AOMEI Backupper takes secure and encrypted backups from your Windows, hard drives or partitions. With AOMEI Backupper you will never be worried about loosing your data anymore.

Get AOMEI Backupper with 72% discount from an authorized distrinutor of AOMEI: SerialCart® (Limited Offer).

➤ Activate Your Coupon Code


What it does is essentially allow ChatGPT to remember certain things across chats so that it saves users the effort of repeating the same information over and over again. Users also have the option to instruct the program to forget something.

Cybersecurity

“ChatGPT’s memories evolve with your interactions and aren’t linked to specific conversations,” OpenAI says. “Deleting a chat doesn’t erase its memories; you must delete the memory itself.”

The attack technique also builds on prior findings that involve using indirect prompt injection to manipulate memories so as to remember false information, or even malicious instructions, achieving a form of persistence that survives between conversations.

“Since the malicious instructions are stored in ChatGPT’s memory, all new conversation going forward will contain the attackers instructions and continuously send all chat conversation messages, and replies, to the attacker,” Rehberger said.

“So, the data exfiltration vulnerability became a lot more dangerous as it now spawns across chat conversations.”

ChatGPT macOS Flaw

In a hypothetical attack scenario, a user could be tricked into visiting a malicious site or downloading a booby-trapped document that’s subsequently analyzed using ChatGPT to update the memory.

The website or the document could contain instructions to clandestinely send all future conversations to an adversary-controlled server going forward, which can then be retrieved by the attacker on the other end beyond a single chat session.

Following responsible disclosure, OpenAI has addressed the issue with ChatGPT version 1.2024.247 by closing out the exfiltration vector.

“ChatGPT users should regularly review the memories the system stores about them, for suspicious or incorrect ones and clean them up,” Rehberger said.

“This attack chain was quite interesting to put together, and demonstrates the dangers of having long-term memory being automatically added to a system, both from a misinformation/scam point of view, but also regarding continuous communication with attacker controlled servers.”

The disclosure comes as a group of academics has uncovered a novel AI jailbreaking technique codenamed MathPrompt that exploits large language models’ (LLMs) advanced capabilities in symbolic mathematics to get around their safety mechanisms.

Cybersecurity

“MathPrompt employs a two-step process: first, transforming harmful natural language prompts into symbolic mathematics problems, and then presenting these mathematically encoded prompts to a target LLM,” the researchers pointed out.

The study, upon testing against 13 state-of-the-art LLMs, found that the models respond with harmful output 73.6% of the time on average when presented with mathematically encoded prompts, as opposed to approximately 1% with unmodified harmful prompts.

It also follows Microsoft’s debut of a new Correction capability that, as the name implies, allows for the correction of AI outputs when inaccuracies (i.e., hallucinations) are detected.

“Building on our existing Groundedness Detection feature, this groundbreaking capability allows Azure AI Content Safety to both identify and correct hallucinations in real-time before users of generative AI applications encounter them,” the tech giant said.

Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.


Some parts of this article are sourced from:
thehackernews.com

Previous Post: «transportation companies hit by cyberattacks using lumma stealer and netsupport Transportation Companies Hit by Cyberattacks Using Lumma Stealer and NetSupport Malware
Next Post: Agentic AI in SOCs: A Solution to SOAR’s Unfulfilled Promises agentic ai in socs: a solution to soar's unfulfilled promises»

Reader Interactions

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Primary Sidebar

Report This Article

Recent Posts

  • Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails
  • Critical XXE Bug CVE-2025-66516 (CVSS 10.0) Hits Apache Tika, Requires Urgent Patch
  • Chinese Hackers Have Started Exploiting the Newly Disclosed React2Shell Vulnerability
  • Intellexa Leaks Reveal Zero-Days and Ads-Based Vector for Predator Spyware Delivery
  • “Getting to Yes”: An Anti-Sales Guide for MSPs
  • CISA Reports PRC Hackers Using BRICKSTORM for Long-Term Access in U.S. Systems
  • JPCERT Confirms Active Command Injection Attacks on Array AG Gateways
  • Silver Fox Uses Fake Microsoft Teams Installer to Spread ValleyRAT Malware in China
  • ThreatsDay Bulletin: Wi-Fi Hack, npm Worm, DeFi Theft, Phishing Blasts— and 15 More Stories
  • 5 Threats That Reshaped Web Security This Year [2025]

Copyright © TheCyberSecurity.News, All Rights Reserved.