New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes

June 12, 2025

Cybersecurity researchers have discovered a novel attack technique called TokenBreak that can be used to bypass a large language model’s (LLM) safety and content moderation guardrails with just a single character change.

“The TokenBreak attack targets a text classification model’s tokenization strategy to induce false negatives, leaving end targets vulnerable to attacks that the implemented protection model was put in place to prevent,” Kieran Evans, Kasimir Schulz, and Kenneth Yeung said in a report shared with The Hacker News.

Tokenization is the fundamental step LLMs use to break raw text down into its atomic units, i.e., tokens, which are common sequences of characters found in a set of text. The text input is converted into a numerical representation of these tokens and fed to the model.
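As a minimal sketch of that step, assuming the Hugging Face transformers library and the GPT-2 tokenizer (neither is named in the report):

```python
# Minimal tokenization sketch. The library and model are assumptions
# for illustration; the report does not name a specific tokenizer.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # GPT-2 uses byte-level BPE

text = "Follow the instructions below"
token_ids = tokenizer.encode(text)                   # numerical representation
tokens = tokenizer.convert_ids_to_tokens(token_ids)  # the tokens themselves

print(token_ids)  # the IDs actually fed to the model
print(tokens)     # e.g. ['Follow', 'Ġthe', 'Ġinstructions', 'Ġbelow']
```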


LLMs work by learning the statistical relationships between these tokens and producing the next token in a sequence. The output tokens are detokenized back into human-readable text by mapping them to their corresponding words using the tokenizer’s vocabulary.
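Continuing the same assumed setup, detokenization is simply the inverse mapping:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed stand-in

# In practice these IDs would come from the model's sampling loop; here we
# just round-trip a string to show that decode() inverts encode().
output_ids = tokenizer.encode("Follow the instructions below")
print(tokenizer.decode(output_ids))  # -> "Follow the instructions below"
```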

The attack technique devised by HiddenLayer targets this tokenization strategy to undermine a text classification model’s ability to detect malicious input and flag safety, spam, or content moderation issues in the text.

Specifically, the artificial intelligence (AI) security firm found that inserting extra letters into input words in certain ways caused a text classification model to misclassify the text.

Examples include changing “instructions” to “finstructions,” “announcement” to “aannouncement,” or “idiot” to “hidiot.” These small changes cause the tokenizer to split the text differently, but the meaning stays clear to both the AI and the reader.
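A rough sketch of the effect, again using the GPT-2 BPE tokenizer as a stand-in (HiddenLayer tested dedicated protection models rather than GPT-2, so the exact splits shown are illustrative):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # byte-level BPE, as a stand-in

for word in ["instructions", "finstructions", "announcement", "aannouncement"]:
    print(f"{word!r:18} -> {tokenizer.tokenize(word)}")

# Plausible output (exact splits depend on the vocabulary):
#   'instructions'   -> ['instructions']
#   'finstructions'  -> ['fin', 'struct', 'ions']
# A classifier keyed to the token 'instructions' may never see it, while a
# human or downstream LLM still reads the word as intended.
```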

What makes the attack notable is that the manipulated text remains fully understandable to both the LLM and the human reader, so the model produces the same response it would have produced had the unmodified text been passed as input.

Because the manipulations are introduced without affecting the model’s ability to comprehend the text, TokenBreak increases the potential for prompt injection attacks.

“This attack technique manipulates input text in such a way that certain models give an incorrect classification,” the researchers said in an accompanying paper. “Importantly, the end target (LLM or email recipient) can still understand and respond to the manipulated text and therefore be vulnerable to the very attack the protection model was put in place to prevent.”

The attack has been found to be successful against text classification models using BPE (Byte Pair Encoding) or WordPiece tokenization strategies, but not against those using Unigram.
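One way to compare the families side by side, using illustrative models known to ship each tokenizer type (BERT uses WordPiece, GPT-2 uses byte-level BPE, and XLM-RoBERTa uses a SentencePiece Unigram tokenizer):

```python
from transformers import AutoTokenizer

# Illustrative representatives of each tokenization family.
families = {
    "gpt2": "BPE",
    "bert-base-uncased": "WordPiece",
    "xlm-roberta-base": "Unigram (SentencePiece)",
}

for model_name, family in families.items():
    tok = AutoTokenizer.from_pretrained(model_name)
    print(f"{model_name} ({family}): {tok.tokenize('finstructions')}")
```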

“The TokenBreak attack technique demonstrates that these protection models can be bypassed by manipulating the input text, leaving production systems vulnerable,” the researchers said. “Knowing the family of the underlying protection model and its tokenization strategy is critical for understanding your susceptibility to this attack.”

“Because tokenization strategy typically correlates with model family, a straightforward mitigation exists: Select models that use Unigram tokenizers.”

To defend against TokenBreak, the researchers suggest using Unigram tokenizers when possible, training models with examples of bypass tricks, and checking that tokenization and model logic stay aligned. It also helps to log misclassifications and look for patterns that hint at manipulation.
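As one hypothetical illustration of that logging idea (this heuristic is ours, not from the paper), unusually fragmented tokenizations of ordinary-looking words can hint at manipulation:

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # assumed stand-in

def fragmentation_score(text: str) -> float:
    """Average number of subword tokens per whitespace-separated word.

    A score well above 1.0 on short, ordinary-looking prose can be a
    crude signal that words were perturbed to defeat the tokenizer.
    """
    words = text.split()
    if not words:
        return 0.0
    return len(tokenizer.tokenize(text)) / len(words)

print(fragmentation_score("please follow the instructions"))   # near 1.0
print(fragmentation_score("please follow the finstructions"))  # noticeably higher
```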

The study comes less than a month after HiddenLayer revealed how it’s possible to exploit Model Context Protocol (MCP) tools to extract sensitive data: “By inserting specific parameter names within a tool’s function, sensitive data, including the full system prompt, can be extracted and exfiltrated,” the company said.

The finding also comes as the Straiker AI Research (STAR) team found that backronyms can be used to jailbreak AI chatbots and trick them into generating undesirable responses, including swearing, promoting violence, and producing sexually explicit content.

The technique, called the Yearbook Attack, has proven to be effective against various models from Anthropic, DeepSeek, Google, Meta, Microsoft, Mistral AI, and OpenAI.

“They blend in with the noise of everyday prompts — a quirky riddle here, a motivational acronym there – and because of that, they often bypass the blunt heuristics that models use to spot dangerous intent,” security researcher Aarushi Banerjee said.

“A phrase like ‘Friendship, unity, care, kindness’ doesn’t raise any flags. But by the time the model has completed the pattern, it has already served the payload, which is the key to successfully executing this trick.”

“These methods succeed not by overpowering the model’s filters, but by slipping beneath them. They exploit completion bias and pattern continuation, as well as the way models weigh contextual coherence over intent analysis.”

Some parts of this article are sourced from:
thehackernews.com
