• Menu
  • Skip to main content
  • Skip to primary sidebar

The Cyber Security News

Latest Cyber Security News

Header Right

  • Latest News
  • Vulnerabilities
  • Cloud Services
servicenow ai agents can be tricked into acting against each

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

You are here: Home / General Cyber Security News / ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
November 19, 2025

Malicious actors can exploit default configurations in ServiceNow’s Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.

The second-order prompt injection, according to AppOmni, makes use of Now Assist’s agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive corporate data, modify records, and escalate privileges.

“This discovery is alarming because it isn’t a bug in the AI; it’s expected behavior as defined by certain default configuration options,” said Aaron Costello, chief of SaaS Security Research at AppOmni.

✔ Approved From Our Partners
AOMEI Backupper Lifetime

Protect and backup your data using AOMEI Backupper. AOMEI Backupper takes secure and encrypted backups from your Windows, hard drives or partitions. With AOMEI Backupper you will never be worried about loosing your data anymore.

Get AOMEI Backupper with 72% discount from an authorized distrinutor of AOMEI: SerialCart® (Limited Offer).

➤ Activate Your Coupon Code


“When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook.”

DFIR Retainer Services

The attack is made possible because of agent discovery and agent-to-agent collaboration capabilities within ServiceNow’s Now Assist. With Now Assist offering the ability to automate functions such as help-desk operations, the scenario opens the door to possible security risks.

For instance, a benign agent can parse specially crafted prompts embedded into content it’s allowed access to and recruit a more potent agent to read or change records, copy sensitive data, or send emails, even when built-in prompt injection protections are enabled.

The most significant aspect of this attack is that the actions unfold behind the scenes, unbeknownst to the victim organization. At its core, the cross-agent communication is enabled by controllable configuration settings, including the default LLM to use, tool setup options, and channel-specific defaults where the agents are deployed –

  • The underlying large language model (LLM) must support agent discovery (both Azure OpenAI LLM and Now LLM, which is the default choice, support the feature)
  • Now Assist agents are automatically grouped into the same team by default to invoke each other
  • An agent is marked as being discoverable by default when published

While these defaults can be useful to facilitate communication between agents, the architecture can be susceptible to prompt injections when an agent whose main task is to read data that’s not inserted by the user invoking the agent.

“Through second-order prompt injection, an attacker can redirect a benign task assigned to an innocuous agent into something far more harmful by employing the utility and functionality of other agents on its team,” AppOmni said.

CIS Build Kits

“Critically, Now Assist agents run with the privilege of the user who started the interaction unless otherwise configured, and not the privilege of the user who created the malicious prompt and inserted it into a field.”

Following responsible disclosure, ServiceNow said the behavior is intended to be this way, but the company has since updated its documentation to provide more clarity on the matter. The findings demonstrate the need for strengthening AI agent protection, as enterprises increasingly incorporate AI capabilities into their workflows.

To mitigate such prompt injection threats, it’s advised to configure supervised execution mode for privileged agents, disable the autonomous override property (“sn_aia.enable_usecase_tool_execution_mode_override”), segment agent duties by team, and monitor AI agents for suspicious behavior.

“If organizations using Now Assist’s AI agents aren’t closely examining their configurations, they’re likely already at risk,” Costello added.

Found this article interesting? Follow us on Google News, Twitter and LinkedIn to read more exclusive content we post.


Some parts of this article are sourced from:
thehackernews.com

Previous Post: «fortinet warns of new fortiweb cve 2025 58034 vulnerability exploited in the Fortinet Warns of New FortiWeb CVE-2025-58034 Vulnerability Exploited in the Wild
Next Post: EdgeStepper Implant Reroutes DNS Queries to Deploy Malware via Hijacked Software Updates edgestepper implant reroutes dns queries to deploy malware via hijacked»

Reader Interactions

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Primary Sidebar

Report This Article

Recent Posts

  • Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails
  • Critical XXE Bug CVE-2025-66516 (CVSS 10.0) Hits Apache Tika, Requires Urgent Patch
  • Chinese Hackers Have Started Exploiting the Newly Disclosed React2Shell Vulnerability
  • Intellexa Leaks Reveal Zero-Days and Ads-Based Vector for Predator Spyware Delivery
  • “Getting to Yes”: An Anti-Sales Guide for MSPs
  • CISA Reports PRC Hackers Using BRICKSTORM for Long-Term Access in U.S. Systems
  • JPCERT Confirms Active Command Injection Attacks on Array AG Gateways
  • Silver Fox Uses Fake Microsoft Teams Installer to Spread ValleyRAT Malware in China
  • ThreatsDay Bulletin: Wi-Fi Hack, npm Worm, DeFi Theft, Phishing Blasts— and 15 More Stories
  • 5 Threats That Reshaped Web Security This Year [2025]

Copyright © TheCyberSecurity.News, All Rights Reserved.