The Cyber Security News

LangChain, LangGraph Flaws Expose Files, Secrets, Databases in Widely Used AI Frameworks

March 27, 2026

Cybersecurity researchers have disclosed three security vulnerabilities impacting LangChain and LangGraph that, if successfully exploited, could expose filesystem data, environment secrets, and conversation history.

Both LangChain and LangGraph are open-source frameworks used to build applications powered by large language models (LLMs). LangGraph builds on LangChain to support more sophisticated, non-linear agentic workflows. According to statistics from the Python Package Index (PyPI), LangChain, LangChain-Core, and LangGraph were downloaded more than 52 million, 23 million, and 9 million times, respectively, in the last week alone.

“Each vulnerability exposes a different class of enterprise data: filesystem files, environment secrets, and conversation history,” Cyera security researcher Vladimir Tokarev said in a report published Thursday.




The issues, in a nutshell, offer three independent paths an attacker can leverage to exfiltrate sensitive data from any enterprise LangChain deployment. Details of the vulnerabilities are as follows –

  • CVE-2026-34070 (CVSS score: 7.5) – A path traversal vulnerability in LangChain (“langchain_core/prompts/loading.py”) that allows access to arbitrary files without any validation via its prompt-loading API by supplying a specially crafted prompt template.
  • CVE-2025-68664 (CVSS score: 9.3) – A deserialization of untrusted data vulnerability in LangChain that leaks API keys and environment secrets by passing as input a data structure that tricks the application into interpreting it as an already serialized LangChain object rather than regular user data.
  • CVE-2025-67644 (CVSS score: 7.3) – An SQL injection vulnerability in LangGraph SQLite checkpoint implementation that allows an attacker to manipulate SQL queries through metadata filter keys and run arbitrary SQL queries against the database.
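Path traversal bugs of the kind behind the first entry follow a well-worn pattern: a user-supplied file name is joined onto a base directory without validation, so names containing "../" segments escape it. The sketch below is a generic Python illustration of that pattern and its standard mitigation, not LangChain's actual prompt-loading code; the directory and function names are invented for the example.

```python
from pathlib import Path

PROMPTS_DIR = Path("/app/prompts").resolve()  # hypothetical base directory

def load_prompt_unsafe(name: str) -> str:
    # Vulnerable pattern: the name is joined without validation, so
    # "../../etc/passwd" walks out of the prompts directory entirely.
    return (PROMPTS_DIR / name).read_text()

def load_prompt_safe(name: str) -> str:
    # Mitigation: resolve the final path and confirm it still lies
    # inside the allowed directory before touching the filesystem.
    target = (PROMPTS_DIR / name).resolve()
    if not target.is_relative_to(PROMPTS_DIR):  # Python 3.9+
        raise ValueError(f"prompt path escapes {PROMPTS_DIR}: {name}")
    return target.read_text()
```

The containment check is framework-agnostic: canonicalize the path first, then compare it against the allow-listed root.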

Successful exploitation of the aforementioned flaws could allow an attacker to read sensitive files like Docker configurations, siphon secrets via prompt injection, and access conversation histories associated with sensitive workflows. It’s worth noting that details of CVE-2025-68664 were also shared by Cyata in December 2025, which gave the flaw the codename LangGrinch.
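The deserialization flaw illustrates an equally classic failure mode: user-supplied data that happens to carry the framework's serialization markers gets revived as a trusted internal object, and "secret" nodes resolve to environment variables. The snippet below is a hypothetical model of that class of bug, not LangChain's actual loader; the revive functions and the marker layout are invented for illustration.

```python
import os

def revive(node):
    # Vulnerable pattern (hypothetical): any dict carrying an "lc"-style
    # marker is treated as an already-serialized framework object. A
    # "secret" node resolves to an environment variable, so attacker-
    # controlled input can pull out keys the process holds.
    if isinstance(node, dict) and node.get("lc") == 1:
        if node.get("type") == "secret":
            return os.environ.get(node["id"][0], "")
    return node

def revive_untrusted(node):
    # Mitigation sketch: never promote user data to framework objects;
    # reject serialization markers outright on untrusted input.
    if isinstance(node, dict) and node.get("lc") == 1:
        raise ValueError("serialization markers not allowed in user input")
    return node
```

In the vulnerable version, smuggling a marker dict naming an environment variable into any field that is later revived is enough to leak that variable's value back to the attacker.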

The vulnerabilities have been patched in the following versions –

  • CVE-2026-34070 – langchain-core >=1.2.22
  • CVE-2025-68664 – langchain-core 0.3.81 and 1.2.5
  • CVE-2025-67644 – langgraph-checkpoint-sqlite 3.0.1
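For pip-managed environments, upgrading amounts to pinning the minimum fixed versions listed above; the commands below assume a standard virtual environment.

```shell
pip install --upgrade "langchain-core>=1.2.22"
pip install --upgrade "langgraph-checkpoint-sqlite>=3.0.1"
# Projects staying on the 0.3.x line of langchain-core can pin instead:
pip install --upgrade "langchain-core>=0.3.81,<0.4"
```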

The findings once again underscore how artificial intelligence (AI) plumbing is not immune to classic security vulnerabilities, potentially putting entire systems at risk.
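The SQL injection in the checkpoint store is a textbook example: when identifiers (filter keys) are spliced into query text, quoting the values does nothing to stop the attack. The sketch below is a generic sqlite3 illustration with an invented schema, not LangGraph's actual checkpointer, contrasting the vulnerable key-splicing pattern with an allow-list plus bound parameters.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE checkpoints (thread_id TEXT, user_id TEXT, payload TEXT)")
conn.execute("INSERT INTO checkpoints VALUES ('t1', 'alice', 'secret-history')")

def search_unsafe(filters: dict) -> list:
    # Vulnerable pattern: filter *keys* are spliced into the SQL text,
    # so a key like "1=1 OR ''=''" rewrites the query itself.
    where = " AND ".join(f"{key} = '{value}'" for key, value in filters.items())
    return conn.execute(f"SELECT payload FROM checkpoints WHERE {where}").fetchall()

ALLOWED_KEYS = {"thread_id", "user_id"}

def search_safe(filters: dict) -> list:
    # Mitigation: allow-list column names, and bind values as parameters
    # so they can never be interpreted as SQL.
    for key in filters:
        if key not in ALLOWED_KEYS:
            raise ValueError(f"unknown filter key: {key}")
    where = " AND ".join(f"{key} = ?" for key in filters)
    return conn.execute(
        f"SELECT payload FROM checkpoints WHERE {where}", list(filters.values())
    ).fetchall()
```

With the unsafe version, a filter of `{"1=1 OR ''=''": "x"}` matches every row and dumps the whole table; the safe version rejects the key before any SQL is built.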


The development comes days after a critical security flaw impacting Langflow (CVE-2026-33017, CVSS score: 9.3) came under active exploitation within 20 hours of public disclosure, enabling attackers to exfiltrate sensitive data from developer environments.

Naveen Sunkavally, chief architect at Horizon3.ai, said the vulnerability shares the same root cause as CVE-2025-3248, and stems from unauthenticated endpoints executing arbitrary code. With threat actors moving quickly to exploit newly disclosed flaws, it’s essential that users apply the patches as soon as possible for optimal protection.

“LangChain doesn’t exist in isolation. It sits at the center of a massive dependency web that stretches across the AI stack. Hundreds of libraries wrap LangChain, extend it, or depend on it,” Cyera said. “When a vulnerability exists in LangChain’s core, it doesn’t just affect direct users. It ripples outward through every downstream library, every wrapper, every integration that inherits the vulnerable code path.”



Some parts of this article are sourced from thehackernews.com.


Copyright © TheCyberSecurity.News, All Rights Reserved.