The Cyber Security News

DeepSeek AI Database Exposed: Over 1 Million Log Lines, Secret Keys Leaked

January 30, 2025

Buzzy Chinese artificial intelligence (AI) startup DeepSeek, which has seen a meteoric rise in popularity in recent days, left one of its databases exposed on the internet, an oversight that could have allowed malicious actors to access sensitive data.

The ClickHouse database “allows full control over database operations, including the ability to access internal data,” Wiz security researcher Gal Nagli said.

The exposure also included more than a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information, such as API secrets and operational metadata. DeepSeek has since plugged the security hole after the cloud security firm reached out to the company.


The database, hosted at oauth2callback.deepseek[.]com:9000 and dev.deepseek[.]com:9000, is said to have enabled unauthorized access to a wide range of information. The exposure, Wiz noted, allowed for complete database control and potential privilege escalation within the DeepSeek environment without requiring any authentication.

This involved leveraging ClickHouse’s HTTP interface to execute arbitrary SQL queries directly via the web browser. It’s currently unclear if other malicious actors seized the opportunity to access or download the data.
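To illustrate the attack surface described above: ClickHouse's HTTP interface accepts SQL in a plain GET request, so an unauthenticated endpoint can be queried from any browser or HTTP client. The sketch below builds such a request URL. The host is a placeholder (not DeepSeek's actual endpoints, which have since been secured), and port 8123 is assumed here as ClickHouse's default HTTP port.

```python
from urllib.parse import urlencode

# Placeholder host standing in for an exposed ClickHouse HTTP interface.
# Port 8123 is ClickHouse's default HTTP port (9000 is its native TCP port).
HOST = "http://example-clickhouse-host:8123"

def clickhouse_query_url(sql: str) -> str:
    """Build the URL a browser would hit to run `sql` against an
    unauthenticated ClickHouse HTTP interface."""
    return f"{HOST}/?{urlencode({'query': sql})}"

# Enumerating tables is a typical first step in assessing such an exposure.
print(clickhouse_query_url("SHOW TABLES"))
```

Because no credentials are involved, the same mechanism that makes ClickHouse convenient for ad-hoc queries also makes an internet-exposed instance trivially readable, which is why Wiz could demonstrate full database access without authentication.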

“The rapid adoption of AI services without corresponding security is inherently risky,” Nagli said in a statement shared with The Hacker News. “While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks—like the accidental external exposure of databases.”

“Protecting customer data must remain the top priority for security teams, and it is crucial that security teams work closely with AI engineers to safeguard data and prevent exposure.”


DeepSeek has become the topic du jour in AI circles for its groundbreaking open-source models, which claim to rival leading AI systems from the likes of OpenAI while being more efficient and cost-effective. Its reasoning model R1 has been hailed as “AI’s Sputnik moment.”

The upstart’s AI chatbot has raced to the top of the app store charts across Android and iOS in several markets, even as it has emerged as the target of “large-scale malicious attacks,” prompting it to temporarily pause registrations.

In an update posted on January 29, 2025, the company said it has identified the issue and that it’s working towards implementing a fix.

At the same time, the company has come under scrutiny over its privacy policies, while its Chinese ties have become a matter of national security concern for the United States.


Furthermore, DeepSeek’s apps became unavailable in Italy shortly after the country’s data protection regulator requested information about its data handling practices and where it obtained its training data. It’s not known if the withdrawal of the apps was in response to questions from the watchdog.

Bloomberg, The Financial Times, and The Wall Street Journal have also reported that both OpenAI and Microsoft are probing whether DeepSeek used OpenAI’s application programming interface (API) without permission to train its own models on the output of OpenAI’s systems, an approach referred to as distillation.

“We know that groups in [China] are actively working to use methods, including what’s known as distillation, to try to replicate advanced US AI models,” an OpenAI spokesperson told The Guardian.



Some parts of this article are sourced from:
thehackernews.com
