The Cyber Security News

DeepSeek AI Database Exposed: Over 1 Million Log Lines, Secret Keys Leaked

January 30, 2025

Buzzy Chinese artificial intelligence (AI) startup DeepSeek, which has had a meteoric rise in popularity in recent days, left one of its databases exposed on the internet, which could have allowed malicious actors to gain access to sensitive data.

The ClickHouse database “allows full control over database operations, including the ability to access internal data,” Wiz security researcher Gal Nagli said.

The exposure also includes more than a million lines of log streams containing chat history, secret keys, backend details, and other highly sensitive information, such as API Secrets and operational metadata. DeepSeek has since plugged the security hole following attempts by the cloud security firm to contact them.


The database, hosted at oauth2callback.deepseek[.]com:9000 and dev.deepseek[.]com:9000, is said to have enabled unauthorized access to a wide range of information. The exposure, Wiz noted, allowed for complete database control and potential privilege escalation within the DeepSeek environment without requiring any authentication.

This involved leveraging ClickHouse’s HTTP interface to execute arbitrary SQL queries directly via the web browser. It’s currently unclear if other malicious actors seized the opportunity to access or download the data.
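As a sketch of what this kind of exposure looks like in practice: ClickHouse's HTTP interface accepts SQL in a `query` URL parameter, so an instance with no authentication configured will run arbitrary statements for any client that can reach the port. The host names below are illustrative placeholders, not the actual DeepSeek endpoints, and the table name is hypothetical.

```python
from urllib.parse import urlencode

def clickhouse_http_url(host: str, port: int, sql: str) -> str:
    # ClickHouse's HTTP interface takes the SQL statement in the
    # "query" parameter; with no auth configured, any client can run it.
    return f"http://{host}:{port}/?{urlencode({'query': sql})}"

# Enumerate tables, then read from one -- the kind of unauthenticated
# access Wiz described (hosts and table name are placeholders).
print(clickhouse_http_url("dev.example.com", 9000, "SHOW TABLES"))
print(clickhouse_http_url("dev.example.com", 9000,
                          "SELECT * FROM log_stream LIMIT 10"))
```

Because the interface is plain HTTP, these same queries can be issued directly from a browser's address bar, which is the scenario the researchers describe.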

“The rapid adoption of AI services without corresponding security is inherently risky,” Nagli said in a statement shared with The Hacker News. “While much of the attention around AI security is focused on futuristic threats, the real dangers often come from basic risks—like the accidental external exposure of databases.”

“Protecting customer data must remain the top priority for security teams, and it is crucial that security teams work closely with AI engineers to safeguard data and prevent exposure.”


DeepSeek has become the topic du jour in AI circles for its groundbreaking open-source models, which are said to rival leading systems from companies like OpenAI while being more efficient and cost-effective. Its reasoning model R1 has been hailed as “AI’s Sputnik moment.”

The upstart’s AI chatbot has raced to the top of the app store charts across Android and iOS in several markets, even as it has emerged as the target of “large-scale malicious attacks,” prompting it to temporarily pause registrations.

In an update posted on January 29, 2025, the company said it has identified the issue and that it’s working towards implementing a fix.

At the same time, the company has also been at the receiving end of scrutiny about its privacy policies, not to mention its Chinese ties becoming a matter of national security concern for the United States.


Furthermore, DeepSeek’s apps became unavailable in Italy shortly after the country’s data protection regulator requested information about its data handling practices and where it obtained its training data. It’s not known if the withdrawal of the apps was in response to questions from the watchdog.

Bloomberg, The Financial Times, and The Wall Street Journal have also reported that both OpenAI and Microsoft are probing whether DeepSeek used OpenAI’s application programming interface (API) without permission to train its own models on the output of OpenAI’s systems, an approach referred to as distillation.

“We know that groups in [China] are actively working to use methods, including what’s known as distillation, to try to replicate advanced US AI models,” an OpenAI spokesperson told The Guardian.



Some parts of this article are sourced from:
thehackernews.com
