• Menu
  • Skip to main content
  • Skip to primary sidebar

The Cyber Security News

Latest Cyber Security News

Header Right

  • Latest News
  • Vulnerabilities
  • Cloud Services
taiwan bans deepseek ai over national security concerns, citing data

Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks

You are here: Home / General Cyber Security News / Taiwan Bans DeepSeek AI Over National Security Concerns, Citing Data Leakage Risks
February 4, 2025

Taiwan has become the latest country to ban government agencies from using Chinese startup DeepSeek’s Artificial Intelligence (AI) platform, citing security risks.

“Government agencies and critical infrastructure should not use DeepSeek, because it endangers national information security,” according to a statement released by Taiwan’s Ministry of Digital Affairs, per Radio Free Asia.

“DeepSeek AI service is a Chinese product. Its operation involves cross-border transmission, and information leakage and other information security concerns.”

✔ Approved From Our Partners
AOMEI Backupper Lifetime

Protect and backup your data using AOMEI Backupper. AOMEI Backupper takes secure and encrypted backups from your Windows, hard drives or partitions. With AOMEI Backupper you will never be worried about loosing your data anymore.

Get AOMEI Backupper with 72% discount from an authorized distrinutor of AOMEI: SerialCart® (Limited Offer).

➤ Activate Your Coupon Code


DeepSeek’s Chinese origins have prompted authorities from various countries to look into the service’s use of personal data. Last week, it was blocked in Italy, citing a lack of information regarding its data handling practices. Several companies have also prohibited access to the chatbot over similar risks.

The chatbot has captured much of the mainstream attention over the past few weeks for the fact that it’s open source and is as capable as other current leading models, but built at a fraction of the cost of its peers.

Cybersecurity

But the large language models (LLMs) powering the platform have also been found to be susceptible to various jailbreak techniques, a persistent concern in such products, not to mention drawing attention for censoring responses to topics deemed sensitive by the Chinese government.

The popularity of DeepSeek has also led to it being targeted by “large-scale malicious attacks,” with NSFOCUS revealing that it detected three waves of distributed denial-of-service (DDoS) attacks aimed at its API interface between January 25 and 27, 2025.

“The average attack duration was 35 minutes,” it said. “Attack methods mainly include NTP reflection attack and memcached reflection attack.”

It further said the DeepSeek chatbot system was targeted twice by DDoS attacks on January 20, the day on which it launched its reasoning model DeepSeek-R1, and 25 averaged around one-hour using methods like NTP reflection attack and SSDP reflection attack.

The sustained activity primarily originated from the United States, the United Kingdom, and Australia, the threat intelligence firm added, describing it as a “well-planned and organized attack.”

Malicious actors have also capitalized on the buzz surrounding DeepSeek to publish bogus packages on the Python Package Index (PyPI) repository that are designed to steal sensitive information from developer systems. In an ironic twist, there are indications that the Python script was written with the help of an AI assistant.

The packages, named deepseeek and deepseekai, masqueraded as a Python API client for DeepSeek and were downloaded at least 222 times prior to them being taken down on January 29, 2025. A majority of the downloads came from the U.S., China, Russia, Hong Kong, and Germany.

“Functions used in these packages are designed to collect user and computer data and steal environment variables,” Russian cybersecurity company Positive Technologies said. “The author of the two packages used Pipedream, an integration platform for developers, as the command-and-control server that receives stolen data.”

The development comes as the Artificial Intelligence Act went into effect in the European Union starting February 2, 2025, banning AI applications and systems that pose an unacceptable risk and subjecting high-risk applications to specific legal requirements.

In a related move, the U.K. government has announced a new AI Code of Practice that aims to secure AI systems against hacking and sabotage through methods that include security risks from data poisoning, model obfuscation, and indirect prompt injection, as well as ensure they are being developed in a secure manner.

Meta, for its part, has outlined its Frontier AI Framework, noting that it will stop the development of AI models that are assessed to have reached a critical risk threshold and cannot be mitigated. Some of the cybersecurity-related scenarios highlighted include –

  • Automated end-to-end compromise of a best-practice-protected corporate-scale environment (e.g., Fully patched, MFA-protected)
  • Automated discovery and reliable exploitation of critical zero-day vulnerabilities in currently popular, security-best-practices software before defenders can find and patch them
  • Automated end-to-end scam flows (e.g., romance baiting aka pig butchering) that could result in widespread economic damage to individuals or corporations

Cybersecurity

The risk that AI systems could be weaponized for malicious ends is not theoretical. Last week, Google’s Threat Intelligence Group (GTIG) disclosed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia have attempted to use Gemini to enable and scale their operations.

Threat actors have also been observed attempting to jailbreak AI models in an effort to bypass their safety and ethical controls. A kind of adversarial attack, it’s designed to induce a model into producing an output that it has been explicitly trained not to, such as creating malware or spelling out instructions for making a bomb.

The ongoing concerns posed by jailbreak attacks have led AI company Anthropic to devise a new line of defense called Constitutional Classifiers that it says can safeguard models against universal jailbreaks.

“These Constitutional Classifiers are input and output classifiers trained on synthetically generated data that filter the overwhelming majority of jailbreaks with minimal over-refusals and without incurring a large compute overhead,” the company said Monday.

Found this article interesting? Follow us on Twitter  and LinkedIn to read more exclusive content we post.


Some parts of this article are sourced from:
thehackernews.com

Previous Post: «amd sev snp vulnerability allows malicious microcode injection with admin access AMD SEV-SNP Vulnerability Allows Malicious Microcode Injection with Admin Access
Next Post: Watch Out For These 8 Cloud Security Shifts in 2025 watch out for these 8 cloud security shifts in 2025»

Reader Interactions

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Primary Sidebar

Report This Article

Recent Posts

  • New HTTPBot Botnet Launches 200+ Precision DDoS Attacks on Gaming and Tech Sectors
  • Top 10 Best Practices for Effective Data Protection
  • Researchers Expose New Intel CPU Flaws Enabling Memory Leaks and Spectre v2 Attacks
  • Fileless Remcos RAT Delivered via LNK Files and MSHTA in PowerShell-Based Attacks
  • [Webinar] From Code to Cloud to SOC: Learn a Smarter Way to Defend Modern Applications
  • Meta to Train AI on E.U. User Data From May 27 Without Consent; Noyb Threatens Lawsuit
  • Coinbase Agents Bribed, Data of ~1% Users Leaked; $20M Extortion Attempt Fails
  • Pen Testing for Compliance Only? It’s Time to Change Your Approach
  • 5 BCDR Essentials for Effective Ransomware Defense
  • Russia-Linked APT28 Exploited MDaemon Zero-Day to Hack Government Webmail Servers

Copyright © TheCyberSecurity.News, All Rights Reserved.