• Menu
  • Skip to main content
  • Skip to primary sidebar

The Cyber Security News

Latest Cyber Security News

Header Right

  • Latest News
  • Vulnerabilities
  • Cloud Services
u.s. government releases new ai security guidelines for critical infrastructure

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

You are here: Home / General Cyber Security News / U.S. Government Releases New AI Security Guidelines for Critical Infrastructure
April 30, 2024

The U.S. government has unveiled new security pointers aimed at bolstering critical infrastructure in opposition to synthetic intelligence (AI)-associated threats.

“These tips are knowledgeable by the entire-of-govt work to evaluate AI hazards across all sixteen critical infrastructure sectors, and handle threats both to and from, and involving AI methods,” the Department of Homeland Security (DHS) said Monday.

In addition, the agency said it can be functioning to facilitate risk-free, liable, and trusted use of the technology in a way that does not infringe on individuals’ privacy, civil rights, and civil liberties.

✔ Approved Seller From Our Partners
Mullvad VPN Discount

Protect your privacy by Mullvad VPN. Mullvad VPN is one of the famous brands in the security and privacy world. With Mullvad VPN you will not even be asked for your email address. No log policy, no data from you will be saved. Get your license key now from the official distributor of Mullvad with discount: SerialCart® (Limited Offer).

➤ Get Mullvad VPN with 12% Discount


The new direction issues the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such instruments that could end result in unintended outcomes, necessitating the want for transparency and safe by structure procedures to consider and mitigate AI pitfalls.

Cybersecurity

Precisely, this spans 4 diverse capabilities such as govern, map, measure, and take care of all as a result of the AI lifecycle –

  • Set up an organizational culture of AI risk administration
  • Have an understanding of your individual AI use context and risk profile
  • Build units to assess, review, and track AI threats
  • Prioritize and act on AI pitfalls to basic safety and security

“Critical infrastructure house owners and operators need to account for their personal sector-particular and context-particular use of AI when examining AI risks and deciding on acceptable mitigations,” the agency stated.

“Critical infrastructure entrepreneurs and operators really should realize where these dependencies on AI vendors exist and get the job done to share and delineate mitigation obligations appropriately.”

The enhancement arrives months immediately after the 5 Eyes (FVEY) intelligence alliance comprising Australia, Canada, New Zealand, the U.K., and the U.S. unveiled a cybersecurity information sheet noting the very careful set up and configuration demanded for deploying AI units.

“The speedy adoption, deployment, and use of AI capabilities can make them highly worthwhile targets for destructive cyber actors,” the governments stated.

“Actors, who have historically applied knowledge theft of sensitive data and mental assets to advance their interests, may perhaps find to co-choose deployed AI systems and use them to malicious ends.”

The encouraged ideal practices include things like taking ways to protected the deployment atmosphere, overview the resource of AI types and provide chain security, make certain a sturdy deployment atmosphere architecture, harden deployment natural environment configurations, validate the AI procedure to guarantee its integrity, defend product weights, implement rigid accessibility controls, perform external audits, and put into practice strong logging.

Previously this month, the CERT Coordination Heart (CERT/CC) specific a shortcoming in the Keras 2 neural network library that could be exploited by an attacker to trojanize a well known AI model and redistribute it, efficiently poisoning the supply chain of dependent apps.

Latest research has discovered AI programs to be susceptible to a vast selection of prompt injection attacks that induce the AI design to circumvent security mechanisms and make destructive outputs.

Cybersecurity

“Prompt injection attacks by means of poisoned material are a major security risk due to the fact an attacker who does this can possibly issue commands to the AI program as if they ended up the user,” Microsoft famous in a new report.

One particular these types of method, dubbed Crescendo, has been explained as a multiturn big language design (LLM) jailbreak, which, like Anthropic’s many-shot jailbreaking, tricks the design into generating destructive information by “inquiring thoroughly crafted issues or prompts that step by step direct the LLM to a preferred outcome, somewhat than asking for the aim all at after.”

LLM jailbreak prompts have become preferred amid cybercriminals seeking to craft productive phishing lures, even as nation-point out actors have started weaponizing generative AI to orchestrate espionage and impact functions.

Even extra concerningly, scientific studies from the College of Illinois Urbana-Champaign has learned that LLM brokers can be set to use to autonomously exploit one-working day vulnerabilities in genuine-planet methods merely using their CVE descriptions and “hack sites, doing responsibilities as advanced as blind databases schema extraction and SQL injections with out human feed-back.”

Identified this posting interesting? Adhere to us on Twitter  and LinkedIn to examine additional exceptional content material we write-up.


Some components of this report are sourced from:
thehackernews.com

Previous Post: «new u.k. law bans default passwords on smart devices starting New U.K. Law Bans Default Passwords on Smart Devices Starting April 2024
Next Post: Millions of Malicious ‘Imageless’ Containers Planted on Docker Hub Over 5 Years millions of malicious 'imageless' containers planted on docker hub over»

Reader Interactions

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Primary Sidebar

Report This Article

Recent Posts

  • Zero-Click Agentic Browser Attack Can Delete Entire Google Drive Using Crafted Emails
  • Critical XXE Bug CVE-2025-66516 (CVSS 10.0) Hits Apache Tika, Requires Urgent Patch
  • Chinese Hackers Have Started Exploiting the Newly Disclosed React2Shell Vulnerability
  • Intellexa Leaks Reveal Zero-Days and Ads-Based Vector for Predator Spyware Delivery
  • “Getting to Yes”: An Anti-Sales Guide for MSPs
  • CISA Reports PRC Hackers Using BRICKSTORM for Long-Term Access in U.S. Systems
  • JPCERT Confirms Active Command Injection Attacks on Array AG Gateways
  • Silver Fox Uses Fake Microsoft Teams Installer to Spread ValleyRAT Malware in China
  • ThreatsDay Bulletin: Wi-Fi Hack, npm Worm, DeFi Theft, Phishing Blasts— and 15 More Stories
  • 5 Threats That Reshaped Web Security This Year [2025]

Copyright © TheCyberSecurity.News, All Rights Reserved.