• Menu
  • Skip to main content
  • Skip to primary sidebar

The Cyber Security News

Latest Cyber Security News

Header Right

  • Latest News
  • Vulnerabilities
  • Cloud Services
u.s. government releases new ai security guidelines for critical infrastructure

U.S. Government Releases New AI Security Guidelines for Critical Infrastructure

You are here: Home / General Cyber Security News / U.S. Government Releases New AI Security Guidelines for Critical Infrastructure
April 30, 2024

The U.S. government has unveiled new security pointers aimed at bolstering critical infrastructure in opposition to synthetic intelligence (AI)-associated threats.

“These tips are knowledgeable by the entire-of-govt work to evaluate AI hazards across all sixteen critical infrastructure sectors, and handle threats both to and from, and involving AI methods,” the Department of Homeland Security (DHS) said Monday.

In addition, the agency said it can be functioning to facilitate risk-free, liable, and trusted use of the technology in a way that does not infringe on individuals’ privacy, civil rights, and civil liberties.

✔ Approved From Our Partners
AOMEI Backupper Lifetime

Protect and backup your data using AOMEI Backupper. AOMEI Backupper takes secure and encrypted backups from your Windows, hard drives or partitions. With AOMEI Backupper you will never be worried about loosing your data anymore.

Get AOMEI Backupper with 72% discount from an authorized distrinutor of AOMEI: SerialCart® (Limited Offer).

➤ Activate Your Coupon Code


The new direction issues the use of AI to augment and scale attacks on critical infrastructure, adversarial manipulation of AI systems, and shortcomings in such instruments that could end result in unintended outcomes, necessitating the want for transparency and safe by structure procedures to consider and mitigate AI pitfalls.

Cybersecurity

Precisely, this spans 4 diverse capabilities such as govern, map, measure, and take care of all as a result of the AI lifecycle –

  • Set up an organizational culture of AI risk administration
  • Have an understanding of your individual AI use context and risk profile
  • Build units to assess, review, and track AI threats
  • Prioritize and act on AI pitfalls to basic safety and security

“Critical infrastructure house owners and operators need to account for their personal sector-particular and context-particular use of AI when examining AI risks and deciding on acceptable mitigations,” the agency stated.

“Critical infrastructure entrepreneurs and operators really should realize where these dependencies on AI vendors exist and get the job done to share and delineate mitigation obligations appropriately.”

The enhancement arrives months immediately after the 5 Eyes (FVEY) intelligence alliance comprising Australia, Canada, New Zealand, the U.K., and the U.S. unveiled a cybersecurity information sheet noting the very careful set up and configuration demanded for deploying AI units.

“The speedy adoption, deployment, and use of AI capabilities can make them highly worthwhile targets for destructive cyber actors,” the governments stated.

“Actors, who have historically applied knowledge theft of sensitive data and mental assets to advance their interests, may perhaps find to co-choose deployed AI systems and use them to malicious ends.”

The encouraged ideal practices include things like taking ways to protected the deployment atmosphere, overview the resource of AI types and provide chain security, make certain a sturdy deployment atmosphere architecture, harden deployment natural environment configurations, validate the AI procedure to guarantee its integrity, defend product weights, implement rigid accessibility controls, perform external audits, and put into practice strong logging.

Previously this month, the CERT Coordination Heart (CERT/CC) specific a shortcoming in the Keras 2 neural network library that could be exploited by an attacker to trojanize a well known AI model and redistribute it, efficiently poisoning the supply chain of dependent apps.

Latest research has discovered AI programs to be susceptible to a vast selection of prompt injection attacks that induce the AI design to circumvent security mechanisms and make destructive outputs.

Cybersecurity

“Prompt injection attacks by means of poisoned material are a major security risk due to the fact an attacker who does this can possibly issue commands to the AI program as if they ended up the user,” Microsoft famous in a new report.

One particular these types of method, dubbed Crescendo, has been explained as a multiturn big language design (LLM) jailbreak, which, like Anthropic’s many-shot jailbreaking, tricks the design into generating destructive information by “inquiring thoroughly crafted issues or prompts that step by step direct the LLM to a preferred outcome, somewhat than asking for the aim all at after.”

LLM jailbreak prompts have become preferred amid cybercriminals seeking to craft productive phishing lures, even as nation-point out actors have started weaponizing generative AI to orchestrate espionage and impact functions.

Even extra concerningly, scientific studies from the College of Illinois Urbana-Champaign has learned that LLM brokers can be set to use to autonomously exploit one-working day vulnerabilities in genuine-planet methods merely using their CVE descriptions and “hack sites, doing responsibilities as advanced as blind databases schema extraction and SQL injections with out human feed-back.”

Identified this posting interesting? Adhere to us on Twitter  and LinkedIn to examine additional exceptional content material we write-up.


Some components of this report are sourced from:
thehackernews.com

Previous Post: «new u.k. law bans default passwords on smart devices starting New U.K. Law Bans Default Passwords on Smart Devices Starting April 2024
Next Post: Millions of Malicious ‘Imageless’ Containers Planted on Docker Hub Over 5 Years millions of malicious 'imageless' containers planted on docker hub over»

Reader Interactions

Leave a Reply Cancel reply

Your email address will not be published. Required fields are marked *

Primary Sidebar

Report This Article

Recent Posts

  • Discord Invite Link Hijacking Delivers AsyncRAT and Skuld Stealer Targeting Crypto Wallets
  • Over 269,000 Websites Infected with JSFireTruck JavaScript Malware in One Month
  • Ransomware Gangs Exploit Unpatched SimpleHelp Flaws to Target Victims with Double Extortion
  • CTEM is the New SOC: Shifting from Monitoring Alerts to Measuring Risk
  • Apple Zero-Click Flaw in Messages Exploited to Spy on Journalists Using Paragon Spyware
  • WordPress Sites Turned Weapon: How VexTrio and Affiliates Run a Global Scam Network
  • New TokenBreak Attack Bypasses AI Moderation with Single-Character Text Changes
  • AI Agents Run on Secret Accounts — Learn How to Secure Them in This Webinar
  • Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction
  • Non-Human Identities: How to Address the Expanding Security Risk

Copyright © TheCyberSecurity.News, All Rights Reserved.