Microsoft has released an open source tool to help developers evaluate the security of their machine learning systems.
The Counterfit project, now available on GitHub, comprises a command-line tool and generic automation layer that allows developers to simulate cyber attacks against AI systems.
Microsoft’s red team has used Counterfit to test its own AI models, while the wider company is also exploring using the tool in AI development.
Anyone can download the tool and deploy it via Azure Shell, to run in-browser, or locally in an Anaconda Python environment.
It can assess AI models hosted in various cloud environments, on-premises, or at the edge. Microsoft also promoted its versatility by highlighting the fact that it’s agnostic to AI models and supports a range of data types, including text, images, or generic input.
“Our tool makes published attack algorithms accessible to the security community and helps provide an extensible interface from which to build, manage, and launch attacks on AI models,” Microsoft said.
“This tool is part of broader efforts at Microsoft to empower engineers to securely develop and deploy AI systems.”
The three key ways that security professionals can deploy Counterfit are pen testing and red teaming AI systems, scanning AI systems for vulnerabilities, and logging attacks against AI models.
The tool comes preloaded with attack algorithms, while security professionals can also use the built-in cmd2 scripting engine to hook into Counterfit from existing offensive tools for testing purposes.
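To illustrate the kind of interface this enables: cmd2 extends Python’s standard-library `cmd` module, so the command-loop pattern can be sketched with the stdlib alone. The command names below (`scan`, `attack`) are hypothetical placeholders, not Counterfit’s actual verbs; this is a minimal sketch of a scriptable command shell, not Counterfit itself.

```python
import cmd
import io

class MiniShell(cmd.Cmd):
    """Toy command loop illustrating the cmd/cmd2 pattern.

    The verbs here are invented for illustration; Counterfit's real
    commands are defined in its own cmd2-based interface.
    """
    prompt = "mini> "

    def do_scan(self, arg):
        """scan <target>: pretend to scan a model endpoint."""
        self.stdout.write(f"scanning {arg or 'default-model'}\n")

    def do_attack(self, arg):
        """attack <name>: pretend to launch a named attack."""
        self.stdout.write(f"launching attack: {arg or 'none'}\n")

    def do_exit(self, arg):
        """exit: leave the command loop."""
        return True  # a truthy return value stops cmdloop()

# Drive the shell non-interactively with scripted input -- the same
# mechanism that lets external tooling hook into a cmd2-style CLI.
shell = MiniShell(
    stdin=io.StringIO("scan my-model\nattack hop_skip_jump\nexit\n"),
    stdout=io.StringIO(),
)
shell.use_rawinput = False  # read commands from the supplied stdin
shell.cmdloop(intro="")
out = shell.stdout.getvalue()
```

Because the shell reads from any file-like stdin, the same loop can be driven interactively by an analyst or scripted from other offensive tooling, which is the property the article highlights.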
Optionally, organizations can scan AI systems with relevant attacks any number of times to establish baselines, with continued runs as vulnerabilities are addressed, helping to measure ongoing progress.
Microsoft built the tool out of a need to assess its own systems for vulnerabilities. Counterfit began life as a handful of attack scripts written to target individual AI models, and gradually evolved into an automation tool to attack multiple systems at scale.
The company says it has engaged with a number of its partners, customers, and government entities in testing the tool against machine learning models in their own environments.