Malicious attacks are possible against the data used for training, against the algorithms and the specific methods of their implementation, and against the entire infrastructure that supports the operation of artificial intelligence systems and services. It is in this context that we speak of adversarial machine learning.
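To make the idea concrete, here is a minimal sketch of one classic adversarial-ML technique, an evasion attack via the fast gradient sign method (FGSM). The logistic-regression weights and the input vector are made-up illustrative values, not taken from any real system:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" logistic-regression model (weights chosen for illustration).
w = np.array([2.0, -3.0, 1.5])
b = -0.5

def predict(x):
    """Probability that x belongs to class 1."""
    return sigmoid(w @ x + b)

x = np.array([0.4, -0.2, 0.3])   # clean input, correctly classified as class 1
y = 1.0                          # true label

# Gradient of the binary cross-entropy loss with respect to the INPUT:
# dL/dx = (p - y) * w
p = predict(x)
grad_x = (p - y) * w

# FGSM: take a small step in the direction that maximizes the loss.
eps = 0.3
x_adv = x + eps * np.sign(grad_x)

print("clean:", predict(x), "adversarial:", predict(x_adv))
```

With these toy numbers the model's confidence in the correct class drops below 0.5 after the perturbation, i.e. the prediction flips even though the input changed by at most `eps` per feature.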
Thus, ensuring the cybersecurity of artificial intelligence itself is a separate, complex task that is rapidly gaining relevance. The shortage of specialists is felt most acutely here, since even the approaches and standards for addressing it are only beginning to emerge. For example, it was only in January 2024 that NIST published AI 100-2e2023, "Adversarial Machine Learning: A Taxonomy and Terminology of Attacks and Mitigations," covering the terminology and classification of such attacks, while the international standard ISO/IEC CD 27090, "Cybersecurity. Artificial Intelligence. Guidance for addressing security threats and failures in artificial intelligence systems," is still under development.
Approaches to Standardizing Threats to AI