[EXT] RE: Proposed new CWE: Machine learning classifier vulnerable to adversarial inputs (adversarial machine learning)


[EXT] RE: Proposed new CWE: Machine learning classifier vulnerable to adversarial inputs (adversarial machine learning)

Wheeler, David A
FYI, here's another article about attacks on artificial intelligence (AI):

"To cripple AI, hackers are turning data against itself"
("Data has powered the artificial intelligence revolution. Now security experts are uncovering worrying ways in which AIs can be hacked to go rogue") by Nicole Kobie Tuesday 11 September 2018 https://www.wired.co.uk/article/artificial-intelligence-hacking-machine-learning-adversarial

--- David A. Wheeler

[EXT] RE: Proposed new CWE: Machine learning classifier vulnerable to adversarial inputs (adversarial machine learning)

Joe Jarzombek
I agree that the hacking of AI is an emerging security crisis; many would agree that attackers are on the brink of launching a wave of attacks against AI systems.
 
Adversarial ML research could certainly help us specify criteria for new CWE IDs and CAPEC IDs. Specifying deterministic criteria for characterizing the associated weaknesses and attack patterns may be one of the most valuable contributions this research community can make.

We could certainly specify new CAPEC IDs to characterize the spectrum of attack patterns across the phases of ML model generation, such as training-time attacks and inference-time attacks. Knowing that ‘changes to the data from which ML systems are taught could also lead to biases being actively added to the decisions AI systems make’ could itself be captured as a CAPEC ID; yet detecting such actions (and characterizing the ‘source vector’ of the attack) could be challenging.
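For concreteness, here is a minimal Python sketch of a training-time (data poisoning) attack of the kind described above. It uses scikit-learn, a synthetic dataset, and an arbitrary 30% label-flip rate purely for illustration; none of these choices come from the CWE/CAPEC discussion itself:

    # Illustrative sketch: a training-time poisoning attack in which an
    # attacker who can tamper with training data silently flips labels,
    # biasing the decisions the trained model makes. Toy setup only.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Toy binary classification task (stand-in for a real pipeline).
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    def flip_labels(labels, fraction, rng):
        """Simulate the attacker: flip a fraction of the training labels."""
        poisoned = labels.copy()
        idx = rng.choice(len(poisoned), size=int(fraction * len(poisoned)),
                         replace=False)
        poisoned[idx] = 1 - poisoned[idx]  # binary task: flip 0 <-> 1
        return poisoned

    rng = np.random.default_rng(0)
    clean = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    poisoned = LogisticRegression(max_iter=1000).fit(
        X_tr, flip_labels(y_tr, 0.30, rng))

    # The victim's training code never changed; only the data did.
    print("clean model accuracy:   ", clean.score(X_te, y_te))
    print("poisoned model accuracy:", poisoned.score(X_te, y_te))

Even this toy example shows why detection is hard: the victim’s training code is untouched, and the tampering is visible only in the data itself.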

The CWE/CAPEC research community could help the AI/ML research community by specifying new CWE IDs and CAPEC IDs that are broken out by white-box attacks (where the adversary knows the model internals) and black-box attacks (where the adversary can only query the model). To devise appropriate defenses that eliminate, or at least mitigate, the risks attributable to the existence of adversarial examples, the AI/ML community would benefit greatly from standardized specifications of the associated weaknesses and attack patterns.
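To make the white-box/black-box distinction concrete, here is a minimal sketch of an inference-time, white-box evasion attack in the fast-gradient-sign (FGSM) style, applied to a linear model. Again, scikit-learn, the toy dataset, and the eps value are illustrative assumptions, not part of any proposed CWE/CAPEC entry:

    # Illustrative sketch: a white-box evasion attack. With full access to
    # the model weights, the attacker perturbs an input along the sign of
    # the loss gradient to change the classifier's decision.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import LogisticRegression

    X, y = make_classification(n_samples=1000, n_features=20, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X, y)
    w, b = model.coef_[0], model.intercept_[0]  # white-box: weights known

    def fgsm_linear(x, label, eps):
        """FGSM step for logistic regression: the gradient of the
        cross-entropy loss w.r.t. the input x is (sigmoid(w.x + b) - label) * w."""
        p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
        grad = (p - label) * w
        return x + eps * np.sign(grad)  # move the input uphill on the loss

    x0, y0 = X[0], y[0]
    x_adv = fgsm_linear(x0, y0, eps=0.5)
    print("prediction on original input: ", model.predict([x0])[0],
          "(true label:", y0, ")")
    print("prediction on perturbed input:", model.predict([x_adv])[0])

A black-box attacker has no access to w and b and would have to estimate the gradient direction from query responses alone, which is one reason the two access models warrant separately specified attack patterns.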

As indicated in the referenced article: "When designing the machine learning systems, it is important to be aware of and possibly mitigate the specific risks of adversarial attacks, rather than blindly design the system and worry about repercussion if they happen."

New CWE IDs and CAPEC IDs associated with ML would certainly contribute to the advancement of safe and secure AI/ML.

Regards,

   -Joe -

Joe Jarzombek, CSSLP 
Director for Government, Aerospace & Defense Programs
Email: [hidden email] | Mobile: 703 627-4644
https://www.synopsys.com/solutions/aerospace-defense.html

