POPQORN: Quantifying robustness of recurrent neural networks
The vulnerability to adversarial attacks has been a critical issue for deep neural networks. Addressing this issue requires a reliable way to evaluate the robustness of a network. Recently, several methods have been developed to compute robustness quantification for neural networks, namely, certifie...
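The abstract refers to certified robustness quantification for neural networks. As a rough illustration of the general idea behind such certification (not POPQORN's actual algorithm), the sketch below propagates interval bounds through a single step of a toy vanilla RNN cell using interval bound propagation; the cell sizes, weights, and perturbation budget are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vanilla RNN cell: h' = tanh(W x + U h + b)  (hypothetical sizes/weights)
n_in, n_h = 4, 3
W = rng.normal(size=(n_h, n_in))
U = rng.normal(size=(n_h, n_h))
b = rng.normal(size=n_h)

def ibp_step(x_lo, x_hi, h_lo, h_hi):
    """Propagate interval bounds through one RNN step.

    The affine pre-activation is bounded via its midpoint and radius;
    tanh is monotone, so applying it to the pre-activation bounds
    yields valid bounds on the next hidden state.
    """
    mu = W @ (x_lo + x_hi) / 2 + U @ (h_lo + h_hi) / 2 + b
    rad = np.abs(W) @ (x_hi - x_lo) / 2 + np.abs(U) @ (h_hi - h_lo) / 2
    return np.tanh(mu - rad), np.tanh(mu + rad)

x = rng.normal(size=n_in)
eps = 0.1                  # l_inf perturbation budget on the input
h0 = np.zeros(n_h)         # exact (zero-width) initial hidden state
lo, hi = ibp_step(x - eps, x + eps, h0, h0)

# Sanity check: the unperturbed hidden state lies inside the certified box.
h_nom = np.tanh(W @ x + U @ h0 + b)
assert np.all(lo <= h_nom) and np.all(h_nom <= hi)
```

Intervals like these grow loose over many time steps; tighter certification methods (such as the linear-bounding approach this paper belongs to) exist precisely to reduce that looseness, but the soundness argument has the same shape.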
Main Authors: Weng, Tsui-Wei (Author); Daniel, Luca (Author)
Format: Article
Language: English
Published: International Machine Learning Society, 2021-03-04T13:28:23Z
Online Access: Get fulltext
Similar Items
- Efficient Neural Network Robustness Certification with General Activation Functions
  by: Zhang, Huan, et al.
  Published: (2021)
- CNN-Cert: An Efficient Framework for Certifying Robustness of Convolutional Neural Networks
  by: Boopathy, Akhilan, et al.
  Published: (2021)
- ON EXTENSIONS OF CLEVER: A NEURAL NETWORK ROBUSTNESS EVALUATION ALGORITHM
  by: Weng, Tsui-Wei, et al.
  Published: (2021)
- Towards verifying robustness of neural networks against a family of semantic perturbations
  by: Mohapatra, Jeet, et al.
  Published: (2021)
- ON EXTENSIONS OF CLEVER: A NEURAL NETWORK ROBUSTNESS EVALUATION ALGORITHM
  by: Weng, Tsui-Wei, et al.
  Published: (2022)