Here are 170 public repositories matching this topic.
AdNauseam: Fight back against advertising surveillance
Updated Jan 14, 2022 · JavaScript
Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
Updated Feb 2, 2022 · Python
A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
Updated Feb 2, 2022 · Python
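Toolboxes like this one typically implement gradient-based attacks such as the fast gradient sign method (FGSM). A minimal self-contained sketch in plain NumPy, using a tiny logistic-regression "model" whose weights and names are illustrative assumptions, not any toolbox's API:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(x, w, b, y):
    # Binary cross-entropy of a logistic-regression model on input x.
    p = sigmoid(w @ x + b)
    return -np.log(p) if y == 1 else -np.log(1.0 - p)

def fgsm(x, w, b, y, eps):
    # Gradient of the loss w.r.t. the *input* (not the weights):
    # for logistic regression, dL/dx = (p - y) * w.
    p = sigmoid(w @ x + b)
    grad = (p - y) * w
    # Step in the direction that increases the loss,
    # bounded by eps in the L-infinity norm.
    return x + eps * np.sign(grad)

w = np.array([2.0, -3.0])
b = 0.5
x = np.array([1.0, 1.0])          # clean input, true label y = 1
x_adv = fgsm(x, w, b, y=1, eps=0.3)
print(loss(x, w, b, 1), loss(x_adv, w, b, 1))  # loss increases under attack
```

The same one-step recipe generalizes to deep networks once the input gradient comes from autodiff rather than a closed form, which is essentially what these libraries automate.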
Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow, and it can benchmark the robustness of machine learning models. Advbox provides a command-line tool to generate adversarial examples with zero coding.
Updated Jan 13, 2022 · Jupyter Notebook
A Toolbox for Adversarial Robustness Research
Updated Dec 9, 2021 · Jupyter Notebook
A PyTorch adversarial library for attack and defense methods on images and graphs
Updated Jan 27, 2022 · Python
🗣️ Tool to generate adversarial text examples and test machine learning models against them
Updated Jan 7, 2022 · Python
Implementation of Papers on Adversarial Examples
Updated Jan 19, 2019 · Python
Adversarial attacks and defenses on Graph Neural Networks.
A curated list of awesome resources for adversarial examples in deep learning
Defense-GAN: Protecting Classifiers Against Adversarial Attacks Using Generative Models (published in ICLR2018)
Updated Oct 24, 2019 · Python
DEEPSEC: A Uniform Platform for Security Analysis of Deep Learning Model
Updated May 21, 2019 · Python
Official TensorFlow implementation of "Adversarial Training for Free!", which trains robust models at no extra cost compared to natural training.
Updated Jun 8, 2019 · Python
Physical adversarial attack for fooling the Faster R-CNN object detector
Updated Jan 13, 2020 · Jupyter Notebook
[NeurIPS 2020] auto_LiRPA: An Automatic Linear Relaxation based Perturbation Analysis Library for Neural Networks
Updated Jan 14, 2022 · Python
A curated list of papers on adversarial machine learning (adversarial examples and defense methods).
💡 Adversarial attacks on model explanations, and evaluation approaches
PyTorch library for adversarial attack and training
Updated Jan 16, 2019 · Python
[CVPR 2020] When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks
Updated Oct 21, 2020 · Python
Short PhD seminar on Machine Learning Security (Adversarial Machine Learning)
Updated Oct 25, 2021 · Jupyter Notebook
Code for "Detecting Adversarial Samples from Artifacts" (Feinman et al., 2017)
Updated Feb 14, 2018 · Python
A PyTorch Toolbox for creating adversarial examples that fool neural networks.
Updated Aug 7, 2019 · Python
Code for our CVPR 2018 paper, "On the Robustness of Semantic Segmentation Models to Adversarial Attacks"
Updated Mar 8, 2019 · Python
Pre-Training Buys Better Robustness and Uncertainty Estimates (ICML 2019)
Updated May 15, 2019 · Python
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
Updated Apr 2, 2021 · Jupyter Notebook
Understanding and Improving Fast Adversarial Training [NeurIPS 2020]
Updated Sep 23, 2021 · Python
Code and data of the ACL 2020 paper "Word-level Textual Adversarial Attacking as Combinatorial Optimization"
Updated Apr 11, 2021 · Python
Certified defense to adversarial examples using CROWN and IBP. Also includes GPU implementation of CROWN verification algorithm (in PyTorch).
Updated Jun 7, 2021 · Python
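Interval bound propagation (IBP), one of the two techniques this repository combines, pushes an L-infinity box around the input through the network layer by layer. A minimal sketch for one linear layer followed by ReLU, in plain NumPy; the weights, bias, and perturbation radius below are made-up assumptions for illustration:

```python
import numpy as np

def ibp_linear(lo, up, W, b):
    # Propagate the box [lo, up] through y = W x + b using
    # center/radius form: W maps the center, |W| maps the radius.
    c = (lo + up) / 2.0
    r = (up - lo) / 2.0
    yc = W @ c + b
    yr = np.abs(W) @ r
    return yc - yr, yc + yr

def ibp_relu(lo, up):
    # ReLU is elementwise monotone, so it maps bounds directly.
    return np.maximum(lo, 0.0), np.maximum(up, 0.0)

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.1, -0.3])
x = np.array([0.2, 0.4])
eps = 0.1

lo, up = ibp_linear(x - eps, x + eps, W, b)
lo, up = ibp_relu(lo, up)

# Soundness check: a concrete perturbed input stays inside [lo, up].
x_pert = x + np.array([0.1, -0.1])
y = np.maximum(W @ x_pert + b, 0.0)
assert np.all(lo <= y + 1e-12) and np.all(y <= up + 1e-12)
```

IBP bounds are cheap but loose; CROWN-style linear relaxations tighten them, which is why certified defenses often combine the two.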
Patch-wise iterative attack (accepted by ECCV 2020) to improve the transferability of adversarial examples.
Updated Jan 21, 2022 · Python
Both the GoalFunctionResult and AttackResult abstract classes should provide a meaningful __str__ method so that they can be printed in a readable way.
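A sketch of what such a __str__ might look like; the fields and the concrete class below are invented for illustration and are not the library's actual attributes:

```python
from dataclasses import dataclass

@dataclass
class AttackResult:
    # Hypothetical fields; the real classes in the library differ.
    original_text: str
    perturbed_text: str
    succeeded: bool

    def __str__(self):
        # Summarize the attack outcome in one readable line.
        status = "SUCCEEDED" if self.succeeded else "FAILED"
        return f"[{status}] {self.original_text!r} --> {self.perturbed_text!r}"

result = AttackResult("the movie was great", "the movie was gre8t", True)
print(result)  # [SUCCEEDED] 'the movie was great' --> 'the movie was gre8t'
```

Keeping __str__ to a single summary line makes batches of results easy to scan; a fuller dump belongs in __repr__ or a dedicated report method.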