Advbox is a toolbox for generating adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow; it can also benchmark the robustness of machine learning models.
A Toolbox for Adversarial Robustness Research
Raising the Cost of Malicious AI-Powered Image Editing
PhD/MSc course on Machine Learning Security (Univ. Cagliari)
Physical adversarial attack for fooling the Faster R-CNN object detector
This repository implements three adversarial-example attack methods (FGSM, I-FGSM, MI-FGSM) and defensive distillation as a defense against all three, evaluated on the MNIST dataset.
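FGSM, the simplest of the three attacks above, perturbs an input one step in the direction of the sign of the loss gradient with respect to that input. A minimal sketch, using a logistic-regression model as an illustrative stand-in for a neural network (the function name and model are assumptions, not taken from the repository):

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps):
    """One-step FGSM sketch on a logistic-regression model.

    x: input vector, y: label in {0, 1},
    w, b: model weights, eps: perturbation budget.
    """
    # Forward pass: sigmoid probability of class 1.
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    # Gradient of the cross-entropy loss w.r.t. the input is (p - y) * w.
    grad_x = (p - y) * w
    # FGSM step: move eps in the direction of the gradient's sign.
    return x + eps * np.sign(grad_x)
```

I-FGSM applies this step repeatedly with a smaller step size, and MI-FGSM additionally accumulates a momentum term over the gradient signs.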
Provably defending pretrained classifiers including the Azure, Google, AWS, and Clarifai APIs
Adversarial Attacks on Deep Neural Networks for Time Series Classification
Task-agnostic universal black-box attacks on computer vision neural networks via procedural noise (CCS'19)
Randomized Smoothing of All Shapes and Sizes (ICML 2020)
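Randomized smoothing certifies a classifier by replacing its prediction on an input with the majority vote of its predictions under Gaussian noise. A minimal sketch of the voting step (function name and interface are assumptions for illustration; real certification also computes a confidence bound on the vote):

```python
import numpy as np

def smoothed_predict(f, x, sigma, n, rng):
    """Majority-vote prediction of the Gaussian-smoothed classifier.

    f: base classifier mapping an input vector to a class index,
    x: input vector, sigma: Gaussian noise scale, n: number of samples.
    """
    counts = {}
    for _ in range(n):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = f(noisy)
        counts[c] = counts.get(c, 0) + 1
    # Return the class that wins the vote under noise.
    return max(counts, key=counts.get)
```

The "shapes and sizes" in the title refer to generalizing the noise distribution beyond the isotropic Gaussian used here.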
Implements Adversarial Examples for Semantic Segmentation and Object Detection, using PyTorch and Detectron2
Universal Adversarial Perturbations (UAPs) for PyTorch
Implementation of "Adversarial Frontier Stitching for Remote Neural Network Watermarking" in TensorFlow.
Shows how to create basic adversarial images, and how to train adversarially robust image classifiers (to some extent).
📄 [Talk] OFFZONE 2022 / ODS Data Halloween 2022: black-box attacks on ML models using open-source tools
Adversarial Machine Learning workshop (in Spanish)
GeoAdEx: A geometric approach for finding minimum-norm adversarial examples on k-NN classifiers