#Computer Science#Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams
#NLP#Data augmentation for NLP
#NLP#TextAttack 🐙 is a Python framework for adversarial attacks, data augmentation, and model training in NLP https://textattack.readthedocs.io/en/master/
#Computer Science#A Python toolbox to create adversarial examples that fool neural networks in PyTorch, TensorFlow, and JAX
#Large Language Models#A unified evaluation framework for large language models
#Computer Science#PyTorch implementation of adversarial attacks [torchattacks]
#NLP#Must-read Papers on Textual Adversarial Attack and Defense
#NLP#A reading list for large model safety, security, and privacy (including Awesome LLM Security, Safety, etc.)
#Computer Science#Advbox is a toolbox to generate adversarial examples that fool neural networks in PaddlePaddle, PyTorch, Caffe2, MXNet, Keras, and TensorFlow; it can also benchmark the robustness of machine learning models
#Computer Science#A Toolbox for Adversarial Robustness Research
#Computer Science#A PyTorch adversarial library for attack and defense methods on images and graphs
#Time Series Database#A collection of anomaly detection methods (i.i.d./point-based, graph, and time series), including active learning for anomaly detection/discovery, Bayesian rule-mining, and description for diversity/explanation
#Awesome#A curated list of adversarial attacks and defenses papers on graph-structured data.
#NLP#An Open-Source Package for Textual Adversarial Attack
Code for "Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks"
A Harder ImageNet Test Set (CVPR 2021)
#Computer Science#Raising the Cost of Malicious AI-Powered Image Editing
#NLP#A Model for Natural Language Attack on Text Classification and Inference
A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.