#Awesome#This repository is primarily maintained by Omar Santos (@santosomar) and includes thousands of resources related to ethical hacking, bug bounties, digital forensics and incident response (DFIR), artif...
#大语言模型#🐢 Open-Source Evaluation & Testing for AI & LLM systems
A curated list of useful resources that cover Offensive AI.
#计算机科学#A list of backdoor learning resources
#大语言模型#a prompt injection scanner for custom LLM applications
ToolHive makes deploying MCP servers easy, secure and fun
#大语言模型#A security scanner for your LLM agentic workflows
RuLES: a benchmark for evaluating rule-following in language models
#大语言模型#Toolkits to create a human-in-the-loop approval layer to monitor and guide AI agents workflow in real-time.
MCP for Security: A collection of Model Context Protocol servers for popular security tools like SQLMap, FFUF, NMAP, Masscan and more. Integrate security testing and penetration testing into AI workfl...
A curated list of academic events on AI Security & Privacy
Build Secure and Compliant AI agents and MCP Servers. YC W23
[CCS'24] SafeGen: Mitigating Unsafe Content Generation in Text-to-Image Models
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and...
#自然语言处理#Framework for testing vulnerabilities of large language models (LLM).
Reading list for adversarial perspective and robustness in deep reinforcement learning.
The official implementation of the CCS'23 paper, Narcissus clean-label backdoor attack -- only takes THREE images to poison a face recognition dataset in a clean-label way and achieves a 99.89% attack...
Cyber-Security Bible! Theory and Tools, Kali Linux, Penetration Testing, Bug Bounty, CTFs, Malware Analysis, Cryptography, Secure Programming, Web App Security, Cloud Security, Devsecops, Ethical Hack...
Code for "Adversarial attack by dropping information." (ICCV 2021)
#计算机科学#ATLAS tactics, techniques, and case studies data