Self-hardening firewall for large language models
2023-06-18
2024-02-28T06:16:27Z
An easy-to-use Python framework to generate adversarial jailbreak prompts.
Official repo for GPTFUZZER: Red Teaming Large Language Models with Auto-Generated Jailbreak Prompts
Parse Server for Node.js / Express
#NLP#Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024)
#LLM#A security scanner for custom LLM applications
#LLM#The Security Toolkit for LLM Interactions
#LLM#LLMs and Machine Learning done easily
DSPy: The framework for programming—not prompting—language models
⚡ Vigil ⚡ Detect prompt injections, jailbreaks, and other potentially risky Large Language Model (LLM) inputs
#Interview#A one-stop repository for generative AI research updates, interview resources, notebooks, and much more!
#Search#All-in-one platform for search, recommendations, RAG, and analytics, offered via API