Mastering Applied AI, One Concept at a Time
Official repository of my book "A Hands-On Guide to Fine-Tuning LLMs with PyTorch and Hugging Face"
End-to-End Generative AI Industry Projects on LLMs with Deployment (Awesome LLM Projects)
Train Large Language Models on MLX.
Auto Data is a library designed for quick and effortless creation of datasets tailored for fine-tuning Large Language Models (LLMs).
🚀 Easy, open-source LLM finetuning with one-line commands, seamless cloud integration, and popular optimization frameworks. ✨
Deploy any AI model, agent, database, RAG system, or pipeline locally in minutes
On Memorization of Large Language Models in Logical Reasoning
IndexTTS Fine-tuning notebooks
Fine-tune any Hugging Face LLM or VLM on day-0 using PyTorch-native features for GPU-accelerated distributed training with superior performance and memory efficiency.
MediNotes: SOAP Note Generation through Ambient Listening, Large Language Model Fine-Tuning, and RAG
A Gradio web UI for Large Language Models. Supports LoRA/QLoRA fine-tuning, RAG (retrieval-augmented generation), and chat; a minimal LoRA fine-tuning sketch follows this list.
Fine-tune Mistral 7B to generate fashion style suggestions
A Streamlit app for generating high-quality Q&A training datasets from text and PDFs, leveraging Gemini, Claude, and OpenAI for LLM fine-tuning.
The small distributed language model toolkit; fine-tune state-of-the-art LLMs anywhere, rapidly
[NeurIPS 2023] Make Your Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning
Medical Language Model fine-tuned using pretraining, instruction tuning, and Direct Preference Optimization (DPO). Progresses from general medical knowledge to specific instruction following, with exp...
SEIKO is a novel reinforcement learning method to efficiently fine-tune diffusion models in an online setting. Our methods outperform all baselines (PPO, classifier-based guidance, direct reward backpropagation).
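Many of the entries above (the one-line finetuning tools, the Gradio UI, the Mistral 7B project) revolve around LoRA/QLoRA fine-tuning on the Hugging Face stack. As a rough orientation only, here is a minimal sketch of that workflow using transformers, peft, and datasets; the base model (gpt2), the dataset (Abirate/english_quotes), the output directory, and all hyperparameters are illustrative placeholders, not settings taken from any project listed above.

```python
# Minimal LoRA fine-tuning sketch with Hugging Face transformers + peft.
# All names and hyperparameters below are placeholders for illustration.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "gpt2"  # placeholder base model; swap in any causal LM
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with low-rank adapters; only adapter weights are trained.
lora_config = LoraConfig(r=8, lora_alpha=16, lora_dropout=0.05, task_type="CAUSAL_LM")
model = get_peft_model(model, lora_config)

# Tokenize a small text dataset (placeholder dataset).
dataset = load_dataset("Abirate/english_quotes", split="train[:200]")

def tokenize(batch):
    return tokenizer(batch["quote"], truncation=True, max_length=128)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="lora-out", per_device_train_batch_size=4,
                           num_train_epochs=1, learning_rate=2e-4, logging_steps=10),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
model.save_pretrained("lora-out")  # saves only the LoRA adapter weights
```

Each repository above wraps some variant of this loop in its own CLI, UI, quantization scheme, or distributed-training setup.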