p-tuning

PhoebusSi / Alpaca-CoT

#LLM# We unified the interfaces of instruction-tuning data (e.g., CoT data), multiple LLMs and parameter-efficient methods (e.g., LoRA, p-tuning) together for easy use. We welcome open-source enthusiasts to... (a minimal p-tuning sketch follows this entry)

chatglm · llama · LLM · lora · ChatGPT · cot · instruction-tuning · alpaca · moss · p-tuning · PyTorch · tabular-data
Jupyter Notebook 2.75 k
2 years ago
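
Repositories like this one typically drive parameter-efficient methods such as LoRA and p-tuning through Hugging Face's PEFT library. Below is a minimal, illustrative p-tuning setup using PEFT's PromptEncoderConfig; it is not code from Alpaca-CoT, and the "gpt2" backbone and hyperparameters are placeholder assumptions.

```python
# Minimal p-tuning sketch with Hugging Face PEFT (illustrative only, not
# Alpaca-CoT code). "gpt2" and the hyperparameters are assumed placeholders.
from transformers import AutoModelForCausalLM
from peft import PromptEncoderConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

# P-tuning prepends trainable virtual prompt tokens whose embeddings come
# from a small prompt encoder (an MLP or LSTM); the base model stays frozen.
peft_config = PromptEncoderConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,     # length of the soft prompt
    encoder_hidden_size=128,   # hidden size of the prompt encoder
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # only the prompt encoder is trainable
```
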
liucongg / ChatGLM-Finetuning

#LLM# Fine-tuning ChatGLM-6B, ChatGLM2-6B, and ChatGLM3-6B on concrete downstream tasks, covering Freeze, LoRA, P-tuning, full-parameter fine-tuning, and more.

chatglm · ChatGPT · freeze · lora · p-tuning · chatglm2 · chatglm3
Python 2.75 k
2 years ago
THUDM / P-tuning-v2

#NLP# An optimized deep prompt tuning strategy comparable to fine-tuning across scales and tasks (a deep-prompt sketch follows this entry).

NLP · prompt-tuning · pretrained-language-model · p-tuning · parameter-efficient-learning
Python 2.04 k
2 years ago
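
P-Tuning v2 differs from the original p-tuning in that trainable prompt vectors are injected at every transformer layer ("deep" prompts) rather than only at the input embeddings, which is the same mechanism PEFT exposes as prefix tuning. A minimal sketch under that assumption; this is not THUDM's implementation, and "gpt2" is a stand-in backbone rather than a model used in the paper.

```python
# Illustrative deep-prompt (prefix-style) sketch, not THUDM's implementation.
# "gpt2" and num_virtual_tokens are assumed placeholders.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained("gpt2")

peft_config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,   # prefix length prepended at each layer
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()  # the frozen backbone is excluded
```
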
THUDM / P-tuning

#NLP# A novel method to tune language models. Code and datasets for the paper "GPT Understands, Too".

NLP · pre-trained-language-models · prompt-tuning · p-tuning · parameter-efficient-learning · few-shot-learning
Python 934
3 years ago
yuanjie-ai / ChatLLM

#LLM# Work with LLMs with ease; compatible with OpenAI & LangChain; supports Wenxin Yiyan (ERNIE Bot), iFlytek Spark, Tencent Hunyuan, Zhipu ChatGLM, and more.

ChatGPT · LLM · gpt4 · langchain · lora · p-tuning · chatllm · chatpdf · chatdoc
Jupyter Notebook 445
9 months ago
openhackathons-org / End-to-End-LLM

#NLP# This repository is AI Bootcamp material consisting of a workflow for LLMs.

deep-learning · NLP · p-tuning · prompt-tuning · LLM · question-answering · tensorrt-llm · genai
Jupyter Notebook 90
2 months ago
FreedomIntelligence / DPTDR

Code for the COLING 2022 paper "DPTDR: Deep Prompt Tuning for Dense Passage Retrieval".

prompt-tuning · information-retrieval · p-tuning · prompt-learning · question-answering
Python 25
2 years ago
bugface / P-tuning-v2-MRC-NER

P-tuning-v2 integrated with MRC (machine reading comprehension) for NER.

ner · p-tuning · PyTorch
Python 3
2 years ago
HROlive / Poland-End-To-End-LLM-Bootcamp

#LLM# This bootcamp is designed to give NLP researchers an end-to-end overview of the fundamentals of the NVIDIA NeMo framework, a complete solution for building large language models. It will also have hands-on ...

gpt · llama2 · LLM · llm-inference · llm-training · Nvidia · p-tuning · prompt-tuning · tensorrt · triton
Jupyter Notebook 2
1 year ago
avnlp / llm-finetuning

fine-tuning · sft · lora · peft · qlora · p-tuning
Python 2
4 months ago
yuchengml / Adaptation-Tuning-PEFT

Comparison of different adaptation methods with PEFT for fine-tuning on downstream tasks or benchmarks (a parameter-count sketch follows this entry).

huggingface-transformers · p-tuning · peft · transformers · wandb · lora
Python 1
1 year ago
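
A comparison like this usually boils down to wrapping the same backbone with different PEFT configs and measuring the trainable-parameter footprint alongside task metrics. A rough sketch of the parameter-count part; this is illustrative only, not the repository's code, and "gpt2" plus the adapter hyperparameters are assumed placeholders.

```python
# Illustrative comparison of trainable-parameter footprints under PEFT
# (not code from Adaptation-Tuning-PEFT; backbone and settings are assumed).
from transformers import AutoModelForCausalLM
from peft import LoraConfig, PromptEncoderConfig, TaskType, get_peft_model

def trainable_stats(peft_config, base_model="gpt2"):
    model = AutoModelForCausalLM.from_pretrained(base_model)
    model = get_peft_model(model, peft_config)
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    total = sum(p.numel() for p in model.parameters())
    return trainable, trainable / total

configs = {
    "lora": LoraConfig(task_type=TaskType.CAUSAL_LM, r=8, lora_alpha=16),
    "p-tuning": PromptEncoderConfig(task_type=TaskType.CAUSAL_LM,
                                    num_virtual_tokens=20),
}
for name, cfg in configs.items():
    n, frac = trainable_stats(cfg)
    print(f"{name}: {n} trainable parameters ({frac:.4%} of the model)")
```
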
NJUxlj / p-tuning-v2-reproduce

#LLM# Reproduce a prompt-learning method: P-Tuning V2, from the paper "P-Tuning v2: Prompt Tuning Can Be Comparable to Fine-tuning Universally Across Scales and Tasks"; model usage: Deberta + ChatGLM2, addi...

chatglm2-6b · LLM · p-tuning · prompt-learning
Python 0
1 month ago