#LLM# *Hung-yi Lee's Deep Learning Tutorial* (recommended by Prof. Hung-yi Lee 👍, the "Apple Book" 🍎). PDF download: https://github.com/datawhalechina/leedl-tutorial/releases
#NLP# NLP tutorials using PyTorch
#Computer Science# 🏄 Scalable embedding, reasoning, and ranking for images and sentences with CLIP
#Search# PaddleNLP 2.0 is the core text-domain library of the PaddlePaddle ecosystem. It features an easy-to-use text-processing API, application examples for multiple scenarios, and high-performance distributed training, aiming to boost developer productivity on text tasks and to provide best practices for NLP built on the PaddlePaddle 2.0 core framework.
This repository contains demos I made with the Transformers library by HuggingFace.
#NLP# Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm series of models)
#NLP# 💥 Fast state-of-the-art tokenizers optimized for research and production
#NLP# Large-Scale Chinese Corpus for NLP
#LLM# VITS2 backbone with multilingual BERT
#NLP# BertViz: visualize attention in NLP models (BERT, GPT-2, BART, etc.)
#NLP# Leveraging BERT and c-TF-IDF to create easily interpretable topics
#NLP# PyTorch implementation of Google AI's 2018 BERT
#NLP# Official implementations of the ERNIE family of pre-trained models, covering language understanding & generation, multimodal understanding & generation, and beyond
Transformer-related optimizations, including BERT and GPT
#NLP# Awesome Pretrained Chinese NLP Models: a curated collection of high-quality Chinese pre-trained models, large models, multimodal models, and large language models
TensorFlow solution for the NER task using a BiLSTM-CRF model with Google BERT fine-tuning, plus private server deployment
#NLP# Tutorials on getting started with PyTorch and TorchText for sentiment analysis
#NLP# Must-read papers on prompt-based tuning for pre-trained language models
Chinese Language Understanding Evaluation Benchmark (CLUE): datasets, baselines, pre-trained models, corpora, and a leaderboard
#NLP# State-of-the-art natural language processing