A curated list of awesome papers related to pre-trained models for information retrieval (a.k.a., pretraining for IR).
On Transferability of Prompt Tuning for Natural Language Processing.
FusionDTI utilises a Token-level Fusion module to effectively learn fine-grained information for Drug-Target Interaction Prediction.
The code for the ACL 2023 paper "Linear Classifier: An Often-Forgotten Baseline for Text Classification".
Code for the paper "Exploiting Pretrained Biochemical Language Models for Targeted Drug Design", to appear in Bioinformatics, Proceedings of ECCB 2022.
The official repository for AAAI 2024 Oral paper "Structured Probabilistic Coding".
A Keras-based, TensorFlow-backed NLP models toolkit.
This research examines the performance of Large Language Models (GPT-3.5 Turbo and Gemini 1.5 Pro) in Bengali Natural Language Inference, comparing them with state-of-the-art models using the XNLI dat...
Identified ADEs and associated terms in an annotated corpus with Named Entity Recognition (NER) modeling using Flair and PyTorch. Fine-tuned pre-trained transformer models such as XLM-RoBERTa, SpanBERT...
A Python tool for evaluating the quality of few-shot prompt learning.
LSTM models for text classification on character embeddings.
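A character-embedding LSTM classifier of this kind can be sketched minimally in PyTorch. All layer sizes, the ASCII-based character encoding, and the class name below are illustrative assumptions, not details taken from the repository above:

```python
import torch
import torch.nn as nn

class CharLSTMClassifier(nn.Module):
    """Minimal character-level LSTM text classifier (illustrative sketch;
    vocabulary size, embedding/hidden dimensions are arbitrary choices)."""

    def __init__(self, vocab_size=128, embed_dim=32, hidden_dim=64, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)  # one vector per character
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.fc = nn.Linear(hidden_dim, num_classes)      # classify from the final hidden state

    def forward(self, char_ids):                          # char_ids: (batch, seq_len)
        embedded = self.embed(char_ids)                   # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(embedded)                 # h_n: (num_layers, batch, hidden_dim)
        return self.fc(h_n[-1])                           # logits: (batch, num_classes)

# Encode a string as ASCII code points and run a forward pass.
model = CharLSTMClassifier()
ids = torch.tensor([[ord(c) for c in "hello world"]])     # shape (1, 11)
logits = model(ids)                                       # shape (1, 2)
```

Characters avoid the out-of-vocabulary problem that word-level models face, at the cost of longer input sequences for the LSTM to process.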
Fine-tuned BERT, mBERT and XLMRoBERTa for Abusive Comments Detection in Telugu, Code-Mixed Telugu and Telugu-English.
The code for the paper "An Empirical Study of Pre-trained Language Models in Simple Knowledge Graph Question Answering".
Codebase to reproduce the submission of team CompLx for sub-task 2 of the 2022 FinSim4-ESG shared task.