Overview of unsupervised visual representation learning (or self-supervised learning, unsupervised pre-training) methods.
Supervised Contrastive Learning (SupContrast) based on MoCo-v2
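As a rough illustration of the objective behind this repository, the following is a minimal sketch of a supervised contrastive (SupCon) loss in PyTorch; the function name `supcon_loss` and its arguments are hypothetical and do not come from the repository itself.

```python
import torch
import torch.nn.functional as F

def supcon_loss(features, labels, temperature=0.07):
    """Sketch of the SupCon loss: positives for each anchor are all other
    samples in the batch that share its label. features: (N, D), labels: (N,)."""
    features = F.normalize(features, dim=1)
    logits = features @ features.T / temperature              # pairwise similarities
    n = logits.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=logits.device)
    logits = logits.masked_fill(self_mask, float("-inf"))     # drop self-pairs
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_counts = pos_mask.sum(1)
    # negative mean log-probability over each anchor's positives
    per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_counts.clamp(min=1)
    return per_anchor[pos_counts > 0].mean()

# usage sketch: z = encoder(images); loss = supcon_loss(z, labels)
```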
#Computer Science# Implementations of 59 deep learning papers with detailed annotations. Includes transformers (original, xl, switch, feedback, vit, ...), optimizers (adam, adabelief, ...), gans (cyclegan, stylegan2, ...), 🎮 reinforcement learning (ppo, dqn), capsnet, distillation, ... 🧠
Implementation of Vision Transformer, a simple way to achieve SOTA in vision classification with only a single transformer encoder, in Pytorch
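For orientation, using this package typically looks like the sketch below; the constructor arguments are assumptions based on the lucidrains/vit-pytorch README and may differ across versions.

```python
import torch
from vit_pytorch import ViT

# build a small Vision Transformer classifier (argument names assumed from the
# vit_pytorch package; check the repository README for the current API)
model = ViT(
    image_size=256,
    patch_size=32,
    num_classes=1000,
    dim=1024,
    depth=6,
    heads=16,
    mlp_dim=2048,
    dropout=0.1,
    emb_dropout=0.1,
)

img = torch.randn(1, 3, 256, 256)   # dummy batch of one RGB image
logits = model(img)                  # (1, 1000) class logits
```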
Pytorch reimplementation of the Vision Transformer (An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale)
Official PyTorch Implementation of "Scalable Diffusion Models with Transformers"
A summary of recent unsupervised semantic segmentation methods
#Natural Language Processing# State-of-the-art Natural Language Processing for Jax, PyTorch and TensorFlow
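A minimal usage sketch with the library's pipeline API is shown below; which pretrained model is downloaded by default depends on the library version.

```python
from transformers import pipeline

# sentiment analysis with a default pretrained model (downloads weights on first run)
classifier = pipeline("sentiment-analysis")
print(classifier("Self-supervised pre-training keeps getting better."))
# e.g. [{'label': 'POSITIVE', 'score': ...}]
```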
(NeurIPS 2022) Self-Supervised Visual Representation Learning with Semantic Grouping