GitHub 中文社区

Topic: linear-attention
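The repositories below all build on the linear-attention idea: replacing the softmax similarity exp(q·k) with a factorizable kernel φ(q)·φ(k), so the attention product can be reassociated and computed in time linear in sequence length. A minimal sketch of the non-causal form, assuming the common elu(x)+1 feature map (individual repos use different maps):

```python
import numpy as np

def feature_map(x):
    # elu(x) + 1: one common positive feature map for linear attention
    # (an assumption here; the repos below each make their own choice)
    return np.where(x > 0, x + 1.0, np.exp(x))

def linear_attention(Q, K, V, eps=1e-6):
    """Non-causal linear attention: O(n * d * d_v) instead of O(n^2 * d).

    Softmax attention materializes the (n, n) matrix softmax(Q K^T) V.
    With a factorizable kernel phi(q) . phi(k) we can reassociate:
        (phi(Q) phi(K)^T) V  ==  phi(Q) (phi(K)^T V)
    so the (n, n) matrix is never formed.
    """
    Qp, Kp = feature_map(Q), feature_map(K)   # (n, d) each
    KV = Kp.T @ V                             # (d, d_v) summary of keys/values
    Z = Qp @ Kp.sum(axis=0)                   # (n,) per-query normalizer
    return (Qp @ KV) / (Z[:, None] + eps)
```

For short sequences this is numerically identical to forming the full φ(Q)φ(K)ᵀ matrix, normalizing its rows, and multiplying by V; the point is that the reassociated form never allocates the n×n matrix.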
BlinkDL / RWKV-LM

#LLM# RWKV (pronounced RwaKuv) is an RNN with great LLM performance, which can also be directly trained like a GPT transformer (parallelizable). We are at RWKV-7 "Goose". So it's combining the best of RNN a...

attention-mechanism, deep-learning, gpt, gpt-2, gpt-3, language-model, linear-attention, lstm, PyTorch, rnn, transformer, transformers, rwkv, ChatGPT
Python 13.7k
5 days ago
happinesslz / LION

[NeurIPS 2024] Official code of "LION: Linear Group RNN for 3D Object Detection in Point Clouds"

3d-object-detection, linear-attention
Python 180
8 months ago
lucidrains / taylor-series-linear-attention

#computer-science# Explorations into the recently proposed Taylor Series Linear Attention

artificial-intelligence, attention-mechanisms, deep-learning, linear-attention
Python 99
10 months ago
lucidrains / agent-attention-pytorch

#computer-science# Implementation of Agent Attention in Pytorch

artificial-intelligence, attention-mechanisms, deep-learning, linear-attention
Python 90
1 year ago
lironui / Multi-Attention-Network

The semantic segmentation of remote sensing images

attention-mechanism, remote-sensing, semantic-segmentation, segmentation, linear-attention
Python 77
3 years ago
lironui / MAResU-Net

The semantic segmentation of remote sensing images

attention-mechanism, attention, linear-attention, segmentation, semantic-segmentation, remote-sensing
Python 48
3 years ago
lucidrains / autoregressive-linear-attention-cuda

#computer-science# CUDA implementation of autoregressive linear attention, with all the latest research findings

artificial-intelligence, attention-mechanisms, CUDA, deep-learning, linear-attention
Python 44
2 years ago
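Several entries in this list (this one, RWKV, LEAP) concern the autoregressive/causal case, where the same kernel trick turns attention into a recurrence: a fixed-size state accumulates φ(k_t)v_tᵀ, giving O(1) cost per generated token. A generic sketch of that recurrence, not any specific repo's method, again assuming the elu(x)+1 feature map:

```python
import numpy as np

def causal_linear_attention(Q, K, V, eps=1e-6):
    """Causal linear attention written as a recurrence.

    Position t attends only to positions <= t. The state S (d x d_v) and
    normalizer z (d,) summarize the entire past, so each new token costs
    O(d * d_v) regardless of sequence length -- the basis for the O(1)
    per-token inference claimed by several repos on this page.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x)+1, assumed
    n, d = Q.shape
    d_v = V.shape[1]
    S = np.zeros((d, d_v))       # running sum of outer products phi(k) v^T
    z = np.zeros(d)              # running sum of phi(k), for normalization
    out = np.empty((n, d_v))
    for t in range(n):
        q, k, v = phi(Q[t]), phi(K[t]), V[t]
        S += np.outer(k, v)
        z += k
        out[t] = (q @ S) / (q @ z + eps)
    return out
```

During training, the same computation can instead be parallelized over the sequence with cumulative sums, which is what "trained like a GPT transformer (parallelizable)" refers to in the RWKV entry above.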
BICLab / MetaLA

#LLM# Official implementation of "MetaLA: Unified Optimal Linear Approximation to Softmax Attention Map" (NeurIPS 2024 Oral)

linear-attention, LLM
Python 25
5 months ago
glassroom / heinsen_attention

Reference implementation of "Softmax Attention with Constant Cost per Token" (Heinsen, 2024)

attention, attention-mechanism, attention-model, linear-attention
Python 24
1 year ago
gmongaras / Cottention_Transformer

Code for the paper "Cottention: Linear Transformers With Cosine Attention"

attention, linear-attention, transformers
Cuda 17
8 months ago
robflynnyh / hydra-linear-attention

#computer-science# Implementation of: Hydra Attention: Efficient Attention with Many Heads (https://arxiv.org/abs/2209.07484)

attention, linear-attention, machine-learning, transformers
Python 13
2 years ago
RWKV-Wiki / rwkv-wiki.github.io

#computer-science# RWKV Wiki website (archived, please visit official wiki)

attention-mechanism, deep-learning, gpt, gpt-2, gpt-3, language-model, linear-attention, lstm, rnn, rwkv, transformer, transformers
Shell 10
2 years ago
OSU-STARLAB / LeaPformer

[ICML 2024] Official implementation of "LeaPformer: Enabling Linear Transformers for Autoregressive and Simultaneous Tasks via Learned Proportions."

efficiency, language-modeling, linear-attention, transformer-architecture
Python 10
7 months ago
gmlwns2000 / sea-attention

Official implementation of SEA: Sparse Linear Attention with Estimated Attention Mask (ICLR 2024)

attention, linear-attention
Python 10
3 months ago
mtanghu / LEAP

#computer-science# LEAP: Linear Explainable Attention in Parallel for causal language modeling with O(1) path length, and O(1) inference

linear-attention, PyTorch, transformers, attention-mechanism, deep-learning, parallel, rnn, transformer
Jupyter Notebook 4
2 years ago