
This repository has been indexed but not yet curated. For the project introduction and usage guide, please read the README on GitHub.


About

KoCLIP: Korean port of OpenAI CLIP, in Flax

Last modified

2023-08-22T03:35:17Z


Languages

  • Python 98.3%
  • Shell 1.6%
  • Makefile 0.1%

Other open-source projects by jaketae

Multimodal AI Story Teller, built with Stable Diffusion, GPT, and neural text-to-speech

Python · 531 stars
2 years ago

N-gram keyword extraction using spaCy and pretrained language models

Python · 55 stars
3 years ago

Ensembling Hugging Face transformers made easy

Python · 48 stars
3 years ago

#Natural Language Processing# PyTorch implementation of FNet: Mixing Tokens with Fourier transforms

Python · 28 stars
4 years ago

You may also be interested in

#Computer Science# [AAAI2024] FontDiffuser: One-Shot Font Generation via Denoising Diffusion with Multi-Scale Content Aggregation and Style Contrastive Learning

Python · 442 stars
2 years ago

Fast and memory-efficient exact attention

Python · 19.75k stars
4 days ago

Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more.

Jupyter Notebook · 3.13k stars
4 months ago

Open-Sora: a fully open-source, efficient reproduction pipeline for Sora-like video generation

Python · 27.24k stars
5 months ago

Using Low-rank adaptation to quickly fine-tune diffusion models.

Jupyter Notebook · 7.44k stars
2 years ago

Official Pytorch implementation of "Visual Style Prompting with Swapping Self-Attention"

Python · 459 stars
3 months ago

Official implementation of "ResAdapter: Domain Consistent Resolution Adapter for Diffusion Models".

Python · 708 stars
1 year ago

Mora: More like Sora for Generalist Video Generation

Python · 1.57k stars
1 year ago

#Computer Science# Official implementation for "Break-A-Scene: Extracting Multiple Concepts from a Single Image" [SIGGRAPH Asia 2023]

Python · 510 stars
2 years ago

#Large Language Models# Infinity is a high-throughput, low-latency serving engine for text embeddings, reranking models, CLIP, CLAP, and ColPali

Python · 2.47k stars
25 days ago

The simplest, fastest repository for training/finetuning medium-sized GPTs.

Python · 44.77k stars
10 months ago

[CVPR 2024] FRESCO: Spatial-Temporal Correspondence for Zero-Shot Video Translation

Jupyter Notebook · 778 stars
1 year ago

Scene Text Recognition with Permuted Autoregressive Sequence Models (ECCV 2022)

Python · 663 stars
1 year ago

[EMNLP'23, ACL'24] To speed up LLM inference and enhance the model's perception of key information, compress the prompt and KV-Cache, achieving up to 20x compression with minimal performance loss.

Python · 5.12k stars
7 months ago