GPT4All: Run Local LLMs on Any Device. Open-source and available for commercial use.
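A minimal sketch of what running a model locally with the gpt4all Python bindings can look like; the model filename is only an example and is downloaded on first use.

```python
# Sketch: local inference via the gpt4all Python package (assumes `pip install gpt4all`).
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")  # example model file, fetched on first use
with model.chat_session():
    print(model.generate("Name three uses of a local LLM.", max_tokens=128))
```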
#LLM# Ray is an AI compute engine. Ray consists of a core distributed runtime and a set of AI libraries for accelerating ML workloads.
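A minimal sketch of Ray's core task API, showing how a plain function becomes a distributed task; the function and inputs are illustrative.

```python
# Sketch: parallelizing a function with Ray remote tasks (assumes `pip install ray`).
import ray

ray.init()  # starts a local Ray runtime

@ray.remote
def square(x):
    return x * x

# Launch the tasks in parallel and collect their results.
print(ray.get([square.remote(i) for i in range(4)]))  # [0, 1, 4, 9]
```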
#LLM# Gitleaks is an open-source SAST (static application security testing) command-line tool that scans Git repositories to prevent secrets such as passwords, API keys, and access tokens from being hardcoded into source code.
#LLM# This project shares the technical principles behind large language models along with hands-on experience (LLM engineering and real-world application deployment).
#LLM# 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
#LLM# Run any open-source LLM, such as DeepSeek and Llama, as an OpenAI-compatible API endpoint in the cloud.
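Because the server exposes an OpenAI-compatible API, a standard OpenAI client can talk to it; the base_url, api_key, and model id below are placeholders for whatever the deployment actually serves.

```python
# Sketch: querying an OpenAI-compatible endpoint with the official openai client.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:3000/v1", api_key="not-needed")  # placeholder endpoint
resp = client.chat.completions.create(
    model="llama-3.1-8b-instruct",  # placeholder model id exposed by the server
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```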
#LLM# Official inference library for Mistral models
#NLP# OpenVINO™ is an open source toolkit for optimizing and deploying AI inference
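A minimal sketch of the OpenVINO Runtime flow of read, compile, and infer; the IR path and input shape are placeholders.

```python
# Sketch: compiling and running a model with OpenVINO Runtime (assumes `pip install openvino`).
import numpy as np
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")         # placeholder path to an IR model
compiled = core.compile_model(model, "CPU")  # pick a target device
outputs = compiled(np.zeros((1, 3, 224, 224), dtype=np.float32))  # example input shape
```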
#LLM# PowerInfer is a fast LLM serving engine that runs on consumer-grade GPUs and personal computers.
#LLM# The easiest way to serve AI apps and models - Build Model Inference APIs, Job queues, LLM apps, Multi-model pipelines, and more!
#LLM# LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
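A minimal sketch of LMDeploy's offline pipeline interface, assuming a supported Hugging Face model id; both the model and the prompt are illustrative.

```python
# Sketch: offline batched inference with LMDeploy's pipeline API (assumes `pip install lmdeploy`).
from lmdeploy import pipeline

pipe = pipeline("internlm/internlm2-chat-7b")  # example model id
responses = pipe(["Summarize what an inference engine does."])
print(responses[0].text)
```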
#Vector Search Engine# Superduper: End-to-end framework for building custom AI applications and agents.
#Computer Science# Standardized Serverless ML Inference Platform on Kubernetes
📚 A curated list of awesome LLM inference papers with code.
Eko (Eko Keeps Operating) - Build production-ready agentic workflows with natural language - eko.fellou.ai
#LLM# Generative AI reference workflows optimized for accelerated infrastructure and microservice architecture.
FlashInfer: Kernel Library for LLM Serving
#NLP# Sparsity-aware deep learning inference runtime for CPUs
#LLM# Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs
#LLM# Simple, scalable AI model deployment on GPU clusters