#LLM#Private chat with local GPT with documents, images, video, and more. 100% private, Apache 2.0. Supports oLLaMa, Mixtral, llama.cpp, and more. Demo: https://gpt.h2o.ai/ https://gpt-docs.h2o.ai/
#NLP#🌸 Run LLMs at home, BitTorrent-style. Fine-tuning and inference up to 10x faster than offloading
#LLM#Firefly: a training toolkit for large language models, supporting training of Qwen2.5, Qwen2, Yi1.5, Phi-3, Llama3, Gemma, MiniCPM, Yi, Deepseek, Orion, Xverse, Mixtral-8x7B, Zephyr, Mistral, Baichuan2, Llama2, Llama, Qwen, Baichuan, ChatGLM2, InternLM, Ziya2, Vicuna, Bloom, and other large models
#LLM#An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...)
#LLM#A snappy, keyboard-centric terminal user interface for interacting with large language models. Chat with ChatGPT, Claude, Llama 3, Phi 3, Mistral, Gemma and more.
#LLM#:metal: TT-NN operator library and TT-Metalium low-level kernel programming model.
#NLP#Chinese Mixtral Mixture-of-Experts large models (Chinese Mixtral MoE LLMs)
#LLM#[EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a Versatile Compression Toolkit".
#LLM#♾️ Helix is a private GenAI stack for building AI agents with declarative pipelines, knowledge (RAG), API bindings, and first-class testing.
#LLM#🏗️ Fine-tune, build, and deploy open-source LLMs easily!
#LLM#Like grep, but for natural language questions. Based on Mistral 7B or Mixtral 8x7B.
#LLM#The official code for "Aurora: Activating chinese chat capability for Mixtral-8x7B sparse Mixture-of-Experts through Instruction-Tuning"
#NLP#On-device LLM inference powered by X-bit quantization
#LLM#Design, conduct, and analyze the results of AI-powered surveys and experiments. Simulate social science and market research with large numbers of AI agents and LLMs.
Inferflow is an efficient and highly configurable inference engine for large language models (LLMs).