Use your locally running AI models to assist you in your web browsing
#LLM# A generalized information-seeking agent system built with Large Language Models (LLMs).
Model swapping for llama.cpp (or any local OpenAI-compatible server)
#NLP# [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization
#NLP# [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization
A .NET library for building AI systems with 100+ LLM APIs: Anthropic, Azure, Cohere, DeepInfra, DeepSeek, Google, Groq, Mistral, Ollama, OpenAI, OpenRouter, Perplexity, vLLM, Voyage, xAI, and many more!
A nifty little library for working with Ollama in Elixir.
#LLM# Run open-source/open-weight LLMs locally with OpenAI-compatible APIs
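Tools like this expose the standard OpenAI chat-completions route, so a stock HTTP client works by pointing at the local base URL. A minimal sketch, assuming a server on `http://localhost:8080/v1` and a placeholder model name (both are assumptions, not any specific repo's defaults):

```python
import json
import urllib.request

BASE_URL = "http://localhost:8080/v1"  # assumption: adjust to your local server


def build_chat_payload(prompt, model="local-model"):
    """Build a request body in the OpenAI chat-completions format."""
    return {
        "model": model,  # placeholder name; local servers often ignore or remap it
        "messages": [{"role": "user", "content": prompt}],
    }


def chat(prompt, model="local-model"):
    """POST the payload to the local server and return the reply text."""
    req = urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(build_chat_payload(prompt, model)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Because the wire format matches OpenAI's, official SDKs also work unchanged once their base URL is overridden.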
The official PyVisionAI repository
#LLM# MVP of an idea using multiple local LLMs to simulate and play D&D
Run multiple resource-heavy Large Models (LMs) on the same machine with a limited amount of VRAM and other resources by exposing them on different ports and loading/unloading them on demand
#LLM# Chat with your PDF using your local LLM via the Ollama client. (incomplete)
#LLM# Fenix AI trading bot built with CrewAI and Ollama
#Security# The client for the Symmetry peer-to-peer inference network, enabling users to connect with each other, share computational resources, and collect valuable machine learning data.
A local chatbot for managing documents