#LLM# Your AI second brain. Self-hostable. Get answers from the web or your docs. Build custom agents, schedule automations, do deep research. Turn any online or local LLM into your personal, autonomous AI ...
#LLM# Chat offline with open-source LLMs like deepseek-r1, nemotron, qwen, llama, and more, all through a simple R package powered by Shiny and Ollama. 🚀
Obrew Studio - Server: A self-hostable machine learning engine. Build agents and schedule workflows private to you.
Claude Deep Research config for Claude Code.
#LLM# Attempt to summarize text from `stdin`, using a large language model (locally and offline), to `stdout`.
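The stdin-to-stdout summarization pattern above can be sketched as a short filter script. This is a minimal illustration, not the project's actual code: it assumes a local Ollama server on its default port (11434), and the model name and endpoint are assumptions for the example.

```python
# Sketch of a stdin -> local LLM -> stdout summarizer, assuming a local
# Ollama server. The endpoint and model name below are illustrative
# assumptions, not taken from the project itself.
import json
import sys
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # assumed local endpoint


def build_prompt(text: str) -> str:
    """Wrap the raw input in a simple summarization instruction."""
    return "Summarize the following text in a few sentences:\n\n" + text


def summarize(text: str, model: str = "llama3.2") -> str:
    """POST the prompt to the local Ollama server and return its reply."""
    payload = json.dumps({"model": model,
                          "prompt": build_prompt(text),
                          "stream": False}).encode()
    req = urllib.request.Request(OLLAMA_URL, data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]


# Typical use as a filter:  cat notes.txt | python summarize.py
# sys.stdout.write(summarize(sys.stdin.read()))
```

The two usage lines are left commented out so the module can be imported without blocking on stdin; a real filter would run them unconditionally.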
Run Mistral, LLaMA, and DeepSeek locally on Windows with zero setup — no Python required.
Offline AI assistant plugin for Obsidian using encrypted local LLM models.
#LLM# A private, free, offline-first chat application powered by open-source AI models like DeepSeek, Llama, Mistral, etc., through Ollama.
#LLM# Ready-to-deploy offline LLM AI web chat.
A lightweight local LLM chat with a web UI and a C‑based server that runs any LLM chat executable as a child and communicates via pipes.
Lightweight offline AI assistant for Windows 11 with voice and GUI support. Built with HuggingFace, Tkinter, and DirectML for fast local inference.
A containerized, offline-capable LLM API powered by Ollama. Automatically pulls models and serves them via a REST API. Perfect for homelab, personal AI assistants, and portable deployments.
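A containerized Ollama API of the kind this entry describes might look like the following Compose sketch. This is an assumption-laden example, not the project's actual configuration; it uses the official `ollama/ollama` image, its default port 11434, and a named volume for the model cache.

```yaml
# Minimal sketch of a containerized Ollama deployment (illustrative only):
# the official image serving its REST API on the default port, with
# pulled models persisted in a named volume.
services:
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama_models:/root/.ollama
    restart: unless-stopped

volumes:
  ollama_models:
```

Once the container is up, the REST API answers on the mapped port, e.g. `curl http://localhost:11434/api/tags` lists the locally pulled models.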
Optimize your voice AI experience with Faster-Local-Voice-AI. Achieve low-latency STT and TTS on Ubuntu, all offline and fully configurable. 🚀💻