#LLM# Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPUs. Join our Discord: https://discord.gg/5xXzkMu8Zk
An open-source project that teaches Windows developers how to add AI to Windows apps using local models and APIs.
Efficient inference of Transformer models
#LLM# An Ollama alternative for the Rockchip NPU: an efficient solution for running AI and deep-learning models on Rockchip devices with optimized NPU support (rkllm)
#LLM# Easy installation and use of the Rockchip NPUs found in the RK3588 and similar SoCs
FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep-learning edge inference
ONNXim is a fast cycle-level simulator that models multi-core NPUs for DNN inference
Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks
#LLM# A high-speed, easy-to-use LLM serving framework for local deployment
An interactive process viewer for Ascend NPUs
#Android# Simplified AI runtime integration for mobile app development