An open-source project that teaches Windows developers how to add AI to Windows apps using local models and APIs.
Efficient Inference of Transformer models
Local LLM Server with GPU and NPU Acceleration
Ollama alternative for Rockchip NPU: an efficient solution for running AI and deep-learning models on Rockchip devices with optimized NPU support (rkllm)
Easy installation and usage of Rockchip's NPUs found in RK3588 and similar SoCs
Free TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep-learning edge inference
ONNXim is a fast cycle-level simulator that can model multi-core NPUs for DNN inference
Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks
High-speed, easy-to-use LLM serving framework for local deployment
An interactive Ascend-NPU process viewer
Simplified AI runtime integration for mobile app development
Run YOLOv7 object detection on Rockchip NPU platforms (RK3566, RK3568, RK3588, RK3588S, RV1103, RV1106, RK3562).