An open-source project for Windows developers to learn how to add AI with local models and APIs to Windows apps.
Efficient Inference of Transformer models
#LLM# Ollama alternative for Rockchip NPU: an efficient solution for running AI and deep-learning models on Rockchip devices with optimized NPU support (rkllm)
#LLM# Easy usage of the Rockchip NPUs found in RK3588 and similar chips
FREE TPU V3plus for FPGA is the free version of a commercial AI processor (EEP-TPU) for deep-learning edge inference
Hardware design of a universal NPU (CNN accelerator) for various convolutional neural networks
#LLM# High-speed and easy-to-use LLM serving framework for local deployment
#Android# Simplified AI runtime integration for mobile app development
Run YOLOv7 object detection on Rockchip NPU platforms (RK3566, RK3568, RK3588, RK3588S, RV1103, RV1106, RK3562); see the inference sketch after this list.
Advanced driver-assistance system with Google Coral Edge TPU Dev Board / USB Accelerator, Intel Movidius NCS (neural compute stick), Myriad 2/X VPU, Gyrfalcon 2801 Neural Accelerator, NVIDIA Jetson N...
#LLM# EmbeddedLLM: API server for embedded device deployment. Currently supports CUDA/OpenVINO/IpexLLM/DirectML/CPU; see the client sketch after this list.
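
For the Rockchip NPU entries above (such as the YOLOv7 port), on-device inference typically goes through the rknn_toolkit_lite2 runtime. The following is a minimal sketch, assuming a model already converted to a `.rknn` file on a host PC and a 640x640 input; the file paths, input size, and lack of post-processing are illustrative assumptions, not details taken from the listed project.

```python
# Minimal RKNNLite inference sketch (assumes rknn_toolkit_lite2 is installed on the board
# and the model was already converted to .rknn with rknn-toolkit2 on a host PC).
import cv2
import numpy as np
from rknnlite.api import RKNNLite

MODEL_PATH = "./yolov7.rknn"   # placeholder path, not from the original listing
IMG_SIZE = 640                 # assumed YOLOv7 input resolution

rknn = RKNNLite()
if rknn.load_rknn(MODEL_PATH) != 0:
    raise RuntimeError("failed to load rknn model")

# init_runtime() accepts a core mask on RK3588; the default is fine elsewhere.
if rknn.init_runtime() != 0:
    raise RuntimeError("failed to init NPU runtime")

img = cv2.imread("test.jpg")                      # placeholder test image
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (IMG_SIZE, IMG_SIZE))

# NHWC uint8 input; the exact layout and quantization depend on how the model was exported.
outputs = rknn.inference(inputs=[np.expand_dims(img, 0)])
print([o.shape for o in outputs])                 # raw detection heads; decoding/NMS is model-specific

rknn.release()
```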
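Several of the serving projects above (the local LLM serving framework and EmbeddedLLM) expose an HTTP API for chat completion. The sketch below assumes an OpenAI-compatible `/v1/chat/completions` endpoint on `localhost:8000` and a model named `my-local-model`; the host, port, and model name are placeholders and may differ per project.

```python
# Hedged client sketch for an OpenAI-compatible local LLM server.
# The endpoint, port, and model name are assumptions for illustration only.
import requests

BASE_URL = "http://localhost:8000/v1"   # assumed server address
MODEL = "my-local-model"                # assumed model identifier

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    json={
        "model": MODEL,
        "messages": [{"role": "user", "content": "Summarize what an NPU is in one sentence."}],
        "max_tokens": 128,
    },
    timeout=60,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```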