NVIDIA Corporation is a technology company that designs graphics processing units (GPUs), system-on-chip units for mobile and embedded devices, and high-performance computing platforms. Its integrated circuits are used in everything from game consoles to personal computers (PCs), and the company is the leading supplier of high-end GPUs.
Founded by Jensen Huang, Chris Malachowsky, and Curtis Priem
Founded on April 5, 1993
#Computer Science# Machine Learning Containers for NVIDIA Jetson and JetPack-L4T
#Computer Science# Unofficial implementation of "Image Inpainting for Irregular Holes Using Partial Convolutions". Try it at: www.fixmyphoto.ai
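For context, a minimal sketch of the partial-convolution operation the paper describes: the convolution sees only valid pixels, the output is renormalized by the valid-pixel count, and the hole mask shrinks layer by layer. This is an illustration of the technique, not the repository's actual layer:

```python
import torch
import torch.nn.functional as F

def partial_conv2d(x, mask, weight, bias, padding=1):
    # x: (N, C, H, W) image; mask: (N, 1, H, W), 1 = valid pixel, 0 = hole.
    kh, kw = weight.shape[2], weight.shape[3]
    # The convolution only sees valid pixels.
    out = F.conv2d(x * mask, weight, bias=None, padding=padding)
    # Count valid pixels under each sliding window.
    ones = torch.ones(1, 1, kh, kw, device=x.device)
    valid = F.conv2d(mask, ones, padding=padding)
    # Renormalize by window_size / valid_count; pure-hole windows output 0.
    scale = torch.where(valid > 0, (kh * kw) / valid, torch.zeros_like(valid))
    out = out * scale + bias.view(1, -1, 1, 1)
    out = torch.where(valid > 0, out, torch.zeros_like(out))
    # Mask update: any window touching a valid pixel becomes valid.
    return out, (valid > 0).float()

x = torch.randn(1, 3, 64, 64)
mask = (torch.rand(1, 1, 64, 64) > 0.25).float()   # ~25% of pixels are holes
w, b = torch.randn(16, 3, 3, 3), torch.zeros(16)
y, new_mask = partial_conv2d(x, mask, w, b)        # y: (1, 16, 64, 64)
```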
cuCIM - RAPIDS GPU-accelerated image processing library
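A minimal usage sketch, assuming cucim and cupy are installed: cuCIM mirrors the scikit-image API on CuPy arrays, so familiar calls run on the GPU.

```python
import cupy as cp
from cucim.skimage import exposure, filters

img = cp.random.rand(2048, 2048).astype(cp.float32)  # stand-in for a real image
smoothed = filters.gaussian(img, sigma=2.0)          # Gaussian blur on the GPU
equalized = exposure.equalize_hist(smoothed)         # histogram equalization on the GPU
print(type(equalized))                               # cupy.ndarray: data never left the device
```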
Home for cuQuantum Python & NVIDIA cuQuantum SDK C++ samples
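A minimal sketch of the cuQuantum Python entry point, the top-level einsum-style contract function (exact import paths can vary between releases):

```python
import numpy as np
from cuquantum import contract

a = np.random.rand(8, 8)
b = np.random.rand(8, 8)
c = contract("ij,jk->ik", a, b)   # tensor-network contraction, executed on the GPU
print(np.allclose(c, a @ b))      # agrees with the CPU matrix product
```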
#Computer Science# Deep Learning Autonomous Car based on Raspberry Pi, SunFounder PiCar-V Kit, TensorFlow, and Google's EdgeTPU Co-Processor
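For flavor, a hedged sketch of the kind of end-to-end steering model such projects train, in the spirit of NVIDIA's PilotNet; the layer sizes here are illustrative, not the repository's exact architecture:

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pilotnet(input_shape=(66, 200, 3)):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Lambda(lambda x: x / 127.5 - 1.0),         # normalize pixels to [-1, 1]
        layers.Conv2D(24, 5, strides=2, activation="elu"),
        layers.Conv2D(36, 5, strides=2, activation="elu"),
        layers.Conv2D(48, 5, strides=2, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Conv2D(64, 3, activation="elu"),
        layers.Flatten(),
        layers.Dense(100, activation="elu"),
        layers.Dense(50, activation="elu"),
        layers.Dense(10, activation="elu"),
        layers.Dense(1),                                  # predicted steering angle
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
```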
The AWS virtual GPU device plugin provides the capability to use smaller virtual GPUs for your machine learning inference workloads
#Computer Science# 📁 This repository hosts a growing collection of AI blueprint projects that run end-to-end using Jupyter notebooks, MLflow deployments, and Streamlit web apps. 🛠️ All projects are built using HP AI St...
✨ 1-Click Free GPU on VS Code with Google Colab
Multispeaker & Emotional TTS based on Tacotron 2 and WaveGlow
#Computer Science# Deploy a Stable Diffusion model with ONNX/TensorRT + Triton Inference Server
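A hedged sketch of how a client might call such a deployment through Triton's HTTP API; the model name and tensor names below are assumptions for illustration, not the repository's actual configuration:

```python
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Hypothetical tensor/model names; check the repo's model config for the real ones.
prompt = np.array([b"a photograph of an astronaut riding a horse"], dtype=np.object_)
inp = httpclient.InferInput("PROMPT", list(prompt.shape), "BYTES")
inp.set_data_from_numpy(prompt)

result = client.infer(model_name="stable_diffusion", inputs=[inp])
image = result.as_numpy("IMAGE")   # generated image tensor from the pipeline
```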
#Computer Science# nvImageCodec: a library of GPU- and CPU-accelerated codecs featuring a unified interface
#Face Recognition# YOLOv5 and YOLOv8 segmentation, face, pose, and keypoint models on DeepStream
#Large Language Model# lm-scratch-pytorch - The code is designed to be beginner-friendly, with a focus on understanding the fundamentals of PyTorch and implementing LLMs from scratch, step by step.
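In the same from-scratch spirit, a minimal causal self-attention head in plain PyTorch, the core building block of such models (an illustrative sketch, not code from the repository):

```python
import math
import torch
import torch.nn.functional as F

def causal_self_attention(x, w_q, w_k, w_v):
    # x: (T, d) token embeddings; w_q/w_k/w_v: (d, d_head) projection matrices.
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / math.sqrt(k.shape[-1])            # scaled dot products
    T = x.shape[0]
    mask = torch.triu(torch.ones(T, T, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))     # no attending to the future
    return F.softmax(scores, dim=-1) @ v                 # weighted sum of values

x = torch.randn(5, 32)                        # 5 tokens, 32-dim embeddings
w = [torch.randn(32, 16) for _ in range(3)]
out = causal_self_attention(x, *w)            # (5, 16)
```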
Self-driving AI toy car 🤖🚗.
NVIDIA BioNeMo blueprint for generative AI-based virtual screening
Speaker identification/verification models for the Machine Learning for Computer Vision class at UNIBO
#Time Series Database# Rapid large-scale fractional differencing with NVIDIA RAPIDS and GPUs to minimize memory loss while making a time series stationary. 6x-400x speed-up over the CPU implementation.
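A CPU sketch of the underlying arithmetic, de Prado-style fixed-window fractional differencing, which the repository accelerates on the GPU; the window and d values below are illustrative:

```python
import numpy as np

def fracdiff_weights(d, size):
    # w_0 = 1; w_k = -w_{k-1} * (d - k + 1) / k
    w = [1.0]
    for k in range(1, size):
        w.append(-w[-1] * (d - k + 1) / k)
    return np.array(w)

def fracdiff(series, d, window=100):
    w = fracdiff_weights(d, window)[::-1]     # reversed: oldest observation first
    return np.array([w @ series[t - window + 1 : t + 1]
                     for t in range(window - 1, len(series))])

prices = 100 + np.cumsum(0.5 * np.random.randn(1000))  # toy random-walk series
stationary = fracdiff(prices, d=0.4)  # differenced enough to be stationary, memory preserved
```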
#Computer Science# NVIDIA DLI workshop on AI-based anomaly detection techniques using GPU-accelerated XGBoost, deep learning-based autoencoders, and generative adversarial networks (GANs), and then implement and compare ...
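A hedged sketch of the XGBoost piece of such a workshop: GPU-accelerated training of an anomaly classifier. The data below is a synthetic stand-in invented for illustration; the workshop uses real network-traffic features.

```python
import numpy as np
import xgboost as xgb

# Synthetic stand-in features and labels, for illustration only.
X = np.random.rand(10_000, 20)
y = (X[:, 0] + 0.5 * np.random.rand(10_000) > 1.2).astype(int)

clf = xgb.XGBClassifier(tree_method="hist", device="cuda", n_estimators=200)
clf.fit(X, y)                         # trains on the GPU (XGBoost >= 2.0 API)
scores = clf.predict_proba(X)[:, 1]   # per-row anomaly probability
```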