Interpretability and explainability of data and machine learning models
2019-07-11
2025-02-26T03:13:04Z
#Computer Science#Adversarial Robustness Toolbox (ART) - Python Library for Machine Learning Security - Evasion, Poisoning, Extraction, Inference - Red and Blue Teams (evasion-attack sketch after this list)
#Computer Science#A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models (fairness-metric sketch after this list).
#Awesome#A collection of great digital IC projects, tutorials, websites, etc.
#Computer Science#Fit interpretable models. Explain black-box machine learning (glassbox-model sketch after this list).
#Large Language Models#🚀🚀🚀 This repository lists some awesome public CUDA, cuda-python, cuBLAS, cuDNN, CUTLASS, TensorRT, TensorRT-LLM, Triton, TVM, MLIR, PTX and High Performance Computing (HPC) projects.
#Computer Science#scikit-learn is a Python machine learning framework built on SciPy, NumPy, and matplotlib (model-inspection sketch after this list).
List of awesome open source hardware tools, generators, and reusable designs
A curated list of awesome HDL libraries, typical implementations, and references.
#Computer Science#UpTrain is an open-source unified platform to evaluate and improve Generative AI applications. We provide grades for 20+ preconfigured checks (covering language, code, embedding use-cases), perform ro...
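
The ART entry above centers on evasion, poisoning, extraction, and inference attacks; below is a minimal sketch of the evasion workflow, assuming `adversarial-robustness-toolbox` and `scikit-learn` are installed. `SklearnClassifier`, `FastGradientMethod`, and `generate` are ART's documented names, while the Iris data, the logistic-regression victim model, and the `eps` value are illustrative choices, not the library's canonical example.

```python
# Sketch: crafting evasion examples with ART against a scikit-learn model.
# Assumes `pip install adversarial-robustness-toolbox scikit-learn`.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000).fit(X, y)

# Wrap the fitted model so ART can query predictions and gradients.
classifier = SklearnClassifier(model=model, clip_values=(X.min(), X.max()))

# Fast Gradient Method evasion attack; eps is the perturbation budget (illustrative value).
attack = FastGradientMethod(estimator=classifier, eps=0.2)
X_adv = attack.generate(x=X)

print(f"clean accuracy:       {model.score(X, y):.3f}")
print(f"adversarial accuracy: {model.score(X_adv, y):.3f}")
```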
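For the fairness-metrics entry, a sketch of dataset-level bias measurement in the AIF360 style. `BinaryLabelDataset`, `BinaryLabelDatasetMetric`, `disparate_impact`, and `statistical_parity_difference` are AIF360 names; the toy hiring table and the choice of `sex` as the protected attribute are assumptions made only for illustration.

```python
# Sketch: dataset-level fairness metrics with AIF360 (`pip install aif360`).
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Toy data: binary label `hired`, binary protected attribute `sex` (1 = privileged group).
df = pd.DataFrame({
    "sex":   [1, 1, 1, 1, 0, 0, 0, 0],
    "score": [0.9, 0.8, 0.4, 0.7, 0.6, 0.5, 0.3, 0.8],
    "hired": [1, 1, 0, 1, 1, 0, 0, 0],
})

dataset = BinaryLabelDataset(
    df=df,
    label_names=["hired"],
    protected_attribute_names=["sex"],
)

metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{"sex": 1}],
    unprivileged_groups=[{"sex": 0}],
)

# Ratio and difference of favorable-outcome rates between the two groups.
print("disparate impact:", metric.disparate_impact())
print("statistical parity difference:", metric.statistical_parity_difference())
```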
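For the "fit interpretable models" entry, a sketch using InterpretML's Explainable Boosting Machine. `ExplainableBoostingClassifier` and `explain_global` come from InterpretML's glassbox API; the breast-cancer dataset and default hyperparameters are illustrative stand-ins.

```python
# Sketch: training a glassbox model with InterpretML (`pip install interpret`).
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Explainable Boosting Machine: an additive model whose per-feature terms can be inspected.
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
print("test accuracy:", ebm.score(X_test, y_test))

# Global explanation object with per-feature contribution curves and importances.
global_explanation = ebm.explain_global()
# In a notebook: from interpret import show; show(global_explanation)
```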
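Staying with the section's interpretability theme, a sketch of model inspection using scikit-learn alone. `permutation_importance` is part of scikit-learn's `sklearn.inspection` module; the random-forest model and dataset are illustrative choices.

```python
# Sketch: model-agnostic feature importance with scikit-learn's inspection module.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Permute each feature on held-out data and measure the resulting drop in score.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranking = result.importances_mean.argsort()[::-1]
for idx in ranking[:5]:
    print(f"{X.columns[idx]}: {result.importances_mean[idx]:.4f}")
```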