CUDA
CUDA® is a parallel computing platform and programming model developed by NVIDIA for general computing on graphics processing units (GPUs). With CUDA, developers can dramatically speed up computing applications by harnessing the power of GPUs.
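To make the programming model concrete, here is a minimal sketch of a CUDA C++ program: the host allocates device memory, launches a grid of threads in which each thread processes one element, and copies the result back. It assumes a working CUDA Toolkit and `nvcc`; the kernel and variable names (`vectorAdd`, `da`, `n`, and so on) are illustrative, not taken from any repository listed below.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of elements.
__global__ void vectorAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);

    // Host buffers.
    float *ha = (float*)malloc(bytes);
    float *hb = (float*)malloc(bytes);
    float *hc = (float*)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device buffers.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);

    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;
    vectorAdd<<<blocks, threads>>>(da, db, dc, n);
    cudaDeviceSynchronize();

    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);  // expect 3.0

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compiled with `nvcc vector_add.cu -o vector_add`, this is the standard "hello world" of CUDA; real workloads layer libraries such as those below on top of this host/device pattern.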
Here are 4,200 public repositories matching this topic...
- Build and run Docker containers leveraging NVIDIA GPUs (Makefile, updated Apr 26, 2023)
- kaldi-asr/kaldi is the official location of the Kaldi project. (Shell, updated May 5, 2023)
- Instant neural graphics primitives: lightning fast NeRF and more (Cuda, updated May 3, 2023)
- Open3D: A Modern Library for 3D Data Processing (C++, updated May 10, 2023)
- A flexible framework of neural networks for deep learning (Python, updated Oct 17, 2022)
- Go package for computer vision using OpenCV 4 and beyond. (Go, updated May 6, 2023)
- OneFlow is a deep learning framework designed to be user-friendly, scalable, and efficient. (C++, updated May 11, 2023)
- Containers for machine learning (Python, updated May 8, 2023)
- Tengine is a lightweight, high-performance, modular inference engine for embedded devices. (C++, updated Mar 5, 2023)
- ArrayFire: a general purpose GPU library. (C++, updated May 11, 2023)
- A PyTorch Library for Accelerating 3D Deep Learning Research (Python, updated May 9, 2023)
- Samples for CUDA developers demonstrating features in the CUDA Toolkit (C, updated Apr 6, 2023)
- cuML - RAPIDS Machine Learning Library (C++, updated May 11, 2023)
- HIP: C++ Heterogeneous-Compute Interface for Portability (C++, updated May 11, 2023)
Created by Nvidia
Released June 23, 2007
- Followers: 155
- Website: developer.nvidia.com/cuda-zone
- Wikipedia