Triton Inference Server
Triton provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Learn more at https://github.com/triton-inference-server/server.
Pinned Repositories
- dali_backend: the Triton backend for running GPU-accelerated data pre-processing pipelines implemented with DALI's Python API (see the pipeline sketch after this list).
- model_analyzer: Triton Model Analyzer, a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server (see the example invocation after this list).
- python_backend: a Triton backend that lets pre-processing, post-processing, and other logic be implemented in Python (see the model sketch after this list).
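For illustration, a pipeline served by dali_backend is defined with DALI's Python API and serialized into the Triton model repository. A minimal sketch, assuming a typical image decode-and-resize workload; the input name DALI_INPUT_0, the batch size, and the output filename are illustrative placeholders, not fixed by the backend:

    from nvidia.dali import pipeline_def, fn, types

    @pipeline_def(batch_size=8, num_threads=4, device_id=0)
    def preprocessing_pipeline():
        # Encoded image bytes arrive from the Triton request via external_source.
        images = fn.external_source(device="cpu", name="DALI_INPUT_0")
        # Decode on the GPU ("mixed" device) and resize to the model's input size.
        images = fn.decoders.image(images, device="mixed", output_type=types.RGB)
        images = fn.resize(images, resize_x=224, resize_y=224)
        return images

    if __name__ == "__main__":
        # dali_backend loads a serialized pipeline file from the model repository.
        preprocessing_pipeline().serialize(filename="model.dali")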
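Model Analyzer is driven from the command line against a Triton model repository. A hedged example invocation, where the repository path and model name are placeholders and the flags follow the Model Analyzer documentation:

    model-analyzer profile --model-repository /path/to/model_repository --profile-models my_model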
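A python_backend model is a model.py file implementing the TritonPythonModel interface. A minimal sketch, assuming tensor names INPUT0 and OUTPUT0 that would have to match the model's config.pbtxt:

    import numpy as np
    import triton_python_backend_utils as pb_utils

    class TritonPythonModel:
        def execute(self, requests):
            responses = []
            for request in requests:
                # Read the input tensor, apply arbitrary Python logic,
                # and return the result as the output tensor.
                in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
                scaled = in0.as_numpy().astype(np.float32) * 2.0
                out0 = pb_utils.Tensor("OUTPUT0", scaled)
                responses.append(pb_utils.InferenceResponse(output_tensors=[out0]))
            return responses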