Triton Inference Server

Triton provides a cloud and edge inferencing solution optimized for both CPUs and GPUs. Learn more at https://github.com/triton-inference-server/server.

Pinned

  1. server Public

    The Triton Inference Server provides an optimized cloud and edge inferencing solution.

    Python · 3.4k stars · 804 forks

  2. client Public

    Triton Python, C++, and Java client libraries, plus gRPC-generated client examples for Go, Java, and Scala (a minimal Python client sketch follows this list).

    C++ · 107 stars · 89 forks

  3. backend Public

    Common source, scripts and utilities for creating Triton backends.

    C++ · 76 stars · 30 forks

  4. model_analyzer Public

    Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of models served by Triton Inference Server.

    Python · 115 stars · 35 forks

  5. model_navigator Public

    The Triton Model Navigator automates the process of deploying models on Triton Inference Server.

    Python · 44 stars · 4 forks
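
As a quick orientation for the client libraries pinned above, the sketch below shows a minimal HTTP inference request using the Python client (the tritonclient package built from the client repository). The model name my_model, the tensor names INPUT0/OUTPUT0, and the input shape and dtype are placeholder assumptions; it presumes a Triton server already running at localhost:8000 with a matching model loaded.

    # Minimal sketch: HTTP inference with tritonclient (names below are placeholders).
    import numpy as np
    import tritonclient.http as httpclient

    client = httpclient.InferenceServerClient(url="localhost:8000")

    # Describe and fill the input tensor; the shape and dtype are assumed here.
    infer_input = httpclient.InferInput("INPUT0", [1, 16], "FP32")
    infer_input.set_data_from_numpy(np.random.rand(1, 16).astype(np.float32))

    # Ask the server to return one named output tensor.
    requested = httpclient.InferRequestedOutput("OUTPUT0")

    # Run inference and read the result back as a NumPy array.
    result = client.infer(model_name="my_model", inputs=[infer_input], outputs=[requested])
    print(result.as_numpy("OUTPUT0"))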
