Deploy DL/ML inference pipelines with minimal extra code.
Topics: python, docker, deep-learning, websocket, gunicorn, pytorch, falcon, http-server, triton, gevent, inference-server, tensorflow-serving, streaming-audio, model-deployment, model-serving, serving, tf-serving, torchserve, triton-inference-server, triton-server
Updated Nov 11, 2022 - JavaScript
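The topics above describe a common serving pattern: wrap a trained model behind an HTTP endpoint so clients can POST inputs and receive predictions. As a minimal sketch of that pattern using only the Python standard library (a real deployment would typically put a Falcon app behind gunicorn/gevent, or use Triton, TorchServe, or TF Serving as the tags suggest), where `DummyModel` and the `/predict` route are illustrative assumptions, not this project's API:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen


class DummyModel:
    """Stand-in for a real PyTorch/TensorFlow model (assumption for this sketch)."""

    def predict(self, inputs):
        # Pretend inference: double every input value.
        return [2 * x for x in inputs]


MODEL = DummyModel()  # load once at startup, reuse across requests


class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        if self.path != "/predict":
            self.send_error(404)
            return
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        outputs = MODEL.predict(payload["inputs"])
        body = json.dumps({"outputs": outputs}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        # Silence per-request logging for the demo.
        pass


def serve(port=0):
    """Start the server on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), InferenceHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server


if __name__ == "__main__":
    srv = serve()
    req = Request(
        f"http://127.0.0.1:{srv.server_port}/predict",
        data=json.dumps({"inputs": [1, 2, 3]}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urlopen(req) as resp:
        print(json.loads(resp.read()))  # {'outputs': [2, 4, 6]}
    srv.shutdown()
```

Loading the model once at process start and keeping the handler stateless is the key design choice: it lets a process manager like gunicorn fan out multiple workers, each holding its own model copy, without any per-request load cost.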