Here are 40 public repositories matching this topic.
In this repository, I will share some useful notes and references about deploying deep learning-based models in production.
Code samples for the Lightbend tutorial on writing microservices with Akka Streams, Kafka Streams, and Kafka (Scala, updated May 30, 2019).
Common library for serving TensorFlow, XGBoost and scikit-learn models in production.
flink-jpmml is a library for dynamic, real-time machine learning predictions, built on top of standard PMML models and the Apache Flink streaming engine (Scala, updated May 9, 2019).
A FastAPI skeleton app for serving machine learning models in production (Python, updated Aug 29, 2020).
A scalable, high-performance serving system for federated learning models (Java, updated Sep 10, 2020).
Code and presentation for the Strata Model Serving tutorial (Scala, updated Sep 26, 2019).
An umbrella project for multiple implementations of model serving (Scala, updated Sep 18, 2017).
fastText model serving service (Rust, updated Sep 12, 2020).
mlserve turns your Python models into a RESTful API and serves a web page with a form generated to match your input data (Python, updated Sep 4, 2020).
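Several of the repositories in this list automate the same underlying pattern: wrap a model's predict function behind a JSON-over-HTTP endpoint. A minimal standard-library sketch of that pattern is below; the stand-in model, port, and payload field names are hypothetical placeholders, not the API of mlserve or any other library listed here.

```python
# Minimal sketch of a model-serving endpoint using only the stdlib.
# A real serving library adds batching, validation, metrics, etc.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer


def predict(features):
    # Stand-in for a real model: returns the sum of the inputs.
    return sum(features)


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Parse the JSON request body, e.g. {"features": [1.0, 2.0]}.
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length))
        result = {"prediction": predict(payload["features"])}
        body = json.dumps(result).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)


# To serve (blocks the current thread):
# HTTPServer(("127.0.0.1", 8000), PredictHandler).serve_forever()
```

A client would then POST `{"features": [1, 2, 3]}` to the endpoint and receive `{"prediction": 6}` back.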
Deploy DL/ML inference pipelines with minimal extra code (Python, updated Aug 7, 2020).
BentoML Example Projects Gallery (Jupyter Notebook, updated Sep 13, 2020).
Kubeflow example of machine learning/model serving (Jupyter Notebook, updated Jan 11, 2020).
A collection of model deployment libraries and techniques.
Generic Model Serving Implementation leveraging Flink
Titus 2: Portable Format for Analytics (PFA) implementation for Python 3.4+ (Python, updated Apr 19, 2020).
Production-ready templates for deploying Driverless AI (DAI) scorers (Java, updated Sep 10, 2020).
Implementation of model serving in pipelines (Scala, updated Nov 11, 2019).
Speculative model serving with Flink (Scala, updated Sep 24, 2018).
Experimental implementation of speculative model serving (Scala, updated May 30, 2019).
TensorFlow Serving with Docker / Docker Compose (Python, updated Jan 27, 2020).
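For context on this entry, the documented way to run TensorFlow Serving under Docker is to mount a SavedModel directory into the container and query its REST API. The model path and the model name `my_model` below are placeholders; substitute your own.

```shell
# Pull the image and serve a SavedModel over REST on port 8501.
# /path/to/saved_model and "my_model" are placeholders.
docker pull tensorflow/serving
docker run -p 8501:8501 \
  --mount type=bind,source=/path/to/saved_model,target=/models/my_model \
  -e MODEL_NAME=my_model -t tensorflow/serving

# Query the REST predict endpoint (the instances payload is a placeholder):
curl -d '{"instances": [[1.0, 2.0]]}' \
  http://localhost:8501/v1/models/my_model:predict
```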
Serve deep learning models easily (Python, updated Jul 14, 2020).
Machine learning logistics and serving platform (JavaScript, updated Aug 17, 2020).
🍦 Serve doddle-model in a pipeline implemented with Apache Beam (Scala, updated Nov 19, 2018).
A wiki for discussion of FlinkML concepts.
Serving layer for large machine learning models on Apache Flink
Is your feature request related to a problem? Please describe.
We have a mechanism for capturing logs in production that does not require collecting log files. However, there is no option to disable the generation of local log files.
Describe the solution you'd like
A configuration option that lets the user disable logging to files.
Describe alternatives you've considered
Accept the default behavior.