Collective Knowledge extension providing unified and customizable benchmarks (with extensible JSON meta information) that can be easily integrated into portable Collective Knowledge workflows. You can compile and run these benchmarks with different compilers, environments, hardware, and operating systems (Linux, macOS, Windows, Android). More info:
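As a rough illustration of what "extensible JSON meta information" means in practice, the sketch below builds a benchmark meta description in Python and round-trips it through JSON. The field names here are hypothetical assumptions for illustration, not the actual CK schema:

```python
import json

# Hypothetical benchmark meta information; the exact schema used by
# Collective Knowledge may differ (field names are assumptions).
meta = {
    "data_name": "image-classification-benchmark",  # assumed field name
    "tags": ["benchmark", "portable", "crowd-tuning"],
    "compile_deps": {                               # assumed structure
        "compiler": {"tags": "compiler,lang-cpp"}
    },
    "run_vars": {"OMP_NUM_THREADS": "4"},
    "extensible": {}  # extra keys can be added without breaking consumers
}

# Serialize and reload to confirm the meta information round-trips as JSON.
serialized = json.dumps(meta, indent=2, sort_keys=True)
restored = json.loads(serialized)
print(restored["data_name"])
```

Because the meta is plain JSON, workflows can attach new keys (new hardware descriptions, new tuning dimensions) without invalidating existing consumers.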
Collective Knowledge crowd-tuning extension that lets users crowdsource their experiments (using portable Collective Knowledge workflows), such as performance benchmarking, autotuning, and machine learning, across diverse volunteer platforms running Linux, Windows, macOS, and Android. Demo of DNN crowd-benchmarking and crowd-tuning:
Crowdsourcing of video experiments (such as collaborative benchmarking and optimization of DNN algorithms) using the Collective Knowledge Framework across diverse Android devices provided by volunteers. Results are continuously aggregated in the open repository:
cBench provides a unified CLI and API for reproducing results from ML & systems research papers on bare-metal platforms and for participating in collaborative benchmarking and optimization via live scoreboards. See a real-world example based on the MLPerf benchmark:
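Reproducing a paper's result ultimately means comparing a locally measured metric against a reference entry from a scoreboard. The sketch below shows one plausible way to do that check in Python; it does not model the cBench API itself, and all numbers and key names are made up for illustration:

```python
# Hypothetical sketch of validating a reproduced result against a reference
# entry from a live scoreboard; the cBench API itself is not modeled here.
def within_tolerance(measured, reference, rel_tol=0.05):
    """Check that a measured metric matches the reference within rel_tol."""
    if reference == 0:
        return measured == 0
    return abs(measured - reference) / abs(reference) <= rel_tol

# Reference and local numbers are invented for illustration only.
reference_result = {"throughput_img_per_sec": 1250.0, "top1_accuracy": 0.761}
local_result = {"throughput_img_per_sec": 1228.4, "top1_accuracy": 0.761}

reproduced = all(
    within_tolerance(local_result[k], reference_result[k])
    for k in reference_result
)
print("reproduced:", reproduced)
```

A relative tolerance is used because absolute run times and throughputs vary across bare-metal platforms even when a result is faithfully reproduced.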
Cross-platform Python client for the CodeReef.ai portal, used to manage portable workflows, reusable automation actions, software detection plugins, meta-packages, and dashboards for crowd-benchmarking:
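To give a flavor of how such a client might talk to the portal, here is a minimal sketch that assembles a JSON request for a portal action. The endpoint path, action name, and payload fields are assumptions for illustration, not the actual CodeReef API:

```python
import json

# Assumed endpoint for illustration; the real portal API may differ.
PORTAL_URL = "https://codereef.ai/portal/api"

def build_request(action, **params):
    """Build a JSON payload for a hypothetical portal action,
    such as listing available workflow components."""
    return json.dumps({"action": action, "dict": params})

# Example: request a list of workflow components (names are hypothetical).
payload = build_request("list_components", component_type="workflow")
print(payload)
```

A real client would POST this payload to the portal over HTTPS and handle authentication; only the payload construction is sketched here.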
Development version of CodeReefied portable CK workflows for image classification and object detection. Stable "live" versions are available at the CodeReef portal: