ncnn is a high-performance neural network inference framework optimized for the mobile platform
Concrete: a TFHE compiler that converts Python programs into their FHE equivalents
SHARK - High Performance Machine Learning Distribution
BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads.
MegCC is an ultra-lightweight, efficient, and easily portable deep learning model compiler
Highly optimized inference engine for Binarized Neural Networks
VAST is an experimental compiler pipeline designed for program analysis of C and C++. It provides a tower of IRs as MLIR dialects to choose the best fit representations for a program analysis or further program abstraction.
Play with MLIR right in your browser
C++ compiler for heterogeneous quantum-classical computing built on Clang and XACC