A service for autodiscovery and configuration of applications running in containers
Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
Experiments with the Tigress software protection: breaking some of its protections and solving its reverse engineering challenges. Automatic deobfuscation using symbolic execution, taint analysis, and LLVM.
Automatic ROPChain Generation
SymGDB - symbolic execution plugin for gdb
(WIP) A simple, lightweight, fast, pipelined deployment framework for algorithm services, designed for reliability, high concurrency, and scalability.
ClearML - Model-Serving Orchestration and Repository Solution
Deploy DL/ML inference pipelines with minimal extra code.
Hardware-accelerated DNN model inference ROS2 packages using NVIDIA Triton/TensorRT for both Jetson and x86_64 with CUDA-capable GPU
Triton Operating System
Static analysis & deobfuscation framework for x86/x64
COIN Attacks: on Insecurity of Enclave Untrusted Interfaces in SGX - ASPLOS 2020
Binary Ninja plugin that applies Triton's dead store elimination pass to basic blocks or functions.
Symbolic debugging tool using JonathanSalwan/Triton
Deep learning model support for object detection including DetectNet
Three examples of recommendation system pipelines with NVIDIA Merlin and Redis