apache/tvm
…#14642)

This PR adds a behavior to the MetaSchedule post-processor
RewriteParallelVectorizeUnroll so that it does not annotate spatial
blocks with the unroll annotation.

The optimization of spatial blocks (e.g., a standalone spatial block in
a GPU kernel) can be achieved purely through thread binding, so
annotating loop unrolling for spatial blocks does not help. In cases
where the unroll factor is very large (e.g., 512 or 1024), unrolling
spatial blocks consumes significant kernel compilation time and
introduces unnecessary overhead.

Therefore, we turn off the behavior of unrolling spatial blocks.
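The decision rule above can be illustrated with a small, hypothetical Python sketch. This is not the actual TVM implementation; the function `annotate_unroll` and its arguments are invented for illustration, though `pragma_auto_unroll_max_step` is the annotation key TVM's post-processor attaches:

```python
def annotate_unroll(block_iter_types, unroll_factor):
    """Hypothetical sketch of the post-processor's decision.

    block_iter_types: iterator kinds of a block, e.g. "spatial" or "reduce".
    unroll_factor: the candidate auto-unroll step (e.g. 512 or 1024).
    """
    # A purely spatial block (no reduction axes) is already covered by
    # thread binding on GPU, so unrolling only inflates compile time.
    if all(t == "spatial" for t in block_iter_types):
        return None  # skip the unroll annotation entirely
    # Blocks with reduction axes can still benefit from unrolling.
    return {"pragma_auto_unroll_max_step": unroll_factor}
```

Under this sketch, a spatial-only block receives no annotation, while a block with a reduction axis keeps the unroll pragma.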

Open Deep Learning Compiler Stack

Documentation | Contributors | Community | Release Notes


Apache TVM is a compiler stack for deep learning systems. It is designed to close the gap between productivity-focused deep learning frameworks and performance- and efficiency-focused hardware backends. TVM works with deep learning frameworks to provide end-to-end compilation to different backends.

License

TVM is licensed under the Apache-2.0 license.

Getting Started

Check out the TVM Documentation site for installation instructions, tutorials, examples, and more. The Getting Started with TVM tutorial is a great place to start.

Contribute to TVM

TVM adopts the Apache committer model; we aim to create an open-source project that is maintained and owned by the community. Check out the Contributor Guide.

Acknowledgement

We learned a lot from the following projects when building TVM.

  • Halide: Part of TVM's TIR and arithmetic simplification module originates from Halide. We also learned and adapted parts of the lowering pipeline from Halide.
  • Loopy: Use of integer set analysis and its loop transformation primitives.
  • Theano: The design inspiration for the symbolic scan operator for recurrence.