
mlir

Here are 36 public repositories matching this topic...

seldridge commented Apr 29, 2022

We are missing an optimization to build reduction ops when possible. Consider the following, which computes b[0] | b[1]:

module {
  hw.module @Foo(%a: i1, %b: i2) -> (c: i1) {
    %0 = comb.extract %b from 1 : (i2) -> i1
    %1 = comb.extract %b from 0 : (i2) -> i1
    %2 = comb.or %0, %1 {sv.namehint = "_b"} : i1
    hw.output %2 : i1
  }
}

This produces:

Labels: enhancement, good first issue, Comb
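For reference, one plausible reduced form (a hedged sketch; the exact target op is an assumption, not taken from the issue) folds the per-bit extracts and the or into a single compare-against-zero, a common way to express an or-reduction in the comb dialect:

  hw.module @Foo(%a: i1, %b: i2) -> (c: i1) {
    // Hypothetical optimized form: the or-reduction over all bits of %b
    // is expressed as a != 0 comparison instead of extract + or.
    %c0_i2 = hw.constant 0 : i2
    %0 = comb.icmp ne %b, %c0_i2 : i2
    hw.output %0 : i1
  }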
torch-mlir

cathyzhyi commented Mar 9, 2022

The existing code assumes the result tensor type is the same as the input type for a few aten ops such as log, exp, and erf. See https://github.com/llvm/torch-mlir/blob/486f95e84f587d020ba789b071b12f890510f1a1/lib/Dialect/Torch/Transforms/RefineTypes.cpp#L221-L235
This is incorrect. The result tensor of these ops should always have the default dtype rather than the same dtype as the input. E2E tests fo…

Labels: good first issue, help wanted
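The intended rule can be sketched in a few lines of Python (the helper and constant names here are hypothetical; the real fix belongs in RefineTypes.cpp):

```python
# Hypothetical sketch of the dtype rule the issue asks for: ops like
# aten.log / aten.exp / aten.erf always produce the default dtype
# (typically float32), regardless of the input tensor's dtype.
DEFAULT_DTYPE = "float32"
ALWAYS_FLOAT_OPS = {"aten.log", "aten.exp", "aten.erf"}

def refined_result_dtype(op: str, input_dtype: str) -> str:
    """Return the result dtype type refinement should assign to `op`."""
    if op in ALWAYS_FLOAT_OPS:
        return DEFAULT_DTYPE  # always the default dtype, not the input dtype
    return input_dtype  # other elementwise ops keep the input dtype

print(refined_result_dtype("aten.log", "int64"))  # float32, not int64
```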
hanchenye commented Oct 17, 2021

In test/create-cores/test_dma1.mlir, the -aie-lower-memcpy pass converts

  AIE.memcpy @token0(1, 2) (%t11 : <%buf0, 0, 256>, %t22 : <%buf1, 0, 256>) : (memref<256xi32>, memref<256xi32>)
  AIE.memcpy @token1(1, 2) (%t11 : <%buf0, 0, 256>, %t33 : <%buf2, 0, 256>) : (memref<256xi32>, memref<256xi32>)

to the following (only the %t11 side is shown):

  %2 = AIE.mem(%0) {
    %15 = AIE.dmaStart(MM2S0, ^bb1
Labels: bug, good first issue
