very slow in inference  #10764

@meloxuan

Description

Hello, I am using torch.onnx.export to convert a trained .pth model to ONNX, but I find that inference with the exported model is noticeably slow.

Metadata

    Labels

    core runtime (issues related to core runtime), stale (issues that have not been addressed in a while; categorized by a bot)
