microsoft / onnxruntime — open issues
#10876 · FunctionImpl should consider nested subgraphs when updating graph inputs/outputs · opened Mar 15, 2022 by jantonguirao
#10874 · Problems with predictions on MacBook Air with M1 chip in a Maven-based Java project · opened Mar 15, 2022 by pwittchen
#10873 · Does WebGL fail when network input dimensions are not powers of two? [component:ort-web] · opened Mar 15, 2022 by nicollegah
#10856 · DmlExecutionProvider is much slower than CPUExecutionProvider [ep:DML] · opened Mar 13, 2022 by zk-talentech
#10855 · Windows 32-bit performance much slower than 64-bit? [component:build, component:coreruntime] · opened Mar 12, 2022 by ZiyueWangUoB
#10854 · Does NNAPI (armv7a or armv8a) support models containing LSTM? [ep:NNAPI] · opened Mar 12, 2022 by mlinxiang
#10849 · C++ is 10x slower compared with Python, CPU only [component:coreruntime] · opened Mar 11, 2022 by Roios
#10839 · Missing documentation on LayerNormalization contrib spec [component:coreruntime] · opened Mar 10, 2022 by eralmual
#10837 · Plan to support OnnxRuntimeTraining on Windows? [component:training-core] · opened Mar 10, 2022 by chethanpk
#10834 · I can't find an example for GPU multi-input [component:documentation] · opened Mar 10, 2022 by pycoco
#10818 · CMake build doesn't generate a config file [component:build, status:contributions-welcome] · opened Mar 9, 2022 by matteosal
#10805 · ONNX Runtime GPU very poor performance in .NET [component:coreruntime, ep:CUDA] · opened Mar 8, 2022 by sportbilly21
#10789 · Inference time of onnxruntime GPU increases at very high batch sizes [component:coreruntime, ep:CUDA] · opened Mar 7, 2022 by nssrivathsa
#10786 · ONNX models give slower inference with Python multiprocessing [component:coreruntime] · opened Mar 6, 2022 by NikhilBartwal
#10783 · Warning after exporting repeat_interleave to ONNX with dynamic axes · opened Mar 5, 2022 by MartynaKsyta
#10768 · Non-zero status code returned while running LSTM node [component:coreruntime] · opened Mar 4, 2022 by martin3252
#10763 · Can I build successfully on Windows with a GeForce 1060 card, CUDA 11.0, cuDNN 8.0.2? [ep:CUDA] · opened Mar 4, 2022 by cqray1990
#10761 · Shape inference does not work with onnx 1.11.0 due to an ONNX shape inference enhancement [component:coreruntime, release:1.11] · opened Mar 3, 2022 by liqunfu