Issues: microsoft/onnxruntime
onnxruntime causing high CPU while typing in Visual Studio 17.8.4
#19132 · opened Jan 13, 2024 by Shashank231190 · labels: platform:windows
SIGSEGV when calling onnxValue.close()
#19125 · opened Jan 13, 2024 by lambdrew · labels: api:Java
Engine successfully converts to ONNX, but onnxruntime returns an error when attempting to run inference
#19119 · opened Jan 12, 2024 by ninono12345 · labels: ep:CUDA, ep:TensorRT, platform:windows
ORT returns incorrect result for UINT8 MatMul on specific CPU
#19109 · opened Jan 12, 2024 by arui-yyz · labels: core runtime
[Performance] The CUDA stream cannot be set through the Python API
#19094 · opened Jan 11, 2024 by gedoensmax · labels: ep:CUDA, ep:TensorRT
[Documentation] Explanation of the MoE operator
#19091 · opened Jan 11, 2024 by chuanxiangWei · labels: documentation
[Training] On-device training doesn't work with INT8 models
#19078 · opened Jan 10, 2024 by IzanCatalan · labels: ep:CUDA, platform:mobile, training
cudaMemcpyAsync throws exception in GPUDataTransfer
#19076 · opened Jan 10, 2024 by laxnpander · labels: ep:CUDA
[Build] onnxruntime inference on dynamically sized images on Windows
#19075 · opened Jan 10, 2024 by TwinkleStarst · labels: build, core runtime, platform:windows
How can I use an ONNX model for inference with an AMD GPU on Windows?
#19061 · opened Jan 9, 2024 by francismelon · labels: ep:CUDA, ep:DML, platform:windows
[Documentation] [Question] Why can't some tests be run in parallel?
#19042 · opened Jan 8, 2024 by claeyz · labels: core runtime, documentation
[Documentation] Both new Llama-7B examples are now broken
#19040 · opened Jan 8, 2024 by ricpruss · labels: documentation, model:transformer
Exception in GradientBuilderBase when exporting a PyTorch vision-transformer gradient graph for training
#19038 · opened Jan 8, 2024 by pan-mic · labels: training
[Build] Linux x86_64 static build
#19035 · opened Jan 7, 2024 by abdullahaygun · labels: build
Freeing tensor data created via CreateTensor
#19034 · opened Jan 6, 2024 by vymao · labels: core runtime
[Build] Error when building a NuGet package with OpenVINO and DML
#19031 · opened Jan 6, 2024 by xeeetu · labels: build, ep:DML, ep:OpenVINO, platform:windows
[Build] Error executing "Run" while deploying the EfficientAD anomaly detection algorithm
#19030 · opened Jan 6, 2024 by B1SH0PP · labels: build
How to transform Ort::Value (int64) into cv::Mat (or cv::cuda::Mat) in C++ with CUDA?
#19029 · opened Jan 6, 2024 by Koruvika · labels: ep:CUDA, platform:windows
[Performance] It is not possible to use a discrete graphics card with DML
#19025 · opened Jan 5, 2024 by NeuralAIM · labels: ep:DML, platform:windows
[Performance] Inference session creation takes too long
#19022 · opened Jan 5, 2024 by neNasko1 · labels: core runtime
[Error: Exception in HostFunction: <unknown>] while running ORT models in React Native
#19021 · opened Jan 5, 2024 by eumentis-madhurzanwar · labels: api:Javascript, platform:mobile
ONNX Runtime inference on string input
#19006 · opened Jan 4, 2024 by giuseppeboezio · labels: platform:windows
Unknown exception encountered during initialization when using the OpenVINO EP
#19004 · opened Jan 4, 2024 by yanzhechen · labels: ep:OpenVINO
Incorrect result for converted FP16 model with Conv op when run on arm64 Linux with onnxruntime >= 1.15.0
#18992 · opened Jan 3, 2024 by jasonkit · labels: ep:ArmNN
[Training] [On-device-training] Is it possible to build an onnxruntime-training Python module without onnx and torch dependencies?
#18991 · opened Jan 3, 2024 by ahlzouao · labels: training