Struggle with ONNX not running on CUDA
Waifu Diffusion runs under ONNX, but ONNX Runtime refuses to run on the GPU.
I tracked this down to
/home/volker/workspace/venvs/img-caption-DfsJQ_L--py3.11/lib/python3.11/site-packages/onnxruntime/capi/onnxruntime_inference_collection.py
# initialize the C++ InferenceSession
sess.initialize_session(providers, provider_options, disabled_optimizers)
Since the code vanishes into C++ here, I cannot debug further and can only blame Microsoft.
>>> providers
['CUDAExecutionProvider', 'CPUExecutionProvider']
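For context, the session is created through the usual user-facing API, roughly like this (the model path is a placeholder, not the actual Waifu Diffusion file):

import onnxruntime as ort

# Placeholder path - the real model file differs.
sess = ort.InferenceSession(
    "model.onnx",
    providers=["CUDAExecutionProvider", "CPUExecutionProvider"],
)
# Despite CUDA being requested first, the session silently falls back to CPU:
print(sess.get_providers())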
Even some hacks like
self._providers = ['CUDAExecutionProvider']
do not help. ONNX Runtime is installed as version 1.16.2 of onnxruntime-gpu.
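One thing worth ruling out (an assumption on my side, not yet confirmed for this venv): having both the CPU wheel onnxruntime and onnxruntime-gpu installed at the same time is a well-known way to end up with the CPU-only build shadowing the GPU one. A quick check:

import importlib.metadata as md

# If both packages report a version, the CPU wheel may shadow the GPU one.
for name in ("onnxruntime", "onnxruntime-gpu"):
    try:
        print(name, md.version(name))
    except md.PackageNotFoundError:
        print(name, "not installed")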
It claims that 'CUDAExecutionProvider' is available:
>>> ort.get_all_providers()
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'OpenVINOExecutionProvider', 'DnnlExecutionProvider', 'TvmExecutionProvider', 'VitisAIExecutionProvider', 'QNNExecutionProvider', 'NnapiExecutionProvider', 'JsExecutionProvider', 'CoreMLExecutionProvider', 'ArmNNExecutionProvider', 'ACLExecutionProvider', 'DmlExecutionProvider', 'RknpuExecutionProvider', 'WebNNExecutionProvider', 'XnnpackExecutionProvider', 'CANNExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
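Note, however, that get_all_providers() lists every provider ONNX Runtime knows about in principle, regardless of what this particular wheel was built with. The honest check is get_available_providers():

import onnxruntime as ort

# Every provider the project defines, even ones this build cannot use:
print(ort.get_all_providers())
# What this installation can actually run; if 'CUDAExecutionProvider' is
# missing here, the CUDA/cuDNN libraries were not found at load time:
print(ort.get_available_providers())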
Typical Microsoft software:
* Does not work
* Shows the option anyway
* Does not work with the option either
Stay tuned at
https://stackoverflow.com/questions/77509215/onnx-not-using-gpu
Since this is not solved, I will try to become independent of Microsoft and find a way to use the WD14 model in PyTorch directly.
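For reference, the direction I have in mind, as a rough sketch only: the checkpoint id below is a placeholder and assumes a PyTorch/timm-loadable export of the WD14 tagger exists.

import timm
import torch

# Placeholder checkpoint id - assumes a timm-compatible WD14 export exists.
model = timm.create_model("hf-hub:SmilingWolf/wd-vit-tagger-v3", pretrained=True)
model = model.eval().to("cuda")  # CUDA via PyTorch, no ONNX Runtime involved

with torch.inference_mode():
    dummy = torch.randn(1, 3, 448, 448, device="cuda")  # WD14 taggers expect 448x448
    print(model(dummy).shape)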
So please stay tuned for that as well.
Cheers,
Volker