Struggle with ONNX Runtime not running on CUDA

Waifu Diffusion runs under ONNX Runtime, but ONNX Runtime refuses to run on the GPU.

I tracked this down to


        # initialize the C++ InferenceSession
        sess.initialize_session(providers, provider_options, disabled_optimizers)

Since the code vanishes into C++ at this point, I cannot debug further and can only blame Microsoft.

>>> providers
['CUDAExecutionProvider', 'CPUExecutionProvider']
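For reference, this providers list is what gets handed to the session in priority order. A minimal sketch of the supported way to request CUDA, assuming onnxruntime's documented `InferenceSession` API (the `device_id` option and the model path are illustrative assumptions, not taken from my setup):

```python
# Providers in priority order: entries may be plain names or
# (name, options) tuples. ONNX Runtime tries them left to right.
providers = [
    ("CUDAExecutionProvider", {"device_id": 0}),  # try the GPU first (device_id assumed)
    "CPUExecutionProvider",                       # fall back to CPU
]

# With onnxruntime installed, the session would be built like this
# ("wd14.onnx" is a hypothetical model path):
# import onnxruntime as ort
# sess = ort.InferenceSession("wd14.onnx", providers=providers)
# print(sess.get_providers())  # reports which providers were actually applied
```

Checking `sess.get_providers()` after construction shows whether CUDA was silently dropped.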

Even hacks like

        self._providers = ['CUDAExecutionProvider']

do not help. ONNX Runtime is installed as version 1.16.2.


It claims that 'CUDAExecutionProvider' is available:

>>> ort.get_all_providers()
['TensorrtExecutionProvider', 'CUDAExecutionProvider', 'MIGraphXExecutionProvider', 'ROCMExecutionProvider', 'OpenVINOExecutionProvider', 'DnnlExecutionProvider', 'TvmExecutionProvider', 'VitisAIExecutionProvider', 'QNNExecutionProvider', 'NnapiExecutionProvider', 'JsExecutionProvider', 'CoreMLExecutionProvider', 'ArmNNExecutionProvider', 'ACLExecutionProvider', 'DmlExecutionProvider', 'RknpuExecutionProvider', 'WebNNExecutionProvider', 'XnnpackExecutionProvider', 'CANNExecutionProvider', 'AzureExecutionProvider', 'CPUExecutionProvider']
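One caveat here, assuming onnxruntime's documented API: `get_all_providers()` lists every execution provider the project knows about, regardless of what the installed wheel actually contains. `get_available_providers()` is the call that reflects the real build, and the plain `onnxruntime` PyPI package ships without CUDA support; the `onnxruntime-gpu` package is needed for `CUDAExecutionProvider` to actually be available. A small sketch of selecting providers based on what the build supports (the `pick_providers` helper is my own, hypothetical name):

```python
def pick_providers(available):
    """Prefer CUDA when the installed build supports it, else fall back to CPU."""
    preferred = ["CUDAExecutionProvider", "CPUExecutionProvider"]
    picked = [p for p in preferred if p in available]
    return picked or ["CPUExecutionProvider"]

# With onnxruntime installed, the availability list would come from:
# import onnxruntime as ort
# print(pick_providers(ort.get_available_providers()))

# A CPU-only build (plain `onnxruntime` wheel) never offers CUDA:
print(pick_providers(["CPUExecutionProvider"]))  # → ['CPUExecutionProvider']
```

If `get_available_providers()` does not list CUDA, no amount of passing `'CUDAExecutionProvider'` to the session will make it run on the GPU.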

Typical Microsoft Software:
* Does not work
* Shows option
* Does not work with option

Stay tuned.

Since this is not solved, I will try to become independent of Microsoft and find a way to use the WD14 model in PyTorch directly. So please stay tuned for that as well.