
Vulkan backend inference is non-deterministic with stabilityai/sd-vae-ft-mse exported via ExecuTorch (no quantization used) #15344

Open

Labels: module: vulkan (Issues related to the Vulkan delegate and code under backends/vulkan/)

Opened by @JinKyungEun000

Description

🐛 Describe the bug

Summary

I exported stabilityai/sd-vae-ft-mse (the Stable Diffusion VAE) with ExecuTorch and ran it with the Vulkan backend on Android.
Even though I did not enable any quantization, the model's reconstruction output changes between runs with the same input. I initially suspected FP16 casting or a quantization side effect, but the original Hugging Face model is not FP16-only, and I did not apply quantization at export or at runtime.

Additionally:

I modified the model so that the attention in the middle block is replaced with a skip (bypass); a sketch of this change follows the list below.

In a Python (PyTorch) environment, this modified model is deterministic and never outputs all-zeros; it only shows a ≈0.1 dB PSNR drop compared to the original.
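For reference, a minimal sketch of the bypass, assuming the diffusers `AutoencoderKL` implementation (the `mid_block.attentions` module paths may differ across diffusers versions):

```python
import torch
from diffusers import AutoencoderKL


class SkipAttention(torch.nn.Module):
    """Identity module that stands in for the mid-block attention."""

    def forward(self, hidden_states, *args, **kwargs):
        # Pass activations through unchanged; extra attention kwargs are ignored.
        return hidden_states


vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")
# Bypass attention in both the encoder and decoder mid blocks.
vae.encoder.mid_block.attentions[0] = SkipAttention()
vae.decoder.mid_block.attentions[0] = SkipAttention()
vae.eval()
```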

I’d like to confirm:

Whether the Vulkan backend may implicitly cast FP32 → FP16 at any point, and

Why the decoding output flips between all-zeros and a normal-looking image across runs.

Model

Base model: stabilityai/sd-vae-ft-mse

Modification: middle block attention replaced with a skip/bypass.

Confirmed on Hugging Face that default weights are not forced to FP16.

Exported with ExecuTorch; no quantization enabled (a minimal export sketch follows).
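The export path looked roughly like the sketch below. The partitioner import path and helpers follow the ExecuTorch Vulkan docs and may differ across versions; `vae` and the latent shape are placeholders for my actual setup:

```python
import torch
from executorch.backends.vulkan.partitioner.vulkan_partitioner import VulkanPartitioner
from executorch.exir import to_edge_transform_and_lower

# Export the eager decoder to an ExportedProgram (no quantization anywhere).
example_latents = torch.randn(1, 4, 64, 64)
exported = torch.export.export(vae.decoder.eval(), (example_latents,))

# Lower to the Vulkan delegate and serialize the .pte for the Android runtime.
edge = to_edge_transform_and_lower(exported, partitioner=[VulkanPartitioner()])
with open("vae_decoder_vulkan.pte", "wb") as f:
    f.write(edge.to_executorch().buffer)
```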

What I did

Load an image normalized to [-1, 1] and feed it into the exported VAE encoder.

Compute latents via the reparameterization trick (mean/std logged; samples drawn via a uniform→quantile, i.e. inverse-CDF, mapping).

Feed latents into the decoder.

Repeatedly run the exact same pipeline (same inputs, same seeds/config); a minimal eager-PyTorch reference sketch follows.
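A sketch of that pipeline in eager PyTorch, assuming the diffusers model from above (on device, the exported ExecuTorch encoder/decoder programs were driven the same way; `input.png` is a placeholder):

```python
import torch
from PIL import Image
from torchvision.transforms.functional import to_tensor

torch.manual_seed(0)  # same seed/config on every run

# 1. Load an image and normalize to [-1, 1].
img = to_tensor(Image.open("input.png").convert("RGB")).unsqueeze(0)
img = img * 2.0 - 1.0

# 2. Encode and log the latent distribution.
posterior = vae.encode(img).latent_dist
mean, std = posterior.mean, posterior.std
print("latent mean/std:", mean.mean().item(), std.mean().item())

# 3. Reparameterize via a uniform -> quantile (inverse-CDF) mapping.
u = torch.rand_like(mean).clamp(1e-6, 1 - 1e-6)
z = torch.distributions.Normal(mean, std).icdf(u)

# 4. Decode and inspect output stats.
recon = vae.decode(z).sample
print("recon min/max/mean/std:",
      recon.min().item(), recon.max().item(),
      recon.mean().item(), recon.std().item())
```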

Expected behavior

Deterministic (or at least stable) reconstruction given the same inputs, with no hidden quantization if I didn’t request it.

Actual behavior

Run A: The reconstruction is all zeros (min=max=mean=std=0).

Run B: The reconstruction looks normal (non-zero stats), using the exact same inputs and code path.

Encoder/latent stats are stable across runs; the divergence appears at or after decoding. A simple check for this failure mode is sketched below.
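A minimal check, where `recon_a` and `recon_b` are placeholders for the decoded tensors from the two runs:

```python
import torch

def is_all_zero(t: torch.Tensor) -> bool:
    # The failing runs produce exactly min = max = mean = std = 0.
    return t.abs().max().item() == 0.0

# recon_a / recon_b: decoder outputs from two runs with identical inputs.
print("run A all-zero:", is_all_zero(recon_a))
print("run B all-zero:", is_all_zero(recon_b))
print("max |A - B|:", (recon_a - recon_b).abs().max().item())
```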

Logs

Two back-to-back runs with identical inputs (trimmed; timestamps differ).
You can see encoder input and latent stats match, but the final reconstruction differs:

[Screenshot of the two runs' logs attached in the original issue.]

Notes / Questions

I did not enable ExecuTorch quantization or any PTQ/QAT recipes.

Is the Vulkan delegate performing implicit FP16 execution or down-casting on devices with FP16-favored paths?

Are there known non-deterministic kernels or uninitialized buffer issues in the Vulkan backend that could produce an all-zero output intermittently on decoder passes?

Any recommended flags to force FP32 end-to-end (or to disable FP16 fast-math/relaxed precision) when using Vulkan with ExecuTorch?
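As a diagnostic, I can also run the exported .pte repeatedly from Python to separate export-time problems from the Android/Vulkan runtime. A sketch assuming the `executorch.runtime` Python API (names per the ExecuTorch docs; running a Vulkan-delegated program this way requires a host build with the Vulkan backend registered, so availability may vary):

```python
import torch
from executorch.runtime import Runtime

runtime = Runtime.get()
program = runtime.load_program("vae_decoder_vulkan.pte")
method = program.load_method("forward")

z = torch.randn(1, 4, 64, 64)  # fixed latent input for every run
outputs = [method.execute([z])[0] for _ in range(5)]

# If these stats differ, the non-determinism is not specific to the Android device.
for i, out in enumerate(outputs):
    print(f"run {i}: mean={out.mean().item():.6f} std={out.std().item():.6f}")
```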

Versions

Collecting environment information...
PyTorch version: 2.10.0.dev20251015+cpu
Is debug build: False
CUDA used to build PyTorch: Could not collect
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.6 LTS (x86_64)
GCC version: (Ubuntu 9.4.0-1ubuntu1~20.04.2) 9.4.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.31.6
Libc version: glibc-2.31

Python version: 3.10.19 | packaged by conda-forge | (main, Oct 13 2025, 14:08:27) [GCC 14.3.0] (64-bit runtime)
Python platform: Linux-5.15.0-139-generic-x86_64-with-glibc2.31
Is CUDA available: False
CUDA runtime version: Could not collect
CUDA_MODULE_LOADING set to: N/A
GPU models and configuration: GPU 0: NVIDIA RTX A6000
Nvidia driver version: 570.133.20
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.6
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.6
Is XPU available: False
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True

CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
Address sizes: 43 bits physical, 48 bits virtual
CPU(s): 96
On-line CPU(s) list: 0-95
Thread(s) per core: 1
Core(s) per socket: 48
Socket(s): 2
NUMA node(s): 2
Vendor ID: AuthenticAMD
CPU family: 23
Model: 49
Model name: AMD EPYC 7642 48-Core Processor
Stepping: 0
Frequency boost: enabled
CPU MHz: 1500.000
CPU max MHz: 2300.0000
CPU min MHz: 1500.0000
BogoMIPS: 4600.34
Virtualization: AMD-V
L1d cache: 3 MiB
L1i cache: 3 MiB
L2 cache: 48 MiB
L3 cache: 512 MiB
NUMA node0 CPU(s): 0-47
NUMA node1 CPU(s): 48-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Mitigation; untrained return thunk; SMT disabled
Vulnerability Spec rstack overflow: Mitigation; SMT disabled
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl and seccomp
Vulnerability Spectre v1: Mitigation; usercopy/swapgs barriers and __user pointer sanitization
Vulnerability Spectre v2: Mitigation; Retpolines; IBPB conditional; STIBP disabled; RSB filling; PBRSB-eIBRS Not affected; BHI Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid aperfmperf rapl pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_llc mwaitx cpb cat_l3 cdp_l3 hw_pstate ssbd mba ibrs ibpb stibp vmmcall fsgsbase bmi1 avx2 smep bmi2 cqm rdt_a rdseed adx smap clflushopt clwb sha_ni xsaveopt xsavec xgetbv1 xsaves cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local clzero irperf xsaveerptr rdpru wbnoinvd arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif v_spec_ctrl umip rdpid overflow_recov succor smca sme sev sev_es

Versions of relevant libraries:
[pip3] executorch==1.1.0a0+4421558
[pip3] numpy==2.2.6
[pip3] pytorch_tokenizers==0.1.0
[pip3] torch==2.10.0.dev20251015+cpu
[pip3] torchao==0.14.0+git01849b2b1
[pip3] torchaudio==2.8.0.dev20251015+cpu
[pip3] torchdata==0.11.0
[pip3] torchsr==1.0.4
[pip3] torchtune==0.6.1
[pip3] torchvision==0.25.0.dev20251015+cpu
[conda] executorch 1.1.0a0+4421558 pypi_0 pypi
[conda] numpy 2.2.6 pypi_0 pypi
[conda] pytorch-tokenizers 0.1.0 pypi_0 pypi
[conda] torch 2.10.0.dev20251015+cpu pypi_0 pypi
[conda] torchao 0.14.0+git01849b2b1 pypi_0 pypi
[conda] torchaudio 2.8.0.dev20251015+cpu pypi_0 pypi
[conda] torchdata 0.11.0 pypi_0 pypi
[conda] torchsr 1.0.4 pypi_0 pypi
[conda] torchtune 0.6.1 pypi_0 pypi
[conda] torchvision 0.25.0.dev20251015+cpu pypi_0 pypi

cc @SS-JIA @manuelcandales @digantdesai @cbilgin
