Is there a way to run vLLM without a torch.compiled model? #11051
carlesoctav announced in Q&A
I'm trying to debug with print statements, but that doesn't work well on a torch.compiled model.
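For context, here is a minimal sketch of the problem being described (the Toy module is a hypothetical illustration, not code from this discussion): a print inside a compiled region forces a Dynamo graph break, and with fullgraph=True it raises outright, which is what makes print-debugging a compiled model awkward.

```python
import torch

class Toy(torch.nn.Module):
    def forward(self, x):
        print("x shape:", x.shape)  # builtin print forces a graph break under torch.compile
        return x * 2

eager = Toy()
compiled = torch.compile(eager)                 # default mode: print runs, but breaks the graph
strict = torch.compile(eager, fullgraph=True)   # calling this would raise on the graph break

compiled(torch.ones(3))  # prints, at the cost of a graph break and possible recompiles
```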
Replies: 1 comment
Set the environment variable VLLM_USE_V1=0 to fall back to the V0 engine, which does not run the model through torch.compile.
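For reference, a minimal sketch of how this reply might be applied, assuming vLLM's offline LLM API (the model name and the enforce_eager flag are illustrative additions, not part of the reply). The variable must be set before vllm is imported; alternatively, run the script as VLLM_USE_V1=0 python script.py.

```python
import os
os.environ["VLLM_USE_V1"] = "0"  # select the V0 (non-torch.compile) engine

from vllm import LLM, SamplingParams

# enforce_eager=True additionally skips CUDA graph capture, so print
# statements placed inside model code behave as expected during debugging.
llm = LLM(model="facebook/opt-125m", enforce_eager=True)
outputs = llm.generate(["Hello, my name is"], SamplingParams(max_tokens=16))
print(outputs[0].outputs[0].text)
```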