Hello team, I'm currently exploring solutions to serve a custom model and would appreciate your insights on whether my use case is feasible with text-generation-inference (TGI).

My model requires custom embedding logic that depends not only on the token IDs but also on each token's position in the input sequence. I implemented this in a Hugging Face model by customizing the embedding layer's forward pass.

Now, my question is: how feasible is it to replicate this logic with TGI? I initially tried vLLM, but found it challenging to access the full sequence of tokens at embedding time. Any guidance on whether TGI supports this type of positional logic, or on a recommended way to achieve it, would be greatly appreciated!

Thanks in advance!
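For context, here is a minimal sketch of the kind of customization I mean, written against plain transformers rather than TGI. The wrapper name `CustomPositionalEmbedding` and the per-position scaling are illustrative stand-ins for my actual logic, not part of any library:

```python
# Illustrative sketch: wrap a Hugging Face model's input embedding so the
# output depends on each token's position in the full sequence.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer


class CustomPositionalEmbedding(nn.Module):
    """Wraps the original token embedding and applies a learned,
    position-dependent transformation before the hidden states
    enter the transformer layers."""

    def __init__(self, token_embedding: nn.Embedding, max_positions: int = 4096):
        super().__init__()
        self.token_embedding = token_embedding
        hidden_size = token_embedding.embedding_dim
        # Per-position scale as a stand-in for the real custom logic.
        self.position_scale = nn.Embedding(max_positions, hidden_size)
        nn.init.ones_(self.position_scale.weight)

    def forward(self, input_ids: torch.LongTensor) -> torch.Tensor:
        # Recompute position ids from the full sequence; this is the part
        # that becomes hard when a serving engine only hands the embedding
        # layer chunks of tokens rather than the whole prompt.
        positions = torch.arange(input_ids.shape[-1], device=input_ids.device)
        positions = positions.unsqueeze(0).expand_as(input_ids)
        return self.token_embedding(input_ids) * self.position_scale(positions)


model_name = "gpt2"  # any causal LM works for this sketch
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Swap the input embedding for the custom one.
model.set_input_embeddings(CustomPositionalEmbedding(model.get_input_embeddings()))

inputs = tokenizer("Hello team", return_tensors="pt")
with torch.no_grad():
    out = model(**inputs)
print(out.logits.shape)
```

In eager transformers this is straightforward because the forward pass always sees the complete `input_ids`; my question is whether TGI's model implementations expose an equivalent hook where the full positional context is still available.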