I have a question re: guidance for implementers.
Is the intent behind the current InferenceModel and InferencePool design the following?
- There is namespace isolation between base models: specifically, each base model gets deployed in its own k8s namespace.
- There is an InferencePool that targets a given base model. So, exactly one inference pool (and one base model) per k8s namespace.
- There can be multiple LoRA adapters for a given base model. All LoRA adapters must be loaded onto all pods for the given base model.
I'm not sure about upcoming enhancements to the CRDs, but I am trying to understand whether the above is the manner in which the current CRDs are intended to be used.
Thanks in advance for your clarifications!
-
The InferencePool is a grouping of compute, typically model servers that all share the same base model, yes. And it is namespace-scoped, also yes. But just to clarify: it is intended to be possible to have multiple InferencePools in the same namespace, even if the pools serve the same base model, as long as there are no overlapping model servers (pods) in their selectors. So 2 is not correct (unless there is a bug I'm unaware of).
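For illustration, here is a minimal sketch of two InferencePools coexisting in one namespace with disjoint selectors. This assumes the v1alpha2 API shape (field names may differ across versions); the pool names, namespace, and labels are hypothetical:

```yaml
# Sketch: two InferencePools coexisting in the namespace "ml-serving".
# Their selectors are disjoint (shard=a vs shard=b), so no model server
# pod is claimed by both pools.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llama-pool-a            # hypothetical name
  namespace: ml-serving         # hypothetical namespace
spec:
  targetPortNumber: 8000
  selector:
    app: llama-server
    shard: "a"
  extensionRef:
    name: llama-epp-a           # this pool's endpoint picker Service
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llama-pool-b
  namespace: ml-serving
spec:
  targetPortNumber: 8000
  selector:
    app: llama-server
    shard: "b"
  extensionRef:
    name: llama-epp-b
```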
Point 3 is currently correct.
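For completeness, a minimal sketch of how a LoRA adapter is exposed through an InferenceModel bound to a pool (again assuming the v1alpha2 shape; the model, adapter, and pool names are hypothetical). Note the CRD only handles routing; the adapter itself still needs to be loaded on every pod the pool selects:

```yaml
# Sketch: an InferenceModel that maps a client-facing model name to a
# LoRA adapter served by one pool. Every pod selected by llama-pool-a
# is expected to have the adapter loaded.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: food-review             # hypothetical
  namespace: ml-serving
spec:
  modelName: food-review        # name clients send in the request body
  criticality: Standard
  poolRef:
    name: llama-pool-a          # the pool serving the base model
  targetModels:
    - name: food-review-lora-v1 # LoRA adapter name as loaded on the servers
      weight: 100
```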
-
Follow-up question.
Re: the EPP, is the intent to possibly have a single EPP deployment that can be referenced by multiple InferencePools (which may be created in different namespaces)?
-
This has come up quite a bit; I think the jury is still out. Personally, I'm concerned that multi-tenancy could turn out to be an anti-pattern, as it creates a single point of failure and amplifies any scale issues that may occur.
For context: we intend to support more inference-routing-specific features, such as Prefix Aware Routing, which will require quite a bit of memory on the EPP. Additionally, we expect to have callouts for things like RAG or tokenization of the input (just as examples). These will add quite a bit more computational and memory overhead, so I think a multi-tenant EPP would hit scale limits faster.
-
I added it to the agenda for our weekly Thursday meeting, as this has come up enough recently. If you have time to join and have opinions, we would love to hear them there.
Meeting info here: https://github.com/kubernetes-sigs/gateway-api-inference-extension?tab=readme-ov-file#contributing
-
It seems to me that InferencePools inside the same namespace should have the option of referring to the same EPP.
This enables isolation across namespaces, and also reuse of the EPP within a single namespace (see the sketch below).
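As a hedged sketch of what that could look like under the assumed v1alpha2 shape (names are hypothetical, and whether a shared EPP is actually supported is exactly the open question here), two pools in one namespace pointing their extensionRef at the same EPP Service:

```yaml
# Sketch: two pools in one namespace whose extensionRef points at the
# same EPP Service ("shared-epp"). Whether this is a supported pattern
# is the open question discussed above.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: pool-a                  # hypothetical
  namespace: ml-serving
spec:
  targetPortNumber: 8000
  selector:
    app: model-server-a
  extensionRef:
    name: shared-epp            # shared endpoint picker
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: pool-b
  namespace: ml-serving
spec:
  targetPortNumber: 8000
  selector:
    app: model-server-b
  extensionRef:
    name: shared-epp            # same Service as pool-a
```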
-
We discussed this some in the OSS meeting today; I can link the recording when it is available. Do you have a use case for reusing the EPP within a namespace? Is it simpler ops?
-
@sriumcp I converted this to a discussion, since that seems more appropriate for now; we can siphon off actionables as they come up.