
EPP Multi-tenancy #736

sriumcp started this conversation in Ideas
Apr 22, 2025 · 6 comments · 1 reply

I have a question re: guidance for implementers.

Is the intent behind the current inference model and inference pool design the following?

  1. There is namespace isolation between base models: specifically, each base model gets deployed in its own k8s namespace.
  2. There is an InferencePool that targets a given base model. So, exactly one inference pool (and one base model) per k8s namespace.
  3. There can be multiple LoRA adapters for a given base model. All LoRA adapters must be loaded onto all pods for the given base model.
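
For concreteness, here is a sketch of the usage I have in mind for point 3: one InferenceModel per LoRA adapter, pointing at the pool for the base model. Field names follow the v1alpha2 API as I understand it; all names and values are illustrative, not from a real deployment:

```yaml
# Illustrative only: an InferenceModel exposing a LoRA adapter that is
# loaded on every pod in the base model's InferencePool (point 3 above).
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferenceModel
metadata:
  name: chatbot                  # hypothetical name
  namespace: llama-ns            # the base model's own namespace (point 1)
spec:
  modelName: chatbot             # model name clients send in requests
  criticality: Standard
  poolRef:
    name: llama-pool             # the one InferencePool in this namespace (point 2)
  targetModels:
    - name: chatbot-lora-v1      # LoRA adapter served by all pods in the pool
      weight: 100
```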

I'm not sure about upcoming enhancements to the CRDs, but I am trying to understand whether the above is how the current CRDs are intended to be used.

Thanks in advance for your clarifications!


The InferencePool is a grouping of compute, typically model servers that all share the same base model, yes, and it is namespace scoped, also yes. But to clarify: it is intended that you can have multiple InferencePools in the same namespace, even if the pools serve the same base model, so long as there are no overlapping model servers (pods) in their selectors. So point 2 is not correct (unless there is a bug I'm unaware of).

Point 3 is currently correct.
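
To make the non-overlapping-selector point concrete, here is a sketch of two InferencePools for the same base model in one namespace. This assumes the v1alpha2 field names (`selector`, `targetPortNumber`, `extensionRef`); all names are illustrative:

```yaml
# Illustrative: two pools in one namespace, same base model, disjoint selectors.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llama-pool-a
  namespace: llama-ns
spec:
  selector:
    app: vllm-llama-a            # matches only pool A's pods
  targetPortNumber: 8000
  extensionRef:
    name: epp-a                  # EPP Service for pool A
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: llama-pool-b
  namespace: llama-ns
spec:
  selector:
    app: vllm-llama-b            # disjoint from pool A's selector
  targetPortNumber: 8000
  extensionRef:
    name: epp-b
```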


Follow up question.

Re: the EPP, is the intent to possibly have a single EPP deployment that can be referenced by multiple InferencePools (which may be created in different namespaces)?


This has come up quite a bit; I think the jury is still out. Personally, I'm concerned that multi-tenancy could turn out to be an anti-pattern, as it creates a single point of failure and amplifies any scale issues that may occur.

For context: we intend to support more inference-routing-specific features such as prefix-aware routing, which will require quite a bit of memory on the EPP. Additionally, we expect to have callouts for things like RAG or tokenization of the input (just as examples). This will require quite a bit more computational and memory overhead, so I think a multi-tenant EPP would hit scale limits faster.


I added it to the agenda for our weekly Thursday meeting, as this has come up enough recently. If you have time to join and have opinions, we would love to hear them there.

meeting info here: https://github.com/kubernetes-sigs/gateway-api-inference-extension?tab=readme-ov-file#contributing


It seems to me that InferencePools inside the same namespace should have the option of referring to the same EPP.

This enables isolation across namespaces, and also reuse of the EPP within a single namespace.
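
As a sketch, that would look like two pools whose `extensionRef` points at the same EPP Service. This assumes the v1alpha2 `extensionRef` field works this way; names are illustrative:

```yaml
# Illustrative: two pools in one namespace sharing a single EPP deployment.
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: pool-a
  namespace: team-ns
spec:
  selector:
    app: server-a
  targetPortNumber: 8000
  extensionRef:
    name: shared-epp             # same EPP Service referenced by both pools
---
apiVersion: inference.networking.x-k8s.io/v1alpha2
kind: InferencePool
metadata:
  name: pool-b
  namespace: team-ns
spec:
  selector:
    app: server-b
  targetPortNumber: 8000
  extensionRef:
    name: shared-epp
```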


We discussed this some in the OSS meeting today; I can link the recording once it is available. Do you have a use case for reusing the EPP within a namespace? Is it simpler ops?


@sriumcp I converted this to a discussion since that seems more appropriate for now; we can siphon off actionables as they come up.

This discussion was converted from issue #724 on April 24, 2025 19:49.
