RAM <-> VRAM paging? #16466

Unanswered
Hexorg asked this question in Q&A

I have a GPU with 12 GB of VRAM and 128 GB of system RAM. I can do ~64 tok/s when the whole model fits on the GPU, but as soon as one layer is on the CPU it drops down to ~12 tok/s.

I couldn't find any discussion of, or approaches for, dynamic paging of model layers - e.g. load the first 12 layers, compute the 12th layer's output, load the next 12 layers, compute the 24th layer's output - all on the GPU. Is such paging really slower than letting the CPU crunch through the numbers?
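
To make the idea concrete, here is a rough sketch of the kind of chunked paging I mean. It's PyTorch-style pseudocode with made-up layer counts and a plain matmul standing in for a transformer layer, not anything llama.cpp actually does:

```python
import torch

# Sketch of chunked layer paging (illustrative only): all weights live in
# pinned host RAM; one chunk of layers at a time is copied into VRAM, the
# activations are pushed through it, and the VRAM is reused for the next chunk.
N_LAYERS, CHUNK, DIM = 48, 12, 2048
cpu_weights = [torch.randn(DIM, DIM, dtype=torch.float16).div_(DIM ** 0.5).pin_memory()
               for _ in range(N_LAYERS)]           # scaled to keep activations finite

x = torch.randn(1, DIM, dtype=torch.float16, device="cuda")
for start in range(0, N_LAYERS, CHUNK):
    # Page the next chunk of weights into VRAM (pinned memory -> async H2D copy)...
    gpu_chunk = [w.to("cuda", non_blocking=True)
                 for w in cpu_weights[start:start + CHUNK]]
    # ...then compute through those layers before loading the following chunk.
    for w in gpu_chunk:
        x = x @ w
    del gpu_chunk                                  # let the allocator reuse this VRAM
torch.cuda.synchronize()
```

The question is whether the host-to-device copies in a loop like this can ever be cheaper than just running those layers on the CPU.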

Replies: 3 comments

Yes, that has been attempted in the past; it is very slow.

For generating tokens you're I/O bound. Loading the data from RAM to VRAM and then from VRAM into the GPU is going to be slower than just loading the weights from RAM into the CPU.
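
Rough numbers make this concrete (all figures below are assumed round numbers, not measurements): if every layer's weights have to cross the PCIe bus once per generated token, the bus bandwidth caps throughput below what the CPU achieves reading the same weights straight out of RAM.

```python
# Back-of-the-envelope upper bounds on tokens/s when all weights must be read
# once per token. Every figure here is an assumption for illustration.
weights_gb = 7.0    # e.g. a ~13B-parameter model quantized to ~4 bits per weight
pcie_gbps  = 25.0   # practical PCIe 4.0 x16 host-to-device bandwidth
ram_gbps   = 60.0   # practical dual-channel DDR5 read bandwidth

print(f"paging weights over PCIe: <= {pcie_gbps / weights_gb:.1f} tok/s")
print(f"CPU streaming from RAM:   <= {ram_gbps / weights_gb:.1f} tok/s")
```

Either path is far below the ~64 tok/s you see when the weights already sit in VRAM and only have to cross the much faster VRAM-to-GPU link.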

Would on-device dynamic decompression be worth looking into? Model parameters, KV cache, etc. are fairly compressible even before quantization. So one would load a compressed model into VRAM (faster I/O) and dynamically decompress the next layer's weights & cached KV in parallel with the inference step. For batches of size N, keep up to N decompressed layers in a circular buffer as values propagate, given the data hazard between layers. Furthermore, a just-in-time prefetcher that loads the next compressed portion of a large model from RAM to VRAM, overwriting stale data, could compensate for the I/O latency. Does this make any sense?
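
A rough sketch of the pipeline idea, in PyTorch, with int8-to-fp16 dequantization standing in for a real GPU decompressor; the layer count, ring size, and plain matmul "layers" are all made up for illustration:

```python
import torch

# Compressed layers stay resident in VRAM; a side stream "decompresses" the
# next layer into one of N_SLOTS reusable buffers while the compute stream
# runs the current layer, respecting the layer-to-layer data hazard via events.
N_SLOTS, N_LAYERS, DIM = 2, 24, 4096
device = "cuda"

compressed = [torch.randint(-128, 128, (DIM, DIM), dtype=torch.int8, device=device)
              for _ in range(N_LAYERS)]                 # stand-in "compressed" weights
scales = [torch.rand((), device=device) / DIM for _ in range(N_LAYERS)]
slots  = [torch.empty(DIM, DIM, dtype=torch.float16, device=device)
          for _ in range(N_SLOTS)]                      # circular buffer of decompressed layers
ready  = [torch.cuda.Event() for _ in range(N_SLOTS)]

decomp_stream  = torch.cuda.Stream()
compute_stream = torch.cuda.current_stream()

def decompress(layer, slot):
    # Wait until the compute already enqueued on this slot has finished, then
    # "decompress" (here: dequantize) the layer into it on the side stream.
    decomp_stream.wait_stream(compute_stream)
    with torch.cuda.stream(decomp_stream):
        slots[slot].copy_(compressed[layer].to(torch.float16) * scales[layer])
        ready[slot].record(decomp_stream)

x = torch.randn(1, DIM, dtype=torch.float16, device=device)
decompress(0, 0)                                        # prime the pipeline
for layer in range(N_LAYERS):
    if layer + 1 < N_LAYERS:                            # overlap: prepare the next layer
        decompress(layer + 1, (layer + 1) % N_SLOTS)
    compute_stream.wait_event(ready[layer % N_SLOTS])
    x = x @ slots[layer % N_SLOTS]                      # placeholder for the real layer compute
torch.cuda.synchronize()
```

The same structure could drive the RAM->VRAM prefetcher: a third stream issues async copies of the next compressed region into a staging ring a few layers ahead of the decompressor.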
