Commit 3e69319

llama : update llama_decode_internal ref [no ci] (ggml-org#11840)
This commit updates the comment in llama_kv_cache.h to reflect the change of the function name from llama_decode_internal to llama_decode_impl.
1 parent a394039 · commit 3e69319

File tree

1 file changed, +1 -1 lines changed

src/llama-kv-cache.h

Lines changed: 1 addition & 1 deletion

@@ -37,7 +37,7 @@ struct llama_kv_cache {
     bool can_shift = false;

     // Note: The value of head isn't only used to optimize searching
-    // for a free KV slot. llama_decode_internal also uses it, so it
+    // for a free KV slot. llama_decode_impl also uses it, so it
     // cannot be freely changed after a slot has been allocated.
     uint32_t head = 0;
     uint32_t size = 0;
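
The comment above touches on a real constraint: the slot search starts at head as an optimization, and decoding later relies on head pointing at the slot that was just allocated. Below is a minimal, hypothetical C++ sketch of that dependency, not llama.cpp's actual implementation; the names kv_cache_sketch, find_slot, and decode_sketch are invented stand-ins for the real llama_kv_cache machinery and llama_decode_impl.

// Hypothetical sketch: why a KV-cache "head" index cannot be changed freely
// once a slot has been allocated. find_slot() begins scanning at head and
// leaves head pointing at the chosen slot; decode_sketch() then reads head
// to locate the cells that were just assigned.
#include <cstdint>
#include <cstdio>
#include <vector>

struct kv_cell {
    int32_t pos = -1;                 // -1 means the cell is free
};

struct kv_cache_sketch {
    uint32_t head = 0;                // search start / start of the last allocated slot
    uint32_t size = 0;
    std::vector<kv_cell> cells;

    explicit kv_cache_sketch(uint32_t n) : size(n), cells(n) {}

    // Find n_tokens consecutive free cells, scanning from head first.
    bool find_slot(uint32_t n_tokens) {
        for (uint32_t attempt = 0; attempt < size; ++attempt) {
            const uint32_t start = (head + attempt) % size;
            if (start + n_tokens > size) {
                continue;             // keep the sketch simple: no wrap-around
            }
            bool free_run = true;
            for (uint32_t i = 0; i < n_tokens; ++i) {
                if (cells[start + i].pos != -1) { free_run = false; break; }
            }
            if (free_run) {
                for (uint32_t i = 0; i < n_tokens; ++i) {
                    cells[start + i].pos = (int32_t) i;
                }
                head = start;         // decode_sketch below relies on this value
                return true;
            }
        }
        return false;
    }
};

// Stand-in for the part of decoding that consumes head: it assumes head
// still points at the slot chosen by find_slot.
void decode_sketch(const kv_cache_sketch & cache, uint32_t n_tokens) {
    std::printf("attending over cells [%u, %u)\n", cache.head, cache.head + n_tokens);
}

int main() {
    kv_cache_sketch cache(8);
    if (cache.find_slot(4)) {
        decode_sketch(cache, 4);      // correct only because head was left untouched
    }
    return 0;
}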

0 commit comments
