Codeberg/Community
LFS easily hits request limits, triggering 429 codes #2242

Open
opened 2025-11-30 12:43:02 +01:00 by bobbens · 3 comments


We've been trying to reduce bandwidth usage at [naev](https://codeberg.org/naev/naev) by switching to LFS. The results are great in terms of bandwidth and disk space savings, but there's one issue: the assets repo has 2000+ small files in LFS (total <200 MiB). These files apparently count towards the request limit, which I believe is 4,000 requests per 30 minutes, so just doing a checkout plus a push triggers the temporary 429 status. This is not something we ran into before using LFS, despite the repo being almost 3x larger then.

We've been able to work around this in continuous integration with heavy caching; however, I still hit it personally while working on the repository. My question is: **would it be possible to split LFS usage from normal browsing for the request limit**, or is there some other way to mitigate the issue for repos with many small files? As an extreme example, a repo with 4,000 small LFS files could not be cloned in one go, which is never an issue with a plain git repo no matter how large it is.

Owner

Thank you for the feedback, I acknowledge this being a problem. I assume the limit is mostly hit on initial clones, and updates of large repos only fetch a few new objects?

I haven't looked into how pushing LFS files works. Does it make a request for every file to check whether it already exists on the server?

Author

@fnetX wrote in #2242 (comment):

> I assume the limit mostly happens on initial clones, and updates of large repos only fetches a few new objects?

Yes, it was quite a problem with our CI actions, but we were able to solve it with lots of caching (which is better in the long run, I guess).

> Does it do a request for every file to see if it already exists on the server?

Sorry for the confusion; by push, I meant pushing changes to many small files. I do not believe pushing without changing files would trigger it.

I do not think it makes a request for every file, only when it sees a pointer to a file that is not locally cached.
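The CI caching mentioned above commonly looks something like the following sketch, written in GitHub-style Actions syntax (which Forgejo Actions is largely compatible with). The step names and the key file are illustrative, not taken from naev's actual CI; the idea is to cache `.git/lfs` keyed on the current set of LFS pointers, so `git lfs pull` only contacts the server for cache misses.

```yaml
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          lfs: false                       # clone pointers only, no smudging
      - name: Compute LFS cache key
        run: git lfs ls-files --long | cut -d ' ' -f1 | sort > .lfs-assets-id
      - uses: actions/cache@v4
        with:
          path: .git/lfs
          key: lfs-${{ hashFiles('.lfs-assets-id') }}
      - name: Fetch only uncached LFS objects
        run: git lfs pull
```

Outside CI, `GIT_LFS_SKIP_SMUDGE=1 git clone …` followed by a later `git lfs pull` achieves a similar split between cloning and LFS transfers.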

Owner

I have excluded LFS from rate limiting for now. But indeed, caching the files locally is much better for us than downloading 2 GB of assets on every CI run.
