We've been trying to reduce bandwidth usage at [naev](https://codeberg.org/naev/naev) by switching to LFS. The results are great in terms of saving bandwidth and disk space, but there's a small issue: the assets repo has 2000+ small files in LFS (total <200 MiB). Apparently, these files count towards the request limit, which I believe is 4,000 requests per 30 minutes, so just doing a checkout plus a push triggers a temporary 429 status. This is not something we ran into when we didn't use LFS, even though the repo was almost 3x larger.
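For context, a rough way to estimate how close a fresh checkout gets to that budget is just to count the LFS-tracked files. A minimal sketch (the per-object counting and the exact 4,000 / 30 min figure are my assumptions, not documented behaviour):

```python
#!/usr/bin/env python3
"""Rough estimate of how many LFS requests a fresh checkout would make,
assuming every object transfer counts once against the limit."""
import subprocess

REQUEST_LIMIT = 4000  # assumed requests allowed per 30-minute window

# `git lfs ls-files` prints one line per LFS-tracked file in the checkout.
lfs_files = subprocess.run(
    ["git", "lfs", "ls-files"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

n = len(lfs_files)
# A checkout downloads each object once; a subsequent push of changed assets
# hits the same endpoint again, so the two together approach the limit fast.
print(f"{n} LFS objects tracked")
print(f"checkout alone uses ~{n} of the assumed {REQUEST_LIMIT} request budget")
```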
We've been able to work around this in continuous integration with heavy caching; however, I still hit it personally while working on the repository. My question is, **would it be possible to count LFS traffic separately from normal browsing for the request limit**, or is there some other way to mitigate the issue for repos with lots of small files? An extreme example would be a repo with 4,000 small LFS files, which could not even be cloned in one go, something that is never a problem with a plain git repo no matter how large it is.
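The only client-side mitigation I can think of is to clone without smudging (`GIT_LFS_SKIP_SMUDGE=1 git clone ...`) and then pull the LFS objects in chunks, waiting out the window between chunks. A minimal sketch, assuming each object transfer counts once against the limit; the chunk size and pause length are guesses:

```python
#!/usr/bin/env python3
"""Pull LFS objects in chunks so a single pull never exhausts the
per-window request limit.  Run inside a repo cloned with
GIT_LFS_SKIP_SMUDGE=1 so the clone itself made no LFS object requests."""
import subprocess
import time

CHUNK = 500      # objects to pull per batch (assumption)
PAUSE = 31 * 60  # seconds between batches, outlasting the assumed 30-min window

# Names of all LFS-tracked files in the current checkout.
paths = subprocess.run(
    ["git", "lfs", "ls-files", "--name-only"],
    capture_output=True, text=True, check=True,
).stdout.splitlines()

for i in range(0, len(paths), CHUNK):
    batch = paths[i:i + CHUNK]
    # --include takes a comma-separated list of paths to fetch and check out.
    subprocess.run(
        ["git", "lfs", "pull", "--include", ",".join(batch)],
        check=True,
    )
    if i + CHUNK < len(paths):
        time.sleep(PAUSE)  # let the rate-limit window reset before the next batch
```

That keeps a working copy usable, but it obviously doesn't help the normal clone/push workflow, which is why I'd prefer a server-side solution.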