F-Droid: Git fetch failed #1678

Open
opened October 19, 2024 10:04:53 +02:00 by toz · 18 comments


My F-Droid app fails to build because of a "`Git fetch failed`" when trying to pull the sources from Codeberg. Is this issue already known? Anything I can do?
https://monitor.f-droid.org/builds/log/com.zell_mbc.medilog/5462

You may have fixed this already, in which case I guess I just need to wait for the next F-Droid cycle?

Owner

There have been no changes to our rate limiting recently that would explain the situation, but I could imagine that more F-Droid projects moving to Codeberg could cause it. Raising the rate limit globally is not a sensible option right now, because Codeberg has recently been hit by new crawlers, effectively bringing down the service for everyone else.

We have tried to look for the F-Droid build server IP ranges, but couldn't find any information about them yet.

Author

Thanks for getting back, Otto, that would explain things indeed. The build server just started another build cycle: https://monitor.f-droid.org/builds/running
I hope this time they are able to connect. We'll see...

By the way, I found them to be pretty responsive on their Matrix channel https://matrix.to/#/#fdroid:f-droid.org, maybe that's a place to get the IPs?

Owner

Hi there! A quick update:

  • there was some rather unhelpful conversation on Mastodon about F-Droid not wanting to share the IP addresses and "go figure it out yourself": https://social.librem.one/@eighthave/113355457493452098
  • we spent some days investigating the access logs and enabling more verbose logging
  • we have identified a potential IP range that could be F-Droid (I would be shocked if they were using this provider, but well), and unblocked it
  • F-Droid has, in parallel, mirrored some repos to GitLab to make the builds succeed
  • as per [this comment](https://chaos.social/@hiddengravitas/113368353489265207), the builds succeeded before the mirror was used, so it looks like we have indeed unblocked the correct IP range for F-Droid
  • the buildlog here indeed seems to indicate a successful build from Codeberg: https://monitor.f-droid.org/builds/log/com.zell_mbc.medilog/5462#site-footer

Now the issue is that we probably should not check this change into our public version control, because F-Droid wants to keep their IP range and provider secret. I honestly don't know how we can make the change permanent without disclosing the relation to F-Droid.
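One pattern that could work here (a minimal sketch of the general idea, not Codeberg's actual deployment; the file path is a made-up example) is to keep the sensitive ranges in an untracked file on the server, so the public repo only ever references the path:

```python
# Sketch: keep an IP allowlist out of public version control. The checked-in
# code only knows the path; the ranges live in a file excluded via .gitignore
# and deployed separately. Path and addresses below are hypothetical.
import ipaddress
from pathlib import Path

ALLOWLIST_FILE = Path("/etc/codeberg/ratelimit-allowlist.txt")  # hypothetical, untracked

def load_allowlist(path):
    """Parse one CIDR range per line; blank lines and # comments are ignored."""
    networks = []
    if path.exists():
        for raw in path.read_text().splitlines():
            entry = raw.split("#", 1)[0].strip()
            if entry:
                networks.append(ipaddress.ip_network(entry))
    return networks

def is_allowlisted(client_ip, networks):
    """True if the client IP falls inside any allowlisted range."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in networks)

if __name__ == "__main__":
    nets = load_allowlist(ALLOWLIST_FILE)
    print(is_allowlisted("203.0.113.7", nets))  # TEST-NET-3 documentation address
```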

Author

From what I read here and elsewhere, this seems to be resolved now?
My last build went through OK, so I will mark this as resolved.

Thanks for looking into it...


Since Codeberg's blocking seems to have caused new issues, I wanted to comment. We do not have F-Droid IP addresses; we use various hosting providers, and services move between providers and locations. We're not using any weird IP ranges. I don't know what issues Codeberg is facing, but blocking large IP blocks from well-known cloud/hosting providers does not seem like a workable long-term solution.


> Since Codeberg's blocking seems to have caused new issues,

@fnetX For context, the GitLab CI running F-Droid's checkupdates is being rejected; here's an example from today: https://gitlab.com/fdroid/checkupdates-runner/-/jobs/11364961243#L2843 ping @arne

F-Droid neither knows nor controls GitLab.com's server hosting; it might be Google Cloud or something similar.


> We're not using any weird IP ranges. I don't know the issues Codeberg is facing,

We hugely respect F-Droid's work, but we must be able to cooperate with those that heavily use our infrastructure – in this context, we are trying to give preferential treatment on request, like we do for other projects (e.g. most recently Luanti).

We'll try a whitelist as a temporary solution, but in order to enable this preferential treatment we need some way of identifying the other party. We usually ask for the IP address of the system accessing our infrastructure (which we sought out ourselves back then), but this seems to be problematic for F-Droid in the long run, and we are looking forward to your proposal of a different way to guarantee you higher limits (which might well be unlimited access). We're not a Big Tech company and I know you aren't either, so I hope that we can use the fact that we are in the same boat to work together for the benefit of our users and the greater community.


@n0toose
Since @eighthave said they "do not have F-Droid IP addresses", we need to figure out another solution.


Yeah, I know. Either way, I think I speak for everyone when I say that we (Codeberg) really want to solve this problem between us and F-Droid for good, *despite* the new logistics involved in operating websites for the public. It's really a cat-and-mouse game right now.


I hope we (Codeberg) and F-Droid can work together to make it work


Do keep in mind that the people maintaining the core infra for F-Droid are already overbooked.


For any dev reading: we can't grab new updates automatically (that's the current issue); AFAIK building works fine.

So, if the morning (UTC, lol) checkupdates cycle did not update your app metadata with your latest release, please open an MR yourself.

Author

As far as my two apps are concerned: PublicArtExplorer got through the other day; MediLog is still stuck because I moved the build number to gradle.properties, which seems to require a change at the F-Droid end. I forgot about this dependency after things worked fine for more than 6 years. Is this change of recipe something I can fix at my end? With an MR?


@toz let's not continue this here, just open an issue in our fdroiddata repo on GitLab :)


> Do keep in mind that the people maintaining the core infra for F-Droid are already overbooked.

@eighthave, it's your choice and we respect it, because we get it. We also spend a lot of our free time working together with other projects, and we too have other things we could do instead and lives of our own to lead. Although we are not sure that's a workable solution in the long term either, you're welcome to give us a shout should your CI runs stop succeeding again -- we think we know how to whitelist them now, so it'd take less time compared to what @fnetX mentioned earlier. (If you want a more direct line of contact, we're happy to offer that as well.)

Please understand that we cannot completely lift our restrictions on AI scrapers to let F-Droid in (which is what I believe you were effectively asking); some of those scrapers may reside among your incidental "IP address neighbors", to the detriment of the tens of thousands using our platform to collaborate with others in real time:

  • Many people choose our platform and find protections against their own code being scraped as a positive.
  • It also wouldn't be good if neither F-Droid nor anyone else could `git clone` anything.
  • That would not be an optimal investment of donation-funded hardware and operation costs.
  • It would also not be a good allocation of human efforts.

Should your full schedules clear up at some point, you know where to find us: again, we'd be interested in making room in our respective schedules to explore options (e.g. setting a user agent in your Git config, SSH, whatever - we don't know how your build server works) for the benefit of the community. I understand this is not what you'd expect from e.g. GitHub, but we are not GitHub - the same way you aren't the Google Play Store and choose not to, e.g., spend resources on hosting proprietary apps.
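As a rough illustration of the user-agent idea (a sketch only; the agent string and repository URL are invented, and this is not something either project has agreed on), Git's `http.userAgent` config key lets a client announce a distinctive identity that a server could whitelist:

```python
# Sketch: perform a Git operation with a custom HTTP user agent set via the
# http.userAgent config key, so the server can recognize the client without
# relying on IP addresses. Agent string and repo URL are made-up examples.
import subprocess

REPO_URL = "https://codeberg.org/example/example-app.git"     # hypothetical repo
USER_AGENT = "fdroid-buildserver/1.0 (+https://f-droid.org)"  # hypothetical value

result = subprocess.run(
    ["git", "-c", f"http.userAgent={USER_AGENT}", "ls-remote", "--tags", REPO_URL],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)
```

A user agent is of course trivially spoofable, so this would only ever be the kind of "security through obscurity" bypass discussed later in the thread.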

P.S. We have been discussing ways to improve our anti-AI-scraper measures internally, but I don't think they'd necessarily fix this particular situation, given its circumstances.


@n0toose did you see my [comment above](https://codeberg.org/Codeberg/Community/issues/1678#issuecomment-7207690)? It's not about the F-Droid build server now


> It's not about the F-Droid build server now

Apologies for the small inaccuracy, but the principle remains the same: we'll think about this problem in general, but any way to discern the actor (even if it's "security through obscurity") so as to give you a "bypass" is something we'd be interested in exploring. I tried to establish a point of communication here; I'll leave the stage and make room for the people doing the system administration work.

Owner

@licaon-kter Thank you for providing more helpful resources and a more respectful tone in the conversation.

Let me explain the situation from our side. Recently, Codeberg has struggled multiple times under excessive cloning from Google Cloud IP ranges. We believe that mass cloning from Codeberg is usually not necessary and have applied restrictions to HTTP Git operations, slowly adjusting to the situation.

From our side, it is impossible to differentiate F-Droid update checks running in GitLab runners on Google Cloud from malicious actors running on Google Cloud. Whitelisting Google Cloud IP ranges is definitely not going to work for us, as the traffic from those ranges is far more than we can possibly handle.

We believe that doing full Git clones for things like update checking is a wasteful approach. Approaches like [`ls-remote`](https://stackoverflow.com/questions/10649814/get-last-git-tag-from-a-remote-repo-without-cloning) or [using the API](https://codeberg.org/api/swagger#/repository/repoListTags) are likely more effective for this purpose.
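For illustration, here are both lightweight options side by side (a sketch; the owner/repo names are placeholders, and the API call uses the Forgejo `/repos/{owner}/{repo}/tags` endpoint linked above):

```python
# Sketch of the two lightweight alternatives to a full clone for update
# checking: `git ls-remote` (ref listing only, no pack download) and the
# Forgejo REST API. Owner and repo names are placeholders.
import json
import subprocess
import urllib.request

OWNER, REPO = "example-owner", "example-app"  # hypothetical repository

# Option 1: list remote tags without cloning.
out = subprocess.run(
    ["git", "ls-remote", "--tags", f"https://codeberg.org/{OWNER}/{REPO}.git"],
    capture_output=True, text=True, check=True,
).stdout
tags = [
    line.split("refs/tags/", 1)[1]
    for line in out.splitlines()
    if "refs/tags/" in line and not line.endswith("^{}")  # skip peeled refs
]

# Option 2: one plain HTTPS request to the tags endpoint of the API.
url = f"https://codeberg.org/api/v1/repos/{OWNER}/{REPO}/tags"
with urllib.request.urlopen(url) as resp:
    api_tags = [tag["name"] for tag in json.load(resp)]

print("via ls-remote:", tags)
print("via API:", api_tags)
```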

So the offers from our side are the two approaches mentioned above, plus respecting the limits: currently 150 events per 30 minutes, i.e. either 150 rather lightweight operations like ls-remote, or ~75 full clones (presumably because a full clone issues two POSTs to `git-upload-pack` under Git protocol v2, while ls-remote issues one). The relevant part of our Caddy configuration (from https://codeberg.org/Codeberg-Infrastructure/scripted-configuration/src/commit/251b9179254231c7958a5726f0a33e6dc8225b04/hosts/_reverseproxy/etc/caddy/forgejo-prod.site#L67-L74):

```
zone gitop {
    match {
        path */git-upload-pack
        method POST
    }
    key gitop-{client_ip}
    window 30m
    events 150
}
```
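For intuition, the zone above counts POSTs to `git-upload-pack` per client IP within a sliding 30-minute window. A toy re-implementation of that accounting (purely illustrative, not Caddy's actual code):

```python
# Toy sliding-window rate limiter mirroring the zone above: at most 150
# git-upload-pack POSTs per client IP in any 30-minute window.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 30 * 60   # window 30m
MAX_EVENTS = 150           # events 150

events = defaultdict(deque)  # client_ip -> timestamps of recent events

def allow(client_ip, now=None):
    """Record one event for client_ip; return False once over the limit."""
    now = time.monotonic() if now is None else now
    q = events[client_ip]
    # Drop events that have left the 30-minute window.
    while q and now - q[0] > WINDOW_SECONDS:
        q.popleft()
    if len(q) >= MAX_EVENTS:
        return False  # HTTP 429 territory
    q.append(now)
    return True

# A full clone makes two upload-pack POSTs under protocol v2, so 150 events
# is roughly 75 clones; an ls-remote costs one event each.
assert all(allow("203.0.113.7", now=t) for t in range(150))
assert not allow("203.0.113.7", now=150)
```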

I fear that we currently can't help any further. The offer to add IP addresses that are under the control of a specific project still stands, but adding large IP ranges from cloud providers is unfortunately not possible.
