
At our company, we're planning to develop a set of libraries within a mono-repository, with the goal of maintaining a unified version across all of them. This ensures that when teams include our BOM (Bill of Materials), they can confidently add the required libraries—knowing they’ll always be using a consistent version set.

We're considering adopting a Trunk-Based Development (TBD) approach. One aspect I'm unsure about is whether it’s reasonable to create a new release (i.e., tag a new version) with every PR that gets merged into main.

These libraries are intended for internal use only, and most teams will rely on dependabot to automatically bump versions as needed.

A key requirement for us is to follow Semantic Versioning (SemVer), and we’re planning to use automated version detection—for example, via the paulhatch/semantic-version GitHub Action.
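To make this concrete, the release workflow I have in mind looks roughly like the sketch below (the workflow name, commit-message patterns, and the publish step are illustrative placeholders, not our final configuration):

    # Sketch: on every merge to main, compute the next SemVer and tag it.
    name: release-on-merge
    on:
      push:
        branches: [main]

    jobs:
      release:
        runs-on: ubuntu-latest
        permissions:
          contents: write          # allow the workflow to push tags
        steps:
          - uses: actions/checkout@v4
            with:
              fetch-depth: 0       # the action needs full history to find the last tag

          - name: Determine next version
            id: semver
            uses: paulhatch/semantic-version@v5
            with:
              tag_prefix: "v"
              major_pattern: "(MAJOR)"   # commit-message marker for breaking changes
              minor_pattern: "(MINOR)"   # commit-message marker for new features
              version_format: "${major}.${minor}.${patch}"

          - name: Tag the release
            run: |
              git tag "v${{ steps.semver.outputs.version }}"
              git push origin "v${{ steps.semver.outputs.version }}"

          # ...then build, test, and publish the BOM and libraries to Artifactory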

Do you have any recommendations or best practices to suggest? What are your thoughts on this setup?

asked Jun 6 at 19:42

2 Answers


TL;DR - In the sense of semantic versioning, in my opinion, releasing every merge to main will lead to too many releases that are not mapped coherently to end user value or to milestones in the development cycle. Instead, merges to main should be marked with some other identifier, and releases should be done less frequently via a process that takes into account development and end user considerations.

As you probably know, semantic versioning assigns version numbers as major.minor.patch. The question of whether to do a version increment, or not, in response to a merge to main, depends on several considerations. It can be viable, but you need to consider factors like:

  • Your unit test, smoke test, and system test suite, and how often they are run. It may be reasonable to run some unit tests every commit, but due to runtime or overhead, some more extensive tests may only be run on major or minor releases.

  • Your build and deployment infrastructure. Are you able to do a full build every PR commit, or again, is build and deployment complex enough that it's only done for minor or major releases?

  • Your cadence of development and code review, and how often development branches are getting merged to main, and after what process of review.

  • How rapidly third party dependencies are changing, and whether a change in configuration requires additional testing or accreditation before deployment.

  • And of course, how what you develop and how you label releases map to end-user value, as expressed in requirements and user stories and as developed via sprints and epics, assuming some sort of agile development process.

These are some of the considerations I would weigh before deciding when to increment versions - there is no "always right" answer IMO.

answered Jun 6 at 20:03
  • It's not infrastructure—it's a set of libraries maintained in a single monorepo and deployed to Artifactory. My idea is to create a new release—whether it's a PATCH, MINOR, or MAJOR version—with each PR merged into main. The entire process will be fully automated using GitHub Actions, including testing and deployment. Commented Jun 6 at 20:31
  • Debating whether this is or is not "infrastructure", which IMO it certainly is, would be missing the point. One can certainly call every merge to main a "patch release", every 10th merge a "minor release", etc. to first order. How that coherently maps to the considerations I discussed is not at all clear to me. Perhaps you can elaborate on your question if I have missed some concern? Commented Jun 6 at 21:03
  • And, what's the bar to merging to main? Is every development branch that passes peer review merged to main? Do you want several patch releases a day? It can be done but is it the most coherent process? I'm not familiar with Artifactory per se but does it allow other versioning mechanisms like tags? Commented Jun 6 at 21:14
  • When you push changes to a feature branch, a SNAPSHOT version of the BOM is automatically released. This version is reviewed and tested through CI, and may also undergo manual testing. Once the branch is merged, the final version is automatically released, tagged in GitHub, and the corresponding artifacts are published. I've automated the semantic version bump, so it will detect whether it should bump the MAJOR, MINOR, or PATCH version. I was worried that there may be a lot of releases a day, which is noise for consumers in some way. Commented Jun 7 at 7:11

I think it's usual to "release" every build of main.

i.e., every PR that's merged into main is built with an auto-incremented build number maintained by the CI system (GitHub etc.), and the build version is used as the release version. Even if you don't actually deploy or publish that release.

So if you are using TBD and you merge a PR, you bump the build version, auto-publish your libs, and the build version is your release version.
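Roughly like the following sketch (the workflow name, the 1.0. prefix, and the commented-out publish command are placeholders):

    # Sketch: use the CI run number as the release version for every build of main.
    name: build-main
    on:
      push:
        branches: [main]

    jobs:
      build:
        runs-on: ubuntu-latest
        permissions:
          contents: write          # needed to push the tag
        steps:
          - uses: actions/checkout@v4

          - name: Derive the version from the CI build number
            run: echo "VERSION=1.0.${{ github.run_number }}" >> "$GITHUB_ENV"

          - name: Tag and publish
            run: |
              git tag "v${VERSION}"
              git push origin "v${VERSION}"
              # ./gradlew publish -Pversion="${VERSION}"   # or mvn deploy, etc.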

This is simple and can't go wrong.^H^H^H hard to muck up.

The problem I see with paulhatch/semantic-version or similar is: what if you retroactively decide you want to release an older build which wasn't released at the time? It just seems to make your life difficult for no reason.

i.e., each merge to main has an "implied/implicit version" which turns into a real version when a tag is applied.
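With the build-number scheme above you can always turn one of those implied versions into a real release later with a manually triggered workflow along these lines, as long as the number you pick doesn't collide with versions already handed out (the workflow and input names here are made up for illustration):

    # Sketch: retroactively tag an older merge commit on main as a release.
    name: tag-old-build
    on:
      workflow_dispatch:
        inputs:
          commit:
            description: "SHA of the merge commit to release"
            required: true
          version:
            description: "Version to assign (with the build-number scheme, the build's original number)"
            required: true

    jobs:
      tag:
        runs-on: ubuntu-latest
        permissions:
          contents: write          # needed to push the tag
        steps:
          - uses: actions/checkout@v4
            with:
              fetch-depth: 0       # need full history to reach the older commit
          - name: Create and push the release tag
            run: |
              git tag "v${{ github.event.inputs.version }}" "${{ github.event.inputs.commit }}"
              git push origin "v${{ github.event.inputs.version }}"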

Given that these are internal libraries, there's no need to worry about ugly release numbers. You don't have to release Windows 9 because the market is expecting it after Windows 8, you can release InternalLib 1.0.93124798237429 and no one is going to care that the last release was 1.0.1

Another alternative would be to use GitFlow or similar instead of TBD, then you have a dev branch which can take multiple features and PRs without updating the release number until you merge that back into main.

answered Jun 7 at 11:33
  • Could you elaborate on this a bit more? My proposal is to always release when a pull request is merged into main. If we need to apply a security fix to an older version, we can create a branch like release/1.1.x from the appropriate tag—even if main has already advanced to version 2.0.0, for example. Older versions should not receive new features—only critical security fixes. Commented Jun 7 at 11:58
  • You can add new versions, but you can't use the implied version of a previous merge that was passed over once some later tagged commit has taken its implied version. Commented Jun 7 at 13:28
  • I've added some clarifications. It's not clear to me from your post what exactly you are proposing to do. Are you sure you understand what the paulhatch plugin does? Commented Jun 7 at 13:35
  • It's clear to me what it does. It determines the next version automatically based on the previous tag and the commits merged into main, whether it's MAJOR, MINOR, etc. I just wasn't sure whether it's okay to release so often, because it may produce noise; in a week or two the library may go from 1.0.10 to 1.0.30. Commented Jun 7 at 14:14
  • The main thing that library does is allow you to skip versions. If you just want to increment the patch version automatically, you don't need it. I think it's fine to have as many versions as builds. Commented Jun 7 at 14:33
