
Imagine I have many (micro)services each in a separate git repository. Some business logic code is redundant in all of them. If I need to change the logic I would have to change every project, which requires commit, push, merge, release, deploy.

Now it would be obvious to me to extract that logic into a library in a separate git repository and include it as dependency in the (micro)services.

But then, after releasing a new version of the library, I would still have to update the dependency in all of the (micro)services, and thus still have to commit, push, merge, release, deploy in each of them.

So the question: Is there a better way to improve that workflow?

asked Aug 23, 2024 at 8:55
  • Out of curiosity, is there a reason you're not using a monorepo? Some (including myself) consider that the default design, since it avoids issues like this, with separate repositories being an alternative design to be used only in extenuating circumstances. Commented Aug 23, 2024 at 20:59
  • "each in a separate git repository" - try changing that part Commented Aug 24, 2024 at 17:41
  • Why does an update to a dependency translate to "change the dependency in all of the (micro)services and thus still have to commit, push, merge, release, deploy in each of them"? From the setups I know, if you need to update a dependency in the system, you just do so in the system; it's a matter of deployment and perhaps building, not changes in the source code repository. Are you statically linking all dependencies? Commented Aug 26, 2024 at 6:51
  • @MisterMiyagi Are you not versioning your dependencies? Commented Aug 26, 2024 at 8:21
  • @Bergi I’m not pinning them to specific versions to avoid problems just like that. Specific dependency versions are part of our deployment/configuration management, not hardcoded in our software/packages. Commented Aug 26, 2024 at 9:02

5 Answers


I will be quite provocative on this one. If you have a lot of shared code between your many microservices, it may be possible that you have artificially broken down your system into too many microservices that are not really independent anymore.

The primary goal of microservices is not to have a separate service for each aggregate, but to have decoupled and independently deployable services. The idea of the shared library and the dependency management that you describe seems not at all aligned with this objective. Are these repositories managed by autonomous teams by the way, or were they just created to enforce the boundaries between the services?

See also this other question, and this article by Chris Richardson, who popularized the microservice concept.

candied_orange
answered Aug 23, 2024 at 11:43
  • Or going at it from a different perspective: if you have a microservice architecture, why is such a central part of your system not a ... microservice? Commented Aug 25, 2024 at 17:05
  • @JörgWMittag that's actually a very good question. If you want to post it as an answer, I would even mark it as the solution. Commented Sep 5, 2024 at 8:07

Two other answers already say "avoid that situation". But that's not always a solution. Sometimes, for some reason, you cannot really avoid this if you want or need to stay DRY. So let me focus on the case where a common library for your services is required and justified. So can you at least mitigate the problem?

First thing is, you need to realize that "commit, push, merge, release, deploy" is not your major problem. When certain requirements change, the biggest effort is usually in the "develop, test, debug" cycle. And when a shared library changes, this can mean:

  • change code in the lib, test it locally

  • "deploy" a new version of the lib internally to the repos of all services using it

  • test & debug all those services together with the new library version

  • fix bugs in the services and in the library, which means you may have to provide a new version of the library and repeat the previous steps

To stay effective with these steps, I would recommend the following:

  • Good test automation for the library itself. Changes to the library should be tested locally up to the point where you are relatively sure all serious bugs are corrected. That avoids running through the cycle sketched above more often than necessary.

  • With each new version, keep the library as backwards compatible as you can (and when you cannot, have a systematic deprecation process in place). Just "updating" to the newest version of a lib should not force developers of the services to invest effort only for formal reasons. If you have 10 services using the lib, but only 2 are affected by a certain change, there should be no obligation to change the other 8.

  • Provide release notes for each library version, so the teams for the individual services have a useful foundation for making the decision if they need to update to the newest version immediately, or if they can stay with the version they are currently using.

  • Good test automation for the dependent services. A new version of the library should be automatically tested in the context of each service, which mitigates the risk of breaking something by updating the service to this new version. And when such a test reveals a library bug that was overlooked in the local tests, extend the local tests so it won't happen again.

  • Automate the tedious steps, like the steps necessary to upgrade to the newest version of a lib. A few scripts will certainly be helpful.
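As an illustration of that last point, the version bump itself is easy to script. A minimal sketch in Python, assuming each service pins the library in a pip-style `requirements.txt`; the file layout and the dependency name (`our-shared-lib`) are hypothetical:

```python
import re

def bump_dependency(requirements_text: str, package: str, new_version: str) -> str:
    """Replace the pinned version of `package` with `new_version`.

    Operates on `name==version` pins as used in pip requirements files.
    Lines for other packages are left untouched.
    """
    pattern = re.compile(rf"^{re.escape(package)}==\S+$", re.MULTILINE)
    return pattern.sub(f"{package}=={new_version}", requirements_text)

# Example: bump the (hypothetical) shared library in one service's pin file.
before = "flask==3.0.0\nour-shared-lib==1.4.2\n"
after = bump_dependency(before, "our-shared-lib", "1.5.0")
print(after)
```

Looping this over all service repositories (clone, bump, commit, open a merge request) turns the tedious part of the upgrade into a single command.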

Library development always creates some overhead; that is unavoidable. This overhead can be kept manageable if one automates certain tasks rigorously and applies well-known configuration management / change management techniques in a disciplined manner.

answered Aug 23, 2024 at 13:03

There is no better way to do it, unfortunately. The minute you have a shared dependency that contains business logic (or anything derived from the requirements), you have created a synchronized rollout dependency for all services. As long as you are doing everything yourself anyway, rather than with separate teams, this may be acceptable.

The only real solution, though, is not to have this situation. You'll have to re-think your services to split them without sharing business logic. In DDD, Eric Evans called this "bounded contexts": areas of code that more or less work independently from each other, without sharing terminology or logic.

answered Aug 23, 2024 at 9:46

As a direct answer to your question "Is there a better way to improve that workflow?"

You need an automated deployment process. The push, merge, release, deploy steps you asked about are important, but they don't have to be manual and time-consuming.

Modern CI/CD pipeline solutions will greatly help your workflow. Most tools allow pipelines to kick off other pipelines, so you can daisy-chain CI processes when your common library receives a change, while deploying only the services that directly changed when your update is not in the common library. These CI tools also integrate with package managers, which removes your concern of needing to update the library in each package; it can be done in your build and deployment pipeline.
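One hedged sketch of such daisy-chaining, using GitHub Actions as an example: the shared library's pipeline fires a `repository_dispatch` event at each consuming service after a release, and each service's own workflow listens for that event and rebuilds against the new version. The org/repo names and the `DISPATCH_TOKEN` secret are placeholders:

```yaml
# In the shared library's repo: after a release, notify consuming services.
on:
  release:
    types: [published]
jobs:
  notify-consumers:
    runs-on: ubuntu-latest
    steps:
      - name: Trigger service pipelines
        run: |
          for repo in my-org/service-a my-org/service-b; do
            curl -X POST \
              -H "Authorization: Bearer ${{ secrets.DISPATCH_TOKEN }}" \
              -H "Accept: application/vnd.github+json" \
              "https://api.github.com/repos/$repo/dispatches" \
              -d '{"event_type": "shared-lib-released"}'
          done
```

Each service's workflow then declares `on: repository_dispatch: types: [shared-lib-released]` to pick up the rebuild automatically.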

That said, I don't believe the release/deployment is your real issue. I think you should focus more on why the code is duplicated, so that you choose the correct solution for the problem...

The shared code should mostly be common behaviors such as logging, establishing database connections, and other tasks that your services do that are not directly related to the business logic in their specific tasks.

In this case, pulling that shared code out into a single library is the solution you need. Yes, changes to this library will still require a rebuild of the services and redeployment, but it should be very infrequent. At the same time, not centralizing it into a single library requires making the same change multiple times in multiple services, and you still need to redeploy them all. There is no way around redeploying all the services if you change any of this shared code. Also, if you centralize the shared code, you will have a shorter test cycle, and much higher confidence in deploying all the services knowing that there is only one instance of a change instead of many that could potentially have slight variations.
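To make that concrete: such a cross-cutting library typically exposes small helpers that every service calls the same way, so a format or policy change is made once instead of once per service. A minimal sketch in Python (the function name and log format are made up for illustration):

```python
import logging

def get_service_logger(service_name: str) -> logging.Logger:
    """Return a logger configured with the org-wide standard format.

    Centralizing this in the shared library means a change to the
    log format happens in one place, not in every service.
    """
    logger = logging.getLogger(service_name)
    if not logger.handlers:  # avoid adding duplicate handlers on repeated calls
        handler = logging.StreamHandler()
        handler.setFormatter(logging.Formatter(
            "%(asctime)s %(name)s %(levelname)s %(message)s"))
        logger.addHandler(handler)
        logger.setLevel(logging.INFO)
    return logger

log = get_service_logger("billing-service")
log.info("service started")
```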

If the shared code revolves around business logic and execution, you may need to re-evaluate the design and separation of your services. Some may need to be combined if their tasks are related closely enough to create this scenario. You don't have to arbitrarily create more independent services if the tasks are not really unique.

answered Aug 23, 2024 at 23:48

Imagine I have many (micro)services each in a separate git repository. Some business logic code is redundant in all of them. If I need to change the logic I would have to change every project, which requires commit, push, merge, release, deploy.

I assume you know this, but there are essentially three possibilities here. Most of the time, when you have pieces of code that are tightly coupled together like this, they fall into one of three broad categories:

  • The separate pieces of code shouldn't be separate. They should be one piece of code. In other words, you decomposed too much.
  • The piece of code that couples them all together should be another separate piece of code. In other words, you didn't decompose enough.
  • There really is a tight coupling between those pieces of code. In other words, the real-world concepts represented by the code concepts really are tightly coupled.

Now it would be obvious to me to extract that logic into a library in a separate git repository and include it as dependency in the (micro)services.

It sounds like you already have determined that your code falls into category #2 above, and that you need to separate out the commonalities.

However, you made an interesting design choice here: you have a microservice architecture, i.e., every real-world concept is represented by a (micro)service, and (micro)services are your primary unit of decomposition.

Then, why make this a library instead of a service?

You said in your question that this is business logic. In other words, this is important logic, since really, the only thing that matters is your business logic. Everything that isn't business logic is ultimately irrelevant, it is only there to shuffle data around to meet your business goals. The business logic is what your code is about.

This is also a central piece of your business logic, otherwise why would so much of your code depend on it?

Therefore, if you have a microservice architecture, surely such an important piece of business logic should be modeled as a microservice.

Then you can use all the same techniques you are already using to manage dependencies between your microservices (whether they are version dependencies, deployment dependencies, etc.) to also manage the dependencies on this piece of business logic.

If your changes are backwards-compatible, you don't need to do anything: just deploy the new business logic, and with the very next API call any of the dependents makes, they get the new logic. If your changes are not backwards-compatible, make a new API version, but make sure you still support the old one. Then do a rolling upgrade of your dependent services – no need to upgrade them all at once.
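The versioning idea can be sketched as follows, assuming an HTTP-style route table (the paths, handlers, and price data are hypothetical). Both API versions are served at once, so each dependent service can migrate in its own rolling deploy:

```python
PRICES = {"widget": 9.99}

def price_v1(item: str):
    # Old contract: returns a bare number.
    return PRICES[item]

def price_v2(item: str):
    # New, incompatible contract: returns a structured response.
    return {"item": item, "price": PRICES[item], "currency": "EUR"}

# Route table: the old version stays available while consumers migrate,
# so nothing forces all dependent services to upgrade at once.
ROUTES = {
    "/v1/price": price_v1,
    "/v2/price": price_v2,
}

def handle(path: str, item: str):
    return ROUTES[path](item)
```

Once the last consumer has moved to `/v2/price`, the `/v1/price` entry can be retired in a later release.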

answered Sep 8, 2024 at 11:40
