Our development team works with a Node.js microservice architecture. This means we maintain a very large number of repositories, each of which currently takes care of its own dependencies.
To avoid repeating the same setup in each of our services, we have created a private Node.js package that handles some of it for us, which allows us, for example, to set up a RESTful service in a couple of lines, uniformly configured the way we like.
The main idea is to avoid having variations of the same bits of code in multiple services, mainly for stuff we generally use in a large portion of our repositories:
- Setting up a RESTful server
- Handling messages from a queue
- Creating report files
- Fetching and handling large amounts of data
- ...
But recently, we've come across the idea that we should also be adding support for Node.js packages, for instance the MongoDB driver for Node.js.
There are a couple of small things we do each time, in every service that connects to MongoDB. For example, when we create a connection, we pass in certain options: the name of the app that's creating the connection and some timeout settings. Currently this code is the same in each of our repos, and whenever we find a connection that does not have an app name, that interferes with trying to figure out support issues. The idea would be to wrap establishing that connection in our private package, and maybe even integrate it into the setup of the RESTful server.
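For illustration, a minimal sketch of what such a shared connection helper might look like. The helper name, the package, and the concrete timeout values here are all hypothetical; appName, serverSelectionTimeoutMS and socketTimeoutMS are real connection options of the official Node.js MongoDB driver:

```javascript
// Hypothetical shared helper. In the real package the returned options
// would be passed straight to the official driver, e.g.:
//   const { MongoClient } = require('mongodb');
//   const client = new MongoClient(uri, buildMongoOptions('billing-service'));

const STANDARD_OPTIONS = {
  serverSelectionTimeoutMS: 5000, // fail fast when the cluster is unreachable
  socketTimeoutMS: 30000,         // example values, not a recommendation
};

function buildMongoOptions(appName, overrides = {}) {
  // Enforce the app name so support can always tell which service
  // opened a given connection.
  if (!appName) {
    throw new Error('appName is required for every MongoDB connection');
  }
  // appName is spread last so a service cannot accidentally omit it.
  return { ...STANDARD_OPTIONS, ...overrides, appName };
}
```

Because every service goes through one function, a missing app name becomes a startup error instead of a mystery during a support incident.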
Another reason for doing that would be to protect the code in our microservices from breaking changes in the Node.js packages we depend on.
With each major version of the MongoDB driver, we run the risk of having to deal with breaking changes. For instance, version 5.x removed support for the insert() function, with the advice to use either insertMany() or insertOne() instead.
This currently poses a big problem for a micro-service approach, since updating to a new major version might require additional changes in each of these repos.
The idea would then be to have an insert() function for MongoDB in the private package, and handle any breaking changes there instead of in each separate repo.
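As a sketch, such a shim could dispatch to the two methods that replaced insert() in driver v5+. insertOne() and insertMany() are the real driver methods; the wrapper's name and shape are hypothetical:

```javascript
// Hypothetical compatibility shim for the removed insert():
// dispatch to the methods that replaced it in driver v5+.
// `collection` is expected to be a driver Collection object.
async function insert(collection, docOrDocs) {
  if (Array.isArray(docOrDocs)) {
    return collection.insertMany(docOrDocs);
  }
  return collection.insertOne(docOrDocs);
}
```

Note this is exactly the kind of shim the rest of the discussion warns about: it keeps an interface alive that the vendor deliberately retired, and the two replacement methods also have different return shapes (insertedId vs insertedIds), which the wrapper would have to paper over.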
Not everyone in the team feels this is a good approach, though. One of the breaking changes in the MongoDB driver was dropping support for callbacks, and adding support back in for that might not be ideal.
Is anyone familiar with this kind of approach, wrapping functions of something like drivers to prevent variations in code and to keep updates backwards compatible? Are there more downsides than upsides to this?
- I'm confused why this is a problem; in a microservice architecture, every service owns its own datastore, so they can update their datastore and the driver in sync. (Philip Kendall, Feb 21 at 11:25)
- While that is true, it is at the same time also one of the downsides of microservice architecture. Maintaining libraries for hundreds of microservices becomes a job of its own after a while, and you'll most likely run into services running on outdated libraries eventually. (dreagan, Feb 21 at 12:38)
- For components which are used by many people, like the MongoDB driver, I would expect the vendor not to break backwards compatibility without a good reason. And when there is such a good reason, your own wrapper code will usually not be able to mitigate it. (Doc Brown, Feb 21 at 12:42)
- Are you sure microservices are the best approach for you, and that you are not building a distributed monolith? One of the main benefits of microservices is allowing teams to be more independent, and this benefit is most pronounced for large projects, i.e. hundreds or thousands of developers, where the overhead of microservices is less than the cost of synchronizing changes between all developers. (JonasH, Feb 21 at 13:35)
- @JonasH It sounds like the library takes care of the conventions related to communication between the services. It seems sensible to have some common code for orchestration on both ends of a connection. Sharing that code in a common dependency is better than duplicating it, I'd say. (Bergi, Feb 21 at 23:43)
3 Answers
This sounds like the old trick of breaking direct dependence on 3rd party code.
Rather than let mentions of 3rd party code spread through your code base, you wrap calls to it. Not everything, just what you need. This way you've separated an expression of your needs from an implementation that satisfies those needs. Now when that implementation changes, the impact of that change is isolated.
The risk is that 3rd party implementation structure choices leak into the wrapper structure and so changes to the 3rd party code force changes to the interface of the wrapper. Which means change is again spreading through the code base. That can make this idea pointless.
Rather than blindly wrap, carefully consider your needs when designing this interface. Make it stable. Make abstractions that don’t leak.
Some go as far as using DIP to invert dependencies and minimize how much the code base knows about the 3rd party code.
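As a sketch of that DIP-style approach (all names here are hypothetical): the services depend only on a narrow interface they own, and a thin adapter maps it onto whatever driver is in use, so the driver's method names appear in exactly one place:

```javascript
// Hypothetical "port": the only surface the services know about.
// It expresses the need ("save a document"), not the implementation.
class DocumentStore {
  async save(doc) {
    throw new Error('not implemented');
  }
}

// Hypothetical adapter. Only this class knows the driver's actual
// method name (insertOne() is the real MongoDB driver method), so a
// driver change stays isolated here instead of spreading through
// every service.
class MongoDocumentStore extends DocumentStore {
  constructor(collection) {
    super();
    this.collection = collection;
  }
  async save(doc) {
    return this.collection.insertOne(doc);
  }
}
```

The design choice here is that DocumentStore names the need in the services' vocabulary; if it merely mirrored the driver's interface, the driver's structure would leak into the wrapper and the indirection would buy nothing.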
It’s another level of indirection. Which can solve any problem, except having too many levels of indirection.
The idea would be to wrap establishing that connection in our private package
Sounds fine to me. Sharing that code is better than duplicating it. You already have experience with this from your other bits of shared code, it sounds like this is working well for you.
... and maybe even have that integrated in the set up of the RESTful server.
Not sure if you mean integrating the MongoDB driver wrapper into your server setup package, or whether it would be a separate (second) internal package. I would recommend the latter: not all of your services need a MongoDB connection, so that dependency should be optional. If it's only setup code that has mongodb as a peer dependency at best, and still needs to be called explicitly, it might work in the first package as well.
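For the separate-package option, the internal package could declare the driver as a peer dependency, so each service decides which driver version it actually installs rather than inheriting one transitively. A sketch of the relevant package.json fragment (the package name and version range are made up for illustration):

```json
{
  "name": "@ourcompany/mongo-helpers",
  "version": "1.0.0",
  "peerDependencies": {
    "mongodb": ">=5.0.0 <7.0.0"
  }
}
```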
If you expect some other types of database to be integrated in your services, extract that into a proper abstraction, possibly with some dependency inversion.
But yeah, go ahead!
Another reason for doing that would be to protect the code in our microservices from breaking changes in the node.js packages
That doesn't sound like a good reason for introducing the new package to me. Wrappers generally increase complexity, and it's really, really hard to design them so that no breaking changes in the underlying library leak through them. Yes, some changes can be eliminated with an adapter pattern, but I recommend a YAGNI approach: don't create the adapter until you need it. JavaScript/TypeScript is really flexible in this regard. It can, however, help to restrict yourself to some subset of the very rich interface that the mongodb library provides.
But whether you wrap the db driver or not, do not expect the interface of your internal library to stay frozen forever. It will change eventually, adding new methods for new functionality and deprecating (removing) others. How stable it will be depends on your needs, but at some point even the whole design may change. Nobody wants to use callbacks forever, when promises are so much nicer to work with.
But you deal with those changes like you do for any library, whether it's your own or an external one: you version it. Not all services need to use the same version at the same time, but you need a process for migrating them. Typically each service team does that on its own. Do you not have experience with this from changes to your existing internal library?
Your title is a bit confusing, but I'm going to say "No": what you describe, which I read as "create our own company wrappers which implement standard setup for 3rd party libraries", is a Bad Idea.
The first reason is simple. It introduces a huge organisational impediment to change.
Say you have 50 microservices all using your standard "SetupStandardRestServer" library. But in microservice 51 you want to use a feature of the underlying lib which isn't exposed in your wrapper.
You go to the wrapper code and make a change. But now you have potentially broken the 50 other microservices that use the library the next time they upgrade.
Maybe the change required an upgrade to the underlying package, and now standard lib A isn't compatible with standard lib B any more.
Maybe you want to use a new framework version, but before you can do that you have to upgrade all 50 services to use that same version.
Rather than use a new and innovative approach, it's easiest to just stick with the standard. Your organisation's code becomes stuck in a single, outdated approach to solving a problem.
This applies doubly if you have bought into the idea of independent microservices! Don't add a shared dependency to them all!
The second reason is the effort it costs to maintain and support these libraries.
Let's say you resist the impediment to change, and aggressively add features and changes to your standard libs. This makes them complicated: they have to support a variety of options and configuration, all of which needs testing and documentation.
Before you know it your standard wrapper is just as complex as the thing it wraps. Except you can't google for answers to how to use it, because it's bespoke to your organisation.
Now some growing proportion of your organisation's dev time is spent maintaining the standard libraries rather than writing features. New hires can't write a microservice without reading the docs and possibly digging through the code of your libraries.
The third reason is someone has already done it.
The library you are wrapping has been developed by a dedicated team to provide a generic, well-thought-through interface which supports all the options. You don't need to duplicate this work. Just have an example setup, or send an email: "when you connect to the db, remember to include the app name for reporting!", "check out this new framework for specifying node REST endpoints with minimal code I found on the web", etc.