We're looking to split up a monolith. In order to do so, we've identified some business areas that look like good candidates for subdomains, and we're trying to figure out how to split the functionality across those subdomains.
The domain is sports league management software.
We've tentatively identified a few subdomains, three of which are scheduling (when and where is the fixture), accounting (how much should each team be billed for a fixture) and competition management (if Team A wins a fixture, how many points are they allocated etc).
An example of the functionality we're trying to split is the deletion of a fixture, which is a sports match between two teams.
When it comes to deleting a fixture, all three of these subdomains may need to get involved. Specifically, the deletion will free up the space that was previously taken, the teams involved should no longer be charged for the fixture, and the league standings need to be updated because the results of the fixture are now gone.
Similarly, all three subdomains may have a say in whether or not a fixture can actually be deleted. Well, perhaps not scheduling, but accounting might say "Can't delete that fixture, Team A has already paid for it, it should be cancelled instead", and competition management may say "Can't delete that fixture, we've already calculated the participant of a subsequent fixture based on the result of this fixture".
We've sketched out an architecture where each subdomain can provide validators and handlers for particular commands. The command bus would find all the validators from all the subdomains, loop through them and, within the context of a transaction, ask each whether the process is OK to go ahead. If they all say "GO!", the bus then loops through all the command handlers and passes them the command, and finally either rolls back the transaction due to an error in any of the handlers, or commits the transaction and publishes any events that need publishing.
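To make the shape concrete, here is a minimal Python sketch of the bus described above. All the names (`CommandBus`, `register_validator`, `dispatch`, the transaction factory) are hypothetical, and the transaction object is a stand-in for whatever unit-of-work we'd actually use:

```python
class CommandBus:
    def __init__(self, transaction_factory):
        self._transaction_factory = transaction_factory
        self._validators = {}  # command type -> list of validator callables
        self._handlers = {}    # command type -> list of handler callables

    def register_validator(self, command_type, validator):
        self._validators.setdefault(command_type, []).append(validator)

    def register_handler(self, command_type, handler):
        self._handlers.setdefault(command_type, []).append(handler)

    def dispatch(self, command, publish):
        key = type(command)
        tx = self._transaction_factory()
        # Phase 1: every subdomain may veto the command.
        errors = []
        for validator in self._validators.get(key, []):
            errors.extend(validator(command))
        if errors:
            tx.rollback()
            return errors
        # Phase 2: every interested subdomain handles the command,
        # collecting events to publish after a successful commit.
        events = []
        try:
            for handler in self._handlers.get(key, []):
                events.extend(handler(command))
        except Exception:
            tx.rollback()
            raise
        tx.commit()
        # Phase 3: events go out only once the transaction has committed.
        for event in events:
            publish(event)
        return []
```

This works only because everything shares one transaction (see the EDIT below); the validators and handlers are just in-process callables registered by each subdomain.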
So far so good. Only I've not come across anyone recommending such an approach. All the recommendations seem to be that a command should have exactly one command handler, and that events should be used to drive subsequent processing in other subdomains, or potentially to create additional commands to put onto the command bus. Which is fine, but what about the case where the command should not be processed because of a validation error known only to a subdomain three events down the line?
Are we barking up the wrong tree here? Are we talking about Sagas? Or application services? Or does the concept of multiple command validators and handlers actually seem feasible? Can anyone suggest any pitfalls we're not spotting?
Thanks for your opinions!
EDIT
It should be noted that this is not a multi-tenanted app; each customer has their own site, so scaling is achieved by adding machines horizontally (more customers = another server). All the services will run on the same machine and can participate in the same transaction. We're not necessarily looking to move to a layout with different services/microservices running on different machines; we're just trying to better encapsulate the logic for each subdomain into separate sections of the code, so it becomes easier for new devs to pick up, understand and maintain.
1 Answer
Event sourcing isn't particularly good at transactions or at changing direction mid-stream. It's like stepping into an office with a megaphone to announce a fact or change: everyone in the room draws their own conclusions. Say there was a "customer moved to new address" event. Everyone updates their systems. If one fails, the error handling will likely consist of retrying the operation until it succeeds. There isn't really a way to go back to the previous state: because of the loose coupling, there is no good way to tell all the other participants to roll back their transactions, since you don't know who the receivers of the message are.
You can try a rails or saga approach, where a coordinator collects all approvals before initiating the operations, but it's not truly transactional because of the gap between the time of check and the time of execution.
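A minimal sketch of that coordinator shape, with hypothetical names (each participant is assumed to expose `approve`/`execute`/`compensate`):

```python
class DeleteFixtureCoordinator:
    """Saga-style coordinator: collect approvals up front, then execute,
    compensating already-completed steps if a later one fails. The gap
    between approval and execution is exactly the time-of-check vs.
    time-of-execution window described above."""

    def __init__(self, participants):
        self._participants = participants

    def run(self, fixture_id):
        # Phase 1: every subdomain gets a chance to veto.
        for p in self._participants:
            if not p.approve(fixture_id):
                return False
        # Phase 2: execute; on failure, compensate in reverse order.
        completed = []
        for p in self._participants:
            try:
                p.execute(fixture_id)
                completed.append(p)
            except Exception:
                for q in reversed(completed):
                    q.compensate(fixture_id)
                return False
        return True
```

Note the compensations are new forward actions, not rollbacks: each participant must define what "undo" means in its own domain.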
You can try sending out a command with a precondition (delete only if the fixture is more than 14 days in the future), but you have to be careful not to codify one service's internal state across other services.
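For example (hypothetical shape), the precondition travels with the command and the receiver evaluates it against its own state, so the sender never has to read another service's internals:

```python
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class DeleteFixture:
    fixture_id: int
    only_if_after: date  # the precondition travels with the command

def handle_delete(command, scheduled_on):
    """Receiver checks the precondition against its own record of when the
    fixture is scheduled; the sender never inspects that internal state."""
    if scheduled_on <= command.only_if_after:
        return False  # precondition failed: reject, don't delete
    # ... perform the deletion here ...
    return True
```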
If the new issues introduced by the split are too hard to solve, there is a chance that the bounded contexts are not drawn correctly, or even that the service in its current design can't be split up to the degree planned.
Is deletion actually a requirement? Many systems I have seen didn't allow hard deletes: you could move objects into a "removed" state, but you could never truly remove them from the system.
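A minimal sketch of that "removed" state (illustrative names), where deletion becomes just another state transition that other subdomains can observe and react to:

```python
from enum import Enum

class FixtureState(Enum):
    SCHEDULED = "scheduled"
    CANCELLED = "cancelled"
    REMOVED = "removed"

class Fixture:
    """Soft delete: the record never disappears, so accounting history and
    past standings stay intact; subdomains react to the state change."""

    def __init__(self, fixture_id):
        self.fixture_id = fixture_id
        self.state = FixtureState.SCHEDULED

    def remove(self):
        if self.state is FixtureState.REMOVED:
            raise ValueError("fixture already removed")
        self.state = FixtureState.REMOVED
```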
- I considered a model where, say, the scheduling subdomain is responsible for deletion, and thus has all the knowledge of whether a fixture can be deleted: say the fixture aggregate in that subdomain has a `HasBeenPaidFor` property and a `HasDependentFixtures` property. But that feels like leaking business logic from one subdomain to another (calculation of `HasDependentFixtures` can be quite complex), and feels like the architecture driving the model instead of the other way round. Even if we had a "removed" state, I think all subdomains would need to be allowed to prevent changes to that state. – MajorRefactoring, Feb 19, 2020 at 8:29
- As an architect I am not looking for 100% adherence to ideals. If you have a reason for an unavoidable exception, document it and move on. A smaller component which breaks the rules by having cross-domain knowledge or cross-domain database access is still better than a tightly coupled code base. At the end of the day we need working software, making the best decisions at the time with the information available. – Martin K, Feb 19, 2020 at 13:28