Lately, I've been working on a project that is basically a huge rewrite, in .NET Core F# + Event Sourcing + PostgreSQL, of an old sub-ledger legacy app written in .NET 4.6 C# + SQL Server.
Since the whole rework cannot happen overnight and the legacy process needs to run until every single piece has been tested and replaced, we opted for distributed transactions via the TransactionScope class. It usually works, but the tradeoff is that you need to clean up orphaned transactions in case there is a crash (which can basically happen whenever you're updating a service). The chances are not high, but it can still happen, and it already has.
Long story short, we need to keep a certain consistency between what is written in the legacy system (i.e. SQL Server) and what is written in the new system (i.e. PostgreSQL) until everything is migrated. It's a critical system, so we can't really mess it up.
So I'm wondering: is there really an alternative when it comes to writing some data into both databases (albeit in a different format)?
We need the guarantee that the transaction has succeeded (or failed) for both DBs. I put the emphasis on both, because it should be either true for both or false for both. What we absolutely want to avoid is a piece of data written into one system and not the other.
I've heard about the saga pattern, but I'm not too sure how it can be applied in this context, knowing that we can't change the legacy system much.
2 Answers
You are correct that you will need a saga if you are not using a distributed transaction. You will need to accept eventually consistent data, though.
When it comes to a saga, you have two options:
- Orchestrated: a controlling process drives the steps.
- Choreographed: each step handles passing control on to the next step, or compensating.
Using the second method, you would store the data plus the intent to store the data in the other database in a single transaction. You would then have a process running to ensure the data ends up in the second database.
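To make "store the data plus the intent" concrete, here is a minimal sketch of that idea (often called a transactional outbox). It is not your F#/SQL Server/PostgreSQL stack: Python's in-memory sqlite3 stands in for both databases, and all table and function names are made up for illustration. The point is that the business row and the intent row commit in one local transaction, and a separate relay drains pending intents into the second store.

```python
import json
import sqlite3

# Hypothetical stand-ins: sqlite3 plays the role of SQL Server and PostgreSQL.
legacy = sqlite3.connect(":memory:")   # "SQL Server"
new_db = sqlite3.connect(":memory:")   # "PostgreSQL"

legacy.execute("CREATE TABLE ledger (id INTEGER PRIMARY KEY, amount REAL)")
legacy.execute("CREATE TABLE outbox (id INTEGER PRIMARY KEY, payload TEXT, done INTEGER DEFAULT 0)")
new_db.execute("CREATE TABLE events (entry_id INTEGER, amount REAL)")

def write_with_intent(entry_id, amount):
    # One LOCAL transaction: the data AND the intent to copy it elsewhere.
    # Either both rows commit or neither does.
    with legacy:
        legacy.execute("INSERT INTO ledger (id, amount) VALUES (?, ?)", (entry_id, amount))
        legacy.execute("INSERT INTO outbox (payload) VALUES (?)",
                       (json.dumps({"entry_id": entry_id, "amount": amount}),))

def relay_outbox():
    # Background process: drain pending intents into the second database.
    pending = legacy.execute("SELECT id, payload FROM outbox WHERE done = 0").fetchall()
    for row_id, payload in pending:
        evt = json.loads(payload)
        with new_db:
            new_db.execute("INSERT INTO events (entry_id, amount) VALUES (?, ?)",
                           (evt["entry_id"], evt["amount"]))
        with legacy:
            legacy.execute("UPDATE outbox SET done = 1 WHERE id = ?", (row_id,))

write_with_intent(1, 42.0)
relay_outbox()
```

Note that if the relay crashes between the two commits, the intent is replayed on restart, so the write into the second database must be idempotent (e.g. keyed on `entry_id`). You get eventual, not immediate, consistency.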
If you're using event sourcing, then create a workflow (saga) from this source:
startWorkflow(event) =
    orchestration(event)

orchestration(event) =
    try
    {
        callActivity WriteToSql event
        callActivity WriteToPostgres event
    }
    catch
    {
        callActivity Compensation event // push to a queue of compensations to be resolved
    }
Compensation is another workflow.
This is a lot more complex than using TransactionScope, but it is more flexible in the sense that you may have activities that need to be rolled back that do not support transaction scope, or that are long-running.
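The orchestration pseudocode above can be sketched in runnable form. This is only an illustration of the control flow, not a real workflow engine: the activities, the `orchestration` function, and the compensation queue are all hypothetical stand-ins, with stub activities where the second write fails so a compensation gets queued and later resolved by a worker.

```python
from collections import deque

compensations = deque()  # queue of compensations to be resolved later

def call_activity(activity, event):
    # In a real workflow engine this would be durable; here it is a plain call.
    activity(event)

def orchestration(event, write_sql, write_postgres, compensate):
    try:
        call_activity(write_sql, event)
        call_activity(write_postgres, event)
    except Exception:
        # Schedule the undo of whatever already happened.
        compensations.append((compensate, event))

# Demo with stub activities: the PostgreSQL write fails.
sql_rows = []

def write_sql(evt):
    sql_rows.append(evt)

def write_postgres(evt):
    raise RuntimeError("postgres is down")

def compensate(evt):
    sql_rows.remove(evt)  # roll back the SQL Server write

orchestration({"id": 1}, write_sql, write_postgres, compensate)

# A separate worker (the "Compensation workflow") drains the queue.
while compensations:
    comp, evt = compensations.popleft()
    comp(evt)
```

After the worker runs, the half-written row is gone from the first store, so neither database ends up with data the other lacks; in between, the system is temporarily inconsistent, which is the eventual-consistency tradeoff mentioned above.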
Alternatively, why don't you write a routine that constantly cleans up the orphaned transactions?
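A cleanup routine like that usually boils down to reconciliation: periodically diff the two stores by key and repair (or at least report) entries that exist on one side only. A minimal, database-agnostic sketch, with made-up names and plain sets of IDs standing in for query results:

```python
def reconcile(legacy_ids, new_ids):
    """Return (ids missing from the new store, ids orphaned in the new store)."""
    missing_in_new = legacy_ids - new_ids   # committed in legacy, never replicated
    orphaned_in_new = new_ids - legacy_ids  # written to the new store, legacy write lost
    return missing_in_new, orphaned_in_new

# Example: entry 2 never reached the new store; entry 4 has no legacy counterpart.
missing, orphaned = reconcile({1, 2, 3}, {1, 3, 4})
```

The repair policy (re-copy, delete, or flag for a human) depends on which system is the source of truth during the migration.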
-
Hi, thanks for your answer. "Alternatively, why don't you write a routine that constantly cleans up the orphaned transactions?" This is basically what I ended up doing. Just curious about alternatives. – Natalie Perret, Commented May 16, 2020 at 15:42
-
I came to realize that a saga can be a good solution, but it turns out there is a lot of work to be done; yes, it is more flexible, but at what cost. – Natalie Perret, Commented May 16, 2020 at 15:43
The `use` or `using` keyword is not enough to get you covered in case of a failed distributed transaction. Something that can happen at the wrong time can still happen, and already has, usually (though only on rare occasions) when a service is not shut down all that gracefully.