
We are dealing with a third-party API that doesn't handle concurrency, and we don't have direct access to the database. Our client application is deployed in a cluster environment and has multiple worker nodes sending update requests to this API. Our goal is to wrap this API in another REST API, so that we can add a service layer on top to control concurrency and synchronize all requests in a pessimistic-lock fashion. This will certainly work if we deploy our wrapper API to a non-cluster environment. Will the approach still work if we deploy the wrapper API to a cluster environment as well? My concern is whether the pessimistic lock will be shared across all worker nodes.
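To make the concern concrete, here is a minimal sketch of the pessimistic-lock wrapper in Python. The names `call_third_party_api` and `synchronized_update` are hypothetical stand-ins, not anything from the actual system. The key point is that a `threading.Lock` lives in one process's memory, so each wrapper node in a cluster would get its own, independent lock:

```python
import threading

# Hypothetical stand-in for the non-concurrency-safe third-party API.
def call_third_party_api(payload):
    return f"updated:{payload}"

# Process-local pessimistic lock: it exists only in this process's memory,
# so a second wrapper node in the cluster has its own, separate lock.
_api_lock = threading.Lock()

def synchronized_update(payload):
    with _api_lock:  # serializes requests within THIS node only
        return call_third_party_api(payload)
```

Within one node, every thread queues on `_api_lock`; across nodes, nothing coordinates the instances, which is exactly the question being asked.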

asked Jul 10, 2018 at 12:27

1 Answer


I would advise against that. The problem is that you need to serialize all requests without grinding your application to a halt.

A better mechanism is to use a message queue. Whether you roll your own or (preferably) use an existing purpose-built system, you can have your API push messages onto the queue, and your wrapper pop messages off to submit them.
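As a rough in-process sketch of that idea (a real deployment would use a broker such as RabbitMQ rather than Python's `queue` module, and `legacy_update` is a hypothetical stand-in for the third-party call): a single consumer thread pops requests off the queue, so the legacy API never sees two requests at once, yet producers never block each other.

```python
import queue
import threading

# Hypothetical stand-in for the third-party API that can't handle concurrency.
def legacy_update(payload):
    return f"updated:{payload}"

requests_q = queue.Queue()
results = []

def consumer():
    # Single consumer: updates are applied one at a time, in arrival order,
    # so the legacy API never sees concurrent requests.
    while True:
        payload = requests_q.get()
        if payload is None:      # sentinel: stop consuming
            break
        results.append(legacy_update(payload))

worker = threading.Thread(target=consumer)
worker.start()
for i in range(3):               # producers just enqueue and move on
    requests_q.put(i)
requests_q.put(None)
worker.join()
# results -> ["updated:0", "updated:1", "updated:2"]
```

Producers stay fast because they only enqueue; serialization happens in exactly one place, the consumer.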

The problem with locking the last step in an asynchronous-by-design construct is that you run a very real risk of deadlock. Even if you manage to avoid deadlock, your wrapper API will spend a very long time resolving lock contention.

If you use a queue to separate the request from when it is processed, you avoid the deadlock and lock-contention issues that would impact your application. If you rely on the response, things are complicated a little in that you have to wait until you are notified that the response was received; you can use the message queue for that purpose as well.
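One hedged way to sketch that notification step: each request carries its own reply queue, and the caller blocks on it until the single consumer posts the result (a real broker would use a reply-to queue with correlation IDs instead; `legacy_update` and `submit` are hypothetical names).

```python
import queue
import threading

# Hypothetical stand-in for the third-party call.
def legacy_update(payload):
    return f"updated:{payload}"

requests_q = queue.Queue()

def consumer():
    while True:
        item = requests_q.get()
        if item is None:         # sentinel: stop consuming
            break
        payload, reply_q = item
        reply_q.put(legacy_update(payload))   # notify the waiting caller

worker = threading.Thread(target=consumer)
worker.start()

def submit(payload):
    # Each request carries its own reply queue; the caller blocks here
    # until the single consumer has processed the request.
    reply_q = queue.Queue(maxsize=1)
    requests_q.put((payload, reply_q))
    return reply_q.get()

result = submit("x")             # result == "updated:x"
requests_q.put(None)
worker.join()
```

The caller still waits for its own result, but callers never contend with each other on a shared lock; they each wait only on their own reply.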

answered Jul 10, 2018 at 15:20
  • Thanks for the advice. I like the idea of using a message queue. As for the original design, are you saying that in a cluster environment it might create deadlock or contention between the multiple deployments of the wrapper API? I am not very knowledgeable about how a pessimistic lock works in a cluster. Could you answer that question? Commented Jul 11, 2018 at 1:28
  • It's hard enough understanding how threads interact within the same process, much less across multiple web services. Locking is a blunt instrument: it does the job, but at the cost of contention between threads. If one thread is waiting to acquire a resource that another thread already holds, while that thread is stuck waiting on a resource held by the first, then you have a classic deadlock. They also come in very convoluted chains. Even if there is no opportunity for deadlock, thread contention alone will bring the service to its knees. Commented Jul 12, 2018 at 12:09
