I have a complex process implemented in a Java Spring microservice. Currently this process is triggered on a user request and executed synchronously, which often results in a gateway timeout. Execution can take anywhere from 30 s to 10 min, and I do not want to increase the gateway timeout. So I want to execute the process asynchronously on a user request and immediately return status 202. There are a few options:
- On the controller side spawn a new thread and execute this process in a spawned thread.
- Change the process implementation so it is always executed in a new thread.
- Add a new component like AsyncProcessExecutor which executes process in a new thread.
AsyncProcessExecutor seems like the best option: it clutters neither the controller nor the already complex process, and it could easily be extended to send a Slack notification on process completion.
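As a rough illustration of the third option, here is a minimal sketch of what such a component might look like, using a plain `ExecutorService` rather than Spring's `@Async` to keep it self-contained. All names (`submit`, `isDone`, the pool size) are illustrative assumptions, not from the original post.

```java
import java.util.Map;
import java.util.UUID;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Hypothetical sketch of the proposed AsyncProcessExecutor component.
class AsyncProcessExecutor {
    private final ExecutorService pool = Executors.newFixedThreadPool(4);
    private final Map<String, Future<?>> jobs = new ConcurrentHashMap<>();

    /** Submits the long-running process and returns a job id immediately. */
    String submit(Runnable process) {
        String jobId = UUID.randomUUID().toString();
        jobs.put(jobId, pool.submit(() -> {
            try {
                process.run();
            } finally {
                // Extension point: send a Slack notification on completion here.
            }
        }));
        return jobId;
    }

    /** True once the job with the given id has finished. */
    boolean isDone(String jobId) {
        Future<?> f = jobs.get(jobId);
        return f != null && f.isDone();
    }
}
```

A controller would then call `submit(...)` and return 202 with the job id in the body. Note the caveat discussed in the accepted answer below: threads spawned this way die with the JVM, so this only works if losing in-flight jobs on restart is acceptable.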
Comment (Greg Burghardt, May 2 at 12:11): Is this microservice accepting HTTP calls, or does it have a message queue sitting in front of it?

Comment (DimitrijeCiric, May 2 at 12:41): No queue, just an HTTP API. Developers are the users of this microservice; they use the HTTP API to execute processes. Maybe the API call should add a message to a queue, which would then be used for asynchronous process execution? @GregBurghardt
2 Answers
Execution can last from 30s to 10min
This is too long to execute inside an HTTP server. HTTP servers are designed to send quick responses back to lots of requests.

You need two separate programs:
- The HTTP server/REST API: receive the request, put a message on a database/message queue/file, and respond with "OK, working on this! id: 1234".
- The worker process: poll the database/message queue/directory. When you see a new message, open it up and do your thing. When finished, put the result on a database/message queue/file.
- The REST API again, but a different endpoint: listen for requests asking "is job 1234 done yet?" and check the db/mq/result directory for the result of job 1234. Is it there? If yes, open it and send it back in the response.
*Obviously I'm giving only the most basic idea here. Sprinkle with your technologies of choice.
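The three parts above can be sketched with an in-memory queue and result map standing in for the real broker/database; a production setup would swap these for, say, RabbitMQ or a jobs table. All class and method names here are illustrative assumptions.

```java
import java.util.Map;
import java.util.Optional;
import java.util.UUID;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.LinkedBlockingQueue;

// In-memory sketch of the accept / work / status-check trio.
class JobBroker {
    record Job(String id, String payload) {}

    private final BlockingQueue<Job> queue = new LinkedBlockingQueue<>();
    private final Map<String, String> results = new ConcurrentHashMap<>();

    /** Part 1: the API accepts the request and answers "202, id: ...". */
    String accept(String payload) {
        String id = UUID.randomUUID().toString();
        queue.add(new Job(id, payload));
        return id;
    }

    /** Part 2: the worker polls for a job, does the work, stores the result. */
    void workOnce() throws InterruptedException {
        Job job = queue.take();
        String result = job.payload().toUpperCase(); // stand-in for the real process
        results.put(job.id(), result);
    }

    /** Part 3: the status endpoint — empty until the worker has finished. */
    Optional<String> result(String id) {
        return Optional.ofNullable(results.get(id));
    }
}
```

The key design property is that the request-handling path (`accept`) never blocks on the work itself, so the gateway sees a response in milliseconds regardless of how long the job takes.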
Comment (Basilevs, May 2 at 21:59): I don't get it. HTTP is designed for this. It IS upgradable to WebSockets, and it DOES have async status codes and polling headers.

Comment (Ewan, May 3 at 9:52): Not really, but post your answer; I'm sure there are more complex and interesting ways of getting around the issues. This is just the super simple approach.

Comment (DimitrijeCiric, May 3 at 11:38): Thank you, this seems like a good solution with a simple implementation. It could be extended: I could spawn a new process for each request, which could be used to keep already-running processes alive through the daily restart, and it would also increase fault tolerance.

Comment (Ewan, May 3 at 15:33): If your work is CPU bound then there's not much point spawning more processes, but you can have more computers running the worker and looking at the db/queue for new jobs.

Comment (DimitrijeCiric, May 3 at 15:54): You are right, but I believe 70% or more of it is I/O work. We fetch whole tables in batches from two databases and compare them.
From the client's perspective there are several approaches: pull, push, and pull-and-push. Pull is when the client requests the result from time to time, an implementation of the progress-bar principle. Push is when the server knows its clients and sends the result to each client that needs it. Pull-and-push is when the client first requests a result; after that first request the server remembers the client and pushes the result to it once it is available, an implementation of the "don't call us, we'll call you" principle.

From the server's perspective every detail of the client's perspective flips around: pull turns into push and vice versa.
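The pull approach is the simplest to sketch: the client polls a status check until a result appears or it gives up. Here the status check is a generic `Supplier` standing in for an HTTP GET against a hypothetical `/jobs/{id}` endpoint; the method name and parameters are illustrative.

```java
import java.util.Optional;
import java.util.function.Supplier;

// Hypothetical "pull" client: polls a status check with a fixed delay.
class PollingClient {
    /** Polls until a result appears or the attempt budget runs out. */
    static <T> Optional<T> poll(Supplier<Optional<T>> check,
                                int attempts, long delayMillis)
            throws InterruptedException {
        for (int i = 0; i < attempts; i++) {
            Optional<T> r = check.get();
            if (r.isPresent()) return r;
            Thread.sleep(delayMillis);
        }
        return Optional.empty();
    }
}
```

A push or pull-and-push variant would instead register a callback (for example a WebSocket session or webhook URL) with the server on the first request and let the server deliver the result when it is ready.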