I am currently working on an application that is fully paginated. It provides the client (a browser SPA) with a list of all the data items other users have added, in chunks of 50: when the user enters the page, the first 50 items are shown, and once they scroll to the bottom, 50 more are loaded.
The items are not static: the creator of a data item is allowed to change it at any time, be that the description, the name, etc. The exact nature of the change does not really matter.
When an update occurs, I'd like to keep other clients that may already have the updated data item loaded up to date.
Now I'm wondering how to do this. I see two options:
- Don't send any real-time update; let clients receive fresh data when they refresh the page.
- Send a notification over a WebSocket so the SPA can pick it up immediately and update the item.
I'm planning to go with the second option, but I'm a bit uncertain about how best to implement it.
Since I want the solution to be scalable, I'd rather avoid the naive implementation: sending every data item update to every client and having each client apply it only if the item happens to be loaded there. Safe to say, this won't scale well, since it is a fan-out in which many messages are unnecessary and simply discarded.
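For reference, the naive variant would look roughly like this (a minimal sketch; `ItemsHub`, the `ItemUpdated` client method, and the DTO shape are illustrative names, not anything from my actual code):

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

// Hypothetical hub; clients only need it to keep a connection open.
public class ItemsHub : Hub { }

public record ItemDto(int Id, string Name, string Description);

public class ItemUpdateBroadcaster
{
    private readonly IHubContext<ItemsHub> _hubContext;

    public ItemUpdateBroadcaster(IHubContext<ItemsHub> hubContext)
    {
        _hubContext = hubContext;
    }

    // Naive fan-out: every connected client receives every update,
    // even if it never loaded the item in question.
    public Task BroadcastUpdateAsync(ItemDto updatedItem)
    {
        return _hubContext.Clients.All.SendAsync("ItemUpdated", updatedItem);
    }
}
```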
In my opinion, the best solution would be to keep a mapping on the server that tracks which client has which data item IDs in its cache. This could be maintained either implicitly (when a client loads items, add their IDs to that client's entry in the mapping) or explicitly (when loading items, the client explicitly sends a registration for the queried items).
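With SignalR this mapping would not necessarily need to be hand-rolled, since groups already cover the pattern: subscribe a connection to a per-item group when the client loads (or explicitly registers) a page of items, and publish updates only to that item's group. A minimal sketch of the explicit-registration variant, reusing the hypothetical `ItemsHub` and `ItemDto` from the sketch above (method and group names are my own placeholders):

```csharp
using System.Collections.Generic;
using System.Threading.Tasks;
using Microsoft.AspNetCore.SignalR;

public class ItemsHub : Hub
{
    // Explicit registration: the SPA calls this with the IDs of the items
    // it just rendered, so the server knows what that client has cached.
    public async Task SubscribeToItems(IEnumerable<int> itemIds)
    {
        foreach (var id in itemIds)
        {
            await Groups.AddToGroupAsync(Context.ConnectionId, $"item-{id}");
        }
    }

    // Optional: called when items are evicted from the client-side cache.
    public async Task UnsubscribeFromItems(IEnumerable<int> itemIds)
    {
        foreach (var id in itemIds)
        {
            await Groups.RemoveFromGroupAsync(Context.ConnectionId, $"item-{id}");
        }
    }
}

public class ItemUpdateNotifier
{
    private readonly IHubContext<ItemsHub> _hubContext;

    public ItemUpdateNotifier(IHubContext<ItemsHub> hubContext)
    {
        _hubContext = hubContext;
    }

    // Only clients that registered interest in this item are notified.
    public Task NotifyItemUpdatedAsync(ItemDto updatedItem)
    {
        return _hubContext.Clients
            .Group($"item-{updatedItem.Id}")
            .SendAsync("ItemUpdated", updatedItem);
    }
}
```

One nice property of groups is that a connection is removed from its groups when it disconnects, so the server-side state cleans itself up rather than accumulating stale entries the way a hand-written mapping might.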
Now I'm wondering: is this a good approach? What is the common approach to a problem like this?
- The bottleneck is likely the number of connected clients, not the amount of messages received by those clients. The real issue is how to handle scaling when clients are connected to different servers. – Rik D, Sep 5, 2021 at 7:37
- @RikD Exactly, I think the sheer number of clients might be too much at some point in time. For scaling purposes, I am working with C# and WebSockets through SignalR, which works with an event-broker-backed queue system, so distributing the event across servers should be fine (a minimal wiring sketch follows after these comments). – nugetminer23, Sep 5, 2021 at 15:57
- OK, but your suggested solution doesn't limit the number of clients. What are you trying to optimize for? – Rik D, Sep 5, 2021 at 16:57
- @RikD I'm mostly interested in how to handle the scenario I described, in which I might want to limit the number of messages sent to the respective clients, and in whether the solution I proposed (with a cache) might be suitable. – nugetminer23, Sep 5, 2021 at 18:04
- IMO it's a bad idea to maintain server-side state for each connected client. It introduces complexity without solving any problems. – Rik D, Sep 5, 2021 at 18:53