subscribe for targeted updates, backoff needed? #372

Unanswered
kasbah asked this question in Q&A

Thanks for this lovely library!

I'm looking at an application where I want to subscribe to table updates, and I've managed to make a proof of concept with subscribe. I'm not doing full "logical replication", however, which is what subscribe, or at least wal_level = logical, seems to be for. Should I be using listen instead if I only want to subscribe to a very targeted and limited number of updates?

In the listen section of the readme it says:

The connection will automatically reconnect according to a backoff reconnection pattern to not overload the database server.

Is that the case for subscribe too? Is that something I should worry about?


Replies: 1 comment · 2 replies


Hi @kasbah. Sorry, this one got lost in the void.

Either will do, but there is less ceremony around using subscribe, since you don't need to add triggers that call notify for you.
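For illustration, here is a rough sketch of that ceremony with postgres.js; the table, channel, and trigger names are made up, not taken from the thread:

```js
import postgres from 'postgres'

const sql = postgres()

// The "ceremony": a trigger has to call notify for every change you care about.
await sql`
  CREATE OR REPLACE FUNCTION notify_repository_insert() RETURNS trigger AS $$
  BEGIN
    PERFORM pg_notify('repository_created', row_to_json(NEW)::text);
    RETURN NEW;
  END;
  $$ LANGUAGE plpgsql
`

await sql`
  CREATE TRIGGER repository_created_notify
  AFTER INSERT ON repository
  FOR EACH ROW EXECUTE FUNCTION notify_repository_insert()
`

// The application side is then a plain listen on the channel.
await sql.listen('repository_created', payload => {
  const repo = JSON.parse(payload)
  console.log('new repository', repo.id)
})
```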

You can create a publication that only publishes the limited set of tables you'd like updates for, so subscribe could also be fine, e.g. CREATE PUBLICATION sometables FOR TABLE users, organizations.
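A minimal sketch of that route, assuming the publications connection option and the subscribe pattern syntax from the readme (the publication and table names are placeholders):

```js
import postgres from 'postgres'

// subscribe uses logical replication, so the server needs wal_level = logical.
// Point it at a custom publication instead of the default 'alltables'.
const sql = postgres({ publications: 'sometables' })

// One-time setup: publish only the tables you want updates for.
await sql`CREATE PUBLICATION sometables FOR TABLE users, organizations`

// Receive row changes for one of those tables ('*' = insert, update and delete).
const { unsubscribe } = await sql.subscribe('*:users', (row, { command }) => {
  console.log(command, row)
})

// Later, when the updates are no longer needed:
unsubscribe()
```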

I think it depends on your specific case, so if you can describe it, we can get closer to the optimal solution?

2 replies

That's alright; I'm extremely grateful for any input you can give. Our code base is actually open source, but it may be a bit too much to ask you to spend time understanding all of it.

The use case is hooking into the database of Gitea (an open source GitHub clone that can use Postgres as its DB). We essentially want to capture events to coordinate a background job queue with what's happening in the DB. Here's our simple wrapper client, which encapsulates all the functionality we need at the moment.

As you can see, we basically use (abuse?) the subscribe functionality to wait for one-time events using the once function I wrote. This means we keep creating subscriptions and removing them each time a repository is created in Gitea. I was wondering whether this might be a problem if we have tens or hundreds of one-off subscriptions being created and deleted again within a short time span. So far it's been working great, though.
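The once helper itself isn't shown in the thread; a minimal sketch of that pattern, assuming postgres.js's subscribe/unsubscribe API plus a made-up predicate and timeout, might look like this:

```js
// Hypothetical once helper (not the actual wrapper): create a subscription,
// resolve on the first row change matching the predicate, then remove it again.
async function once(sql, pattern, predicate = () => true, timeoutMs = 30_000) {
  let settle
  const matched = new Promise(resolve => { settle = resolve })

  const { unsubscribe } = await sql.subscribe(pattern, (row, info) => {
    if (predicate(row, info))
      settle(row)
  })

  const timer = setTimeout(() => settle(undefined), timeoutMs)
  try {
    return await matched   // undefined means the wait timed out
  } finally {
    clearTimeout(timer)
    unsubscribe()          // drop the one-off subscription either way
  }
}

// Usage sketch: wait for a specific repository row to appear (column name assumed).
const repo = await once(sql, 'insert:repository', row => row.name === 'my-project')
```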


Seems like we could limit the publication to specific tables, but it's probably not necessary?
