core/txpool/blobpool: allow gaps in blobpool #32717
Conversation
core/txpool/blobpool/blobpool.go
Outdated
It's attackable: an adversary can send nonce-gapped blob txs from multiple addresses, causing an out-of-memory panic.
At the very least, we should place a cap on the total amount.
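For illustration, a minimal sketch of the kind of total cap being suggested; all names and the budget constant are hypothetical, not the PR's actual code:

```go
package blobpool

import (
	"errors"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// Hypothetical total-size cap on the gapped (reorder) buffer, so nonce-gapped
// txs from many addresses cannot grow memory without bound.
const maxGappedBufferBytes = uint64(16 * 1024 * 1024) // assumed overall budget

var errGappedBufferFull = errors.New("gapped buffer full")

type gappedBuffer struct {
	txs        map[common.Hash]*types.Transaction
	totalBytes uint64
}

// add refuses new gapped txs once the overall budget is exhausted, instead of
// buffering them and risking an out-of-memory panic.
func (b *gappedBuffer) add(tx *types.Transaction) error {
	if b.totalBytes+tx.Size() > maxGappedBufferBytes {
		return errGappedBufferFull
	}
	b.txs[tx.Hash()] = tx
	b.totalBytes += tx.Size()
	return nil
}
```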
I've added the memory cap below
core/txpool/blobpool/blobpool.go
Outdated
You can remove the lock from `add` and implement a wrapper `addLocked` that handles the lock management.
Internally, you can then use the lock-free `add` to insert the gapped txs here.
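A minimal sketch of the suggested split, assuming a `sync.RWMutex` on the pool; apart from the `add`/`addLocked` names from the suggestion, the details are illustrative:

```go
package blobpool

import (
	"sync"

	"github.com/ethereum/go-ethereum/core/types"
)

// BlobPool is trimmed down to the single field needed for the sketch.
type BlobPool struct {
	lock sync.RWMutex
}

// addLocked is the wrapper used by the public Add path: it acquires the pool
// lock and delegates to the lock-free add.
func (p *BlobPool) addLocked(tx *types.Transaction) error {
	p.lock.Lock()
	defer p.lock.Unlock()
	return p.add(tx)
}

// add assumes the caller already holds p.lock, so the gap-resolution code can
// reuse it to insert previously gapped txs without deadlocking on the mutex.
func (p *BlobPool) add(tx *types.Transaction) error {
	_ = tx // validation and insertion would go here
	return nil
}
```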
good point, let me check it out
Force-pushed fab71ef to 7c8cfb6
I think replacement is not handled correctly if both txs are waiting in the reorder queue. I have to revisit that, and we should probably add a test for it as well.
I have been thinking about what form the gapped blobpool should take in the evolution of the blobpool. In my view, we have two possible options. The first is to store a transaction in the gapped set only when "nonce too high" is the sole rejection reason. The second is to store a transaction in the gapped set when "nonce too high" is one of the rejection reasons (so there can be other reasons). This PR seems to be implementing the second option, and given the current state of the blobpool, I think this is reasonable.
If cells are delivered separately from transactions in the future, the first option can be beneficial in some cases in terms of bandwidth. If a transaction in the buffer cannot be included even after the gap is resolved, then by the time the client discovers it needs to be discarded (when all cells are collected and revalidation occurs through the add routine below), it will already have consumed bandwidth to fetch its cells. The first design would help reduce this overhead.
However, I am not certain this first option is necessary when we have this gapped buffer, since bandwidth can easily be wasted if someone submits gapped transactions that pass the other validations and never fill the gap. I am also not sure how often this can happen or why the submitter would do such a thing.
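To make the two options concrete, a hedged illustration of the acceptance predicates, assuming validation reports its rejection reasons as a list of errors (a simplification for illustration, not the actual blobpool interface):

```go
package blobpool

import (
	"errors"

	"github.com/ethereum/go-ethereum/core"
)

// Option 1: buffer only when the nonce gap is the sole rejection reason.
func shouldBufferOption1(reasons []error) bool {
	return len(reasons) == 1 && errors.Is(reasons[0], core.ErrNonceTooHigh)
}

// Option 2 (what this PR appears to implement): buffer whenever a nonce gap is
// among the reasons, even if other checks might still fail once the gap closes.
func shouldBufferOption2(reasons []error) bool {
	for _, reason := range reasons {
		if errors.Is(reason, core.ErrNonceTooHigh) {
			return true
		}
	}
	return false
}
```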
In my prototype, the buffer currently stores transactions that pass all validation checks but are simply missing their cells. I still need to do some debugging, but the overall structure will remain like this.
It also handles two replacement cases: one for the replacement of a buffered tx and the other for the replacement of a pooled tx.
I've selected option 2 (there can be other reasons) here, since that's the option that is a pure reorder buffer, acting before any real blobpool logic. So, at least in theory, that's the least invasive version.
However, I can imagine moving to a more complex evaluation before storing the gapped transaction. The queue in the legacypool already does a more elaborate evaluation; I think we should check that as well before deciding on future evolution.
Since we add transactions with a delay, we can't have this directly in `Add`, or we would need to set it up to handle a feed itself. It is easier to do it directly in `add`. The downside is that we change the granularity, creating an event for every single blob tx.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
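For illustration, a sketch of the granularity change described in this commit message; the `insertFeed` field name follows upstream conventions, but everything here is an assumption rather than the exact code:

```go
package blobpool

import (
	"github.com/ethereum/go-ethereum/core"
	"github.com/ethereum/go-ethereum/core/types"
	"github.com/ethereum/go-ethereum/event"
)

// BlobPool is trimmed down to the event feed used by the sketch.
type BlobPool struct {
	insertFeed event.Feed // subscribers receive core.NewTxsEvent
}

// add announces each blob tx individually: buffered (gapped) txs are only
// inserted once their gap closes, possibly long after Add returned, so the
// event cannot easily be batched at the Add level.
func (p *BlobPool) add(tx *types.Transaction) error {
	// ... validation, gap buffering and insertion ...
	p.insertFeed.Send(core.NewTxsEvent{Txs: []*types.Transaction{tx}})
	return nil
}
```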
The tx_fetcher uses txpool.Has to decide what to fetch. If we already have a tx in the reorder buffer, it is better to expose it.
Signed-off-by: Csaba Kiraly <csaba.kiraly@gmail.com>
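A rough sketch of what exposing the buffered txs through `Has` could look like; the field names are hypothetical, not the actual blobpool layout:

```go
package blobpool

import (
	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

// BlobPool is reduced to the two lookups relevant for the sketch.
type BlobPool struct {
	pooled map[common.Hash]*types.Transaction // fully validated, pooled txs
	gapped map[common.Hash]*types.Transaction // reorder (gapped) buffer
}

// Has reports whether the pool already knows a transaction, counting the
// reorder buffer as well, so the tx_fetcher does not schedule a re-fetch.
func (p *BlobPool) Has(hash common.Hash) bool {
	if _, ok := p.pooled[hash]; ok {
		return true
	}
	_, ok := p.gapped[hash]
	return ok
}
```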
Force-pushed 87e2526 to 7da8e27
Allow the blobpool to accept blobs out of nonce order
Previously, we were dropping blobs that arrived out of order. However, since fetch decisions are made on the receiver side, out-of-order delivery can happen, leading to inefficiencies.
This PR:
The size of the in-memory storage is currently limited per account, following a slow-start logic.
An overall size limit and a TTL are also enforced for DoS protection.
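A rough sketch of how the limits described above could fit together; all names, constants, and the slow-start growth rule here are assumptions for illustration, not the PR's actual code:

```go
package blobpool

import (
	"time"

	"github.com/ethereum/go-ethereum/common"
	"github.com/ethereum/go-ethereum/core/types"
)

const (
	maxGappedBytesTotal = uint64(16 * 1024 * 1024) // assumed overall in-memory budget
	gappedTTL           = 5 * time.Minute          // assumed lifetime of unresolved gaps
	maxPerAccountCap    = 16                       // assumed ceiling for the slow-start cap
)

type gappedEntry struct {
	tx       *types.Transaction
	received time.Time
}

type gappedBuffer struct {
	perAccountCap map[common.Address]int // slow start: grows as an account's gaps resolve
	entries       map[common.Address][]gappedEntry
	totalBytes    uint64
}

// admit applies the per-account slow-start cap and the overall byte limit.
func (b *gappedBuffer) admit(from common.Address, tx *types.Transaction) bool {
	limit := b.perAccountCap[from]
	if limit == 0 {
		limit = 1 // a new account may buffer only a single gapped tx at first
	}
	if len(b.entries[from]) >= limit || b.totalBytes+tx.Size() > maxGappedBytesTotal {
		return false
	}
	b.entries[from] = append(b.entries[from], gappedEntry{tx: tx, received: time.Now()})
	b.totalBytes += tx.Size()
	return true
}

// onResolved grows the account's cap each time one of its gaps resolves,
// mirroring the slow-start idea: well-behaved accounts earn more room.
func (b *gappedBuffer) onResolved(from common.Address) {
	limit := b.perAccountCap[from]
	if limit == 0 {
		limit = 1
	}
	if limit < maxPerAccountCap {
		b.perAccountCap[from] = limit * 2
	}
}

// expire drops entries older than the TTL, releasing their memory budget.
func (b *gappedBuffer) expire(now time.Time) {
	for from, list := range b.entries {
		kept := list[:0]
		for _, e := range list {
			if now.Sub(e.received) < gappedTTL {
				kept = append(kept, e)
			} else {
				b.totalBytes -= e.tx.Size()
			}
		}
		b.entries[from] = kept
	}
}
```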