
core/txpool/blobpool: allow gaps in blobpool #32717


Open
cskiraly wants to merge 14 commits into ethereum:master from cskiraly:blobpool-gapped-queue

Conversation

@cskiraly (Contributor) commented Sep 23, 2025 (edited)

Allow the blobpool to accept blob transactions out of nonce order.

Previously, we were dropping blob transactions that arrived out of nonce order. However, since fetch decisions are made on the receiver side, out-of-order delivery can happen, leading to inefficiencies.

This PR:

  • adds an in-memory blob tx storage, similar to the queue in the legacypool
  • allows a limited number of received txs to be added to this storage per account
  • keeps txs waiting in the gapped queue unprocessed and unpropagated until they are unblocked by adding the previous nonce to the blobpool

The size of the in-memory storage is currently limited per account, following a slow-start logic. An overall size limit and a TTL are also enforced for DoS protection.
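For illustration, here is a minimal standalone sketch of such a reorder buffer with a per-account slow-start allowance, an overall memory cap, and a TTL. All type names, limits, and the exact slow-start rule are placeholders for this sketch, not the PR's actual code (which keys the buffer by common.Address and stores *types.Transaction, as in the diff below).

// Minimal standalone sketch of the reorder-buffer idea described above.
// All names and limits here are illustrative, not the PR's real fields.
package main

import (
	"fmt"
	"time"
)

type gappedTx struct {
	nonce   uint64
	size    uint64    // size accounted against the global cap
	arrived time.Time // used to enforce the TTL
}

type gappedBuffer struct {
	perAccount map[string][]gappedTx // keyed by sender address (hex string in this sketch)
	limit      map[string]int        // slow-start style per-account allowance
	totalSize  uint64                // memory currently held by gapped txs
	maxTotal   uint64                // overall cap for DoS protection
	ttl        time.Duration         // gapped txs older than this are dropped
}

// add buffers an out-of-order tx, enforcing the global and per-account caps.
func (b *gappedBuffer) add(from string, tx gappedTx) bool {
	if b.totalSize+tx.size > b.maxTotal {
		return false // global memory cap reached
	}
	if lim, ok := b.limit[from]; ok && len(b.perAccount[from]) >= lim {
		return false // per-account allowance exhausted
	}
	if _, ok := b.limit[from]; !ok {
		b.limit[from] = 1 // slow start: a single gapped tx at first
	}
	b.perAccount[from] = append(b.perAccount[from], tx)
	b.totalSize += tx.size
	return true
}

// promoted grows the per-account allowance when a gapped tx from this account
// was successfully moved into the pool (one guess at the slow-start rule).
func (b *gappedBuffer) promoted(from string, maxPerAccount int) {
	if b.limit[from] < maxPerAccount {
		b.limit[from]++
	}
}

// expire drops gapped txs that exceeded the TTL.
func (b *gappedBuffer) expire(now time.Time) {
	for from, txs := range b.perAccount {
		kept := txs[:0]
		for _, tx := range txs {
			if now.Sub(tx.arrived) < b.ttl {
				kept = append(kept, tx)
			} else {
				b.totalSize -= tx.size
			}
		}
		if len(kept) == 0 {
			delete(b.perAccount, from)
			delete(b.limit, from)
		} else {
			b.perAccount[from] = kept
		}
	}
}

func main() {
	b := &gappedBuffer{
		perAccount: make(map[string][]gappedTx),
		limit:      make(map[string]int),
		maxTotal:   1 << 20,
		ttl:        time.Minute,
	}
	fmt.Println(b.add("0xabc", gappedTx{nonce: 5, size: 1024, arrived: time.Now()})) // true
	fmt.Println(b.add("0xabc", gappedTx{nonce: 6, size: 1024, arrived: time.Now()})) // false: allowance still 1
	b.promoted("0xabc", 16)                                                          // allowance grows after a promotion
	fmt.Println(b.add("0xabc", gappedTx{nonce: 6, size: 1024, arrived: time.Now()})) // true now
	b.expire(time.Now().Add(2 * time.Minute))                                        // everything drops after the TTL
}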

stored uint64 // Useful data size of all transactions on disk
limbo *limbo // Persistent data store for the non-finalized blobs

gapped map[common.Address][]*types.Transaction // Transactions that are currently gapped (nonce too high)
@rjl493456442 (Member) commented Sep 24, 2025

It's attackable.

The adversary can send nonce-gapped blob txs from multiple addresses, causing an out-of-memory panic.

At least we should place a cap on the total amount.

@cskiraly (Contributor, Author) replied

I've added the memory cap below

delete(p.gapped, from)
}
go func() {
if err := p.add(tx); err == nil {
@rjl493456442 (Member) commented Sep 24, 2025

You can remove the lock in `add` and implement a wrapper `addLocked` that handles the locking.

Internally you can then use the lock-free `add` to insert the gapped txs here.
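A sketch of the locking split suggested here, using simplified stand-in types: a lock-free add that assumes the mutex is already held, an addLocked wrapper for external callers, and a promotion path that reuses the lock-free variant while holding the lock. Only the pattern is shown; the real blobpool's methods and fields differ.

// Sketch of the suggested lock split; the pool type is a stand-in.
package main

import "sync"

type tx struct{ nonce uint64 }

type pool struct {
	mu     sync.Mutex
	pooled map[uint64]tx
	gapped []tx
}

// add assumes p.mu is already held by the caller.
func (p *pool) add(t tx) error {
	p.pooled[t.nonce] = t
	return nil
}

// addLocked is the external entry point: it takes the lock and delegates.
func (p *pool) addLocked(t tx) error {
	p.mu.Lock()
	defer p.mu.Unlock()
	return p.add(t)
}

// promoteGapped drains the gapped buffer while already holding the lock,
// reusing the lock-free add instead of re-acquiring the mutex.
func (p *pool) promoteGapped() {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, t := range p.gapped {
		_ = p.add(t) // lock already held, so call the lock-free variant
	}
	p.gapped = nil
}

func main() {
	p := &pool{pooled: make(map[uint64]tx)}
	_ = p.addLocked(tx{nonce: 1})
	p.gapped = append(p.gapped, tx{nonce: 2})
	p.promoteGapped()
}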

@cskiraly (Contributor, Author) replied

Good point, let me check it out.

cskiraly marked this pull request as ready for review September 25, 2025 07:26
@cskiraly (Contributor, Author) commented

I think replacement is not handled correctly if both txs were waiting in the reorder queue. I have to revisit that, and maybe we should add a test for it as well.
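For illustration, a small sketch of how same-nonce replacement inside such a reorder buffer could work (replace only on a strictly higher tip); the types and the fee-bump rule are assumptions, not the PR's current behaviour.

// Illustrative sketch of same-nonce replacement inside a gapped buffer.
package main

import "fmt"

type bufferedTx struct {
	nonce uint64
	tip   uint64 // priority fee used to decide replacement in this sketch
}

// replaceOrAppend inserts tx into the buffer, replacing a same-nonce entry
// only if the new tx pays a strictly higher tip.
func replaceOrAppend(buf []bufferedTx, tx bufferedTx) ([]bufferedTx, bool) {
	for i, old := range buf {
		if old.nonce == tx.nonce {
			if tx.tip <= old.tip {
				return buf, false // underpriced replacement, keep the old tx
			}
			buf[i] = tx
			return buf, true
		}
	}
	return append(buf, tx), true
}

func main() {
	buf := []bufferedTx{{nonce: 7, tip: 10}}
	buf, ok := replaceOrAppend(buf, bufferedTx{nonce: 7, tip: 12})
	fmt.Println(ok, buf) // true [{7 12}]
}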

addUnderpricedMeter.Mark(1)
case errors.Is(err, core.ErrNonceTooLow):
addStaleMeter.Mark(1)
case errors.Is(err, core.ErrNonceTooHigh):
@healthykim (Contributor) commented Sep 25, 2025 (edited)

I have been thinking about what form the gapped blobpool should take as the blobpool evolves. In my view, we have two possible options. The first is to store a transaction in the gapped set only when "nonce too high" is the sole rejection reason. The second is to store a transaction in the gapped set when "nonce too high" is one of the rejection reasons (so there can be other reasons as well). This PR seems to implement the second option, and given the current state of the blobpool, I think this is reasonable.

If cells are delivered separately from transactions in the future, the first option can be beneficial in some cases in terms of bandwidth. If a transaction in the buffer cannot be included even after the gap is resolved, then by the time the client discovers it needs to be discarded (when all cells are collected and revalidation occurs through the add routine below), it will already have consumed bandwidth to fetch its cells. The first design would help reduce this overhead.

However, I am not certain the first option is necessary once we have this gapped buffer, since bandwidth can easily be wasted anyway if someone submits gapped transactions that pass the other validations and never fill the gap. I am also not sure how often this can happen and why a submitter would do such a thing.
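To make the two policies concrete, here is a hedged sketch of the decision they imply. The validation functions and errNonceTooHigh are stand-ins (the latter playing the role of core.ErrNonceTooHigh); only the buffering decision is the point.

// Sketch contrasting the two buffering policies discussed above.
package main

import (
	"errors"
	"fmt"
)

var errNonceTooHigh = errors.New("nonce too high")

type txn struct {
	nonce uint64
	feeOK bool // stands in for checks that run after the nonce check
}

// validate mimics a validation pipeline that stops at the first failure.
func validate(t txn, expectedNonce uint64) error {
	if t.nonce > expectedNonce {
		return errNonceTooHigh
	}
	if !t.feeOK {
		return errors.New("fee too low")
	}
	return nil
}

// validateIgnoringNonce runs every check except the nonce-ordering one.
func validateIgnoringNonce(t txn) error {
	if !t.feeOK {
		return errors.New("fee too low")
	}
	return nil
}

// shouldBuffer decides whether a rejected tx goes into the gapped set.
// Option 2 (this PR): buffer whenever nonce-too-high is reported, even though
// later checks were never reached. Option 1: buffer only when nonce-too-high
// is the sole reason, i.e. everything else already passes.
func shouldBuffer(t txn, expectedNonce uint64, soleReasonOnly bool) bool {
	if !errors.Is(validate(t, expectedNonce), errNonceTooHigh) {
		return false
	}
	if soleReasonOnly {
		return validateIgnoringNonce(t) == nil
	}
	return true
}

func main() {
	t := txn{nonce: 9, feeOK: false}       // gapped and underpriced
	fmt.Println(shouldBuffer(t, 5, false)) // option 2: true, buffered despite the low fee
	fmt.Println(shouldBuffer(t, 5, true))  // option 1: false, the fee check also fails
}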

@healthykim (Contributor) commented Sep 25, 2025

https://github.com/ethereum/go-ethereum/compare/master...healthykim:go-ethereum:bs/cell-blobpool/sparse?expand=1

In my prototype, the buffer currently stores transactions that pass all validation checks but are simply missing their cells. I still need to do some debugging, but the overall structure will remain like this.
It also handles two replacement cases: one for the replacement of a buffered tx and the other for the replacement of a pooled tx.

@cskiraly (Contributor, Author) replied

I've selected option 2 (there can be other reasons) here, since that's the option that acts as a pure reorder buffer, sitting before any real blobpool logic. So, at least in theory, it's the least invasive version.

However, I can imagine moving to a more complex evaluation before storing the gapped transaction. The queue in the legacypool does a more complex evaluation. I think we should check that as well before deciding on the future evolution.

cskiraly added 13 commits October 24, 2025 11:35 (all signed off by Csaba Kiraly <csaba.kiraly@gmail.com>). Two of the commit messages explain design choices:

Since we add transactions with a delay, we can't have this directly in `Add`, or we would need to set it up to handle a feed itself. It is easier to do it directly in `add`. The downside is that we change the granularity, creating an event for every single blob tx.

The tx_fetcher uses txpool.Has to decide what to fetch. If we already have a tx in the reorder buffer, it is better to expose it.
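As a sketch of the idea in that last commit message, a Has lookup can report transactions sitting in the reorder buffer as known, so the fetcher does not request them again; the types below are simplified stand-ins for the blobpool's real ones.

// Sketch: report a hash as known if it is either pooled or gapped.
package main

import "fmt"

type hash [32]byte

type pool struct {
	pooled map[hash]struct{} // txs fully accepted into the pool
	gapped map[hash]struct{} // txs waiting in the reorder buffer
}

// Has mirrors txpool.Has: true if the tx is tracked anywhere, including the
// gapped buffer, so the fetcher skips re-downloading it.
func (p *pool) Has(h hash) bool {
	if _, ok := p.pooled[h]; ok {
		return true
	}
	_, ok := p.gapped[h]
	return ok
}

func main() {
	p := &pool{
		pooled: map[hash]struct{}{},
		gapped: map[hash]struct{}{{0x01}: {}},
	}
	fmt.Println(p.Has(hash{0x01})) // true: buffered but not yet pooled
	fmt.Println(p.Has(hash{0x02})) // false: unknown tx
}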

Reviewers

@rjl493456442 left review comments
@healthykim left review comments

At least 1 approving review is required to merge this pull request.
