feat: add `waitAsRateLimit` option on `http` transport #3698
Conversation
⚠️ No Changeset found
Latest commit: f7e6714
Merging this PR will not cause a version bump for any packages. If these changes should not result in a new version, you're good to go. If these changes should result in a version bump, you need to add a changeset.
This PR includes no changesets
When changesets are added to this PR, you'll see the packages that this PR includes changesets for and the associated semver types.
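For reference, a changeset is a small markdown file committed under `.changeset/` in the repo. A minimal sketch of one for this PR might look like the following (the `minor` bump level is an assumption based on this being a new feature):

```md
---
"viem": "minor"
---

Added `waitAsRateLimit` option to the `http` transport's `batch` settings.
```

The Changesets tooling reads the YAML front matter to decide which packages to bump, and by how much, when the next release is cut.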
@iyarsius is attempting to deploy a commit to the Wevm Team on Vercel.
A member of the Team first needs to authorize it.
Thanks, will review!
Force-pushed from 129b947 to 9ad3509
Force-pushed from d78cf00 to b31cf2a
This PR introduces an option to enable a batch queue system by setting `batch.waitAsRateLimit` to `true`. I was searching for a solution until I discovered #1305, which motivated me to explore ways to update the HTTP client.

The core idea is to modify the `batchScheduler` by changing the `shouldSplitBatch` parameter to a `getBatchSize` parameter. Once the scheduler can determine the batch size, it becomes capable of queuing requests. I also updated multicall to use `getBatchSize` with behavior similar to the previous version.

The main drawback is that if too many requests are queued, the queue can keep growing, potentially causing delays or hitting cache limits. I have added a warning about this in the documentation.
Still, for users interacting with rate-limited endpoints, this option could be extremely beneficial for managing those interactions.
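Under the option this PR proposes, enabling the queue from user code might look like the sketch below. The endpoint URL is a placeholder and the `batchSize`/`wait` values are illustrative; only `batch.waitAsRateLimit` is the new flag:

```ts
import { createPublicClient, http } from 'viem'
import { mainnet } from 'viem/chains'

const client = createPublicClient({
  chain: mainnet,
  transport: http('https://rpc.example.com', {
    batch: {
      batchSize: 10,         // existing option: max requests per JSON-RPC batch
      wait: 200,             // existing option: ms to wait before dispatching
      waitAsRateLimit: true, // proposed option: queue overflow instead of splitting
    },
  }),
})
```

With this enabled, requests beyond `batchSize` would be held in the queue and dispatched on later `wait` intervals, which is why an ever-growing queue is the documented trade-off.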