
Add max size limit to requests for bulk import #996


Open
jpr5 wants to merge 1 commit into elastic:main from jpr5:import_with_max_size

Conversation

jpr5 commented Jul 19, 2021

This commit adds a new parameter, `max_size` (in bytes), which enforces an
upper limit on the overall HTTP POST size. This is useful when trying to
maximize bulk import speed by reducing roundtrips to retrieve and send data.

This is needed in scenarios where there is no control over Elasticsearch's
maximum HTTP request payload size. For example, AWS's Elasticsearch offering
has either a 10MiB or 100MiB HTTP request payload size limit.

`batch_size` is good for bounding local runtime memory usage, but when
indexing large sets of big objects, it's entirely possible to hit a service
provider's underlying request size limit and break the import mid-run. This
is even worse when `force` is true: the index is then left in an incomplete
state, with no obvious value to lower `batch_size` to in order to sneak under
the limit.

`max_size` defaults to `10_000_000`, to catch the worst-case scenario on AWS.

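As a rough illustration of the behavior described above, here is a minimal
Python sketch of chunking a bulk payload by both a document count
(`batch_size`) and a byte cap (`max_size`). This is not the PR's code; the
function name `chunk_bulk` and the serialization details are hypothetical,
and only the two parameter names and the `10_000_000` default come from the
description above.

```python
# Hedged sketch: split a stream of documents into bulk-API chunks that stay
# under both a document-count limit (batch_size) and a byte limit (max_size).
import json
from typing import Iterable, Iterator, List


def chunk_bulk(docs: Iterable[dict],
               batch_size: int = 1000,
               max_size: int = 10_000_000) -> Iterator[List[str]]:
    """Yield lists of newline-delimited bulk lines, each list under both limits."""
    chunk: List[str] = []
    chunk_bytes = 0
    for doc in docs:
        # One action/metadata line plus one source line per document, as in
        # the Elasticsearch _bulk newline-delimited JSON format.
        action = json.dumps({"index": {"_id": doc.get("id")}}) + "\n"
        source = json.dumps(doc) + "\n"
        doc_bytes = len(action.encode()) + len(source.encode())

        # Flush the current chunk *before* adding this document if doing so
        # would exceed either the document-count or the byte-size limit.
        if chunk and (len(chunk) // 2 >= batch_size
                      or chunk_bytes + doc_bytes > max_size):
            yield chunk
            chunk, chunk_bytes = [], 0

        chunk.extend([action, source])
        chunk_bytes += doc_bytes

    if chunk:
        yield chunk
```

One design point this makes visible: the flush happens before the document
that would push the payload over `max_size` is added, so no single request
exceeds the provider's cap (assuming no individual document is larger than
`max_size` on its own).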

cla-checker-service bot commented Jul 19, 2021

💚 CLA has been signed


jpr5 commented Jul 19, 2021

Signed the agreement.


jpr5 commented Apr 21, 2023

Well, I'm willing to look at/fix the failures, but I can't see the test failure details anymore...
