
Can I partition an Iceberg table on an ID column whose values range into the millions, or is bucketing the better option?
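
For context, this is a minimal sketch of how I imagine creating the table with a bucket transform on the ID using pyiceberg (the catalog name, namespace, schema, and bucket count below are placeholders, not my actual setup):

    from pyiceberg.catalog import load_catalog
    from pyiceberg.partitioning import PartitionField, PartitionSpec
    from pyiceberg.schema import Schema
    from pyiceberg.transforms import BucketTransform
    from pyiceberg.types import LongType, NestedField, StringType

    # Placeholder catalog name; connection details come from .pyiceberg.yaml
    catalog = load_catalog("my_catalog")

    schema = Schema(
        NestedField(field_id=1, name="id", field_type=LongType(), required=True),
        NestedField(field_id=2, name="payload", field_type=StringType(), required=False),
    )

    # Hash the high-cardinality ID into 16 buckets instead of
    # partitioning on raw ID values, which would create millions of partitions
    spec = PartitionSpec(
        PartitionField(source_id=1, field_id=1000,
                       transform=BucketTransform(num_buckets=16), name="id_bucket")
    )

    catalog.create_table("db.events", schema=schema, partition_spec=spec)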

I am pushing 40-50 million records from a SQL database, which has an ID identity column, into the table using PyFlink. After that, I want to stream data into the same table from Kafka.
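
For the streaming part, this is roughly what I have in mind in PyFlink, assuming a Hive-backed Iceberg catalog and a JSON Kafka topic (broker, topic, warehouse path, and table names are all placeholders):

    from pyflink.table import EnvironmentSettings, TableEnvironment

    t_env = TableEnvironment.create(EnvironmentSettings.in_streaming_mode())
    # The Iceberg sink commits files on checkpoints, so checkpointing must be on
    t_env.get_config().set("execution.checkpointing.interval", "60 s")

    # Placeholder Hive-backed Iceberg catalog
    t_env.execute_sql("""
        CREATE CATALOG iceberg_cat WITH (
            'type' = 'iceberg',
            'catalog-type' = 'hive',
            'uri' = 'thrift://metastore:9083',
            'warehouse' = 's3://warehouse-bucket/iceberg'
        )
    """)

    # Placeholder Kafka source with the same shape as the bulk-loaded rows
    t_env.execute_sql("""
        CREATE TEMPORARY TABLE kafka_events (
            id BIGINT,
            payload STRING
        ) WITH (
            'connector' = 'kafka',
            'topic' = 'events',
            'properties.bootstrap.servers' = 'broker:9092',
            'scan.startup.mode' = 'latest-offset',
            'format' = 'json'
        )
    """)

    # Continuously append the Kafka stream into the same bucketed Iceberg table
    t_env.execute_sql(
        "INSERT INTO iceberg_cat.db.events SELECT id, payload FROM kafka_events"
    ).wait()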

asked May 7, 2025 at 17:24
  • Please provide enough code so others can better understand or reproduce the problem. Commented May 9, 2025 at 11:39
