Can I partition an Iceberg table on an ID column whose values range into the millions, or is bucketing the better option?
I am pushing 40–50 million records from a SQL database (the table has an identity ID column) using PyFlink, and afterwards I want to stream data into the same table from Kafka.
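For context on what I have tried: identity partitioning on the raw ID would create millions of tiny partitions, so I am looking at a bucket transform instead. Below is a minimal sketch of how I would define such a table with PyIceberg (the catalog name `my_catalog`, the table name `db.my_table`, the `payload` column, and the bucket count of 16 are placeholders I chose for illustration, not settings I have validated):

```python
# Sketch: create an Iceberg table bucketed on the ID column with PyIceberg.
# Catalog name, table name, schema, and bucket count are assumptions.
from pyiceberg.catalog import load_catalog
from pyiceberg.schema import Schema
from pyiceberg.types import NestedField, LongType, StringType
from pyiceberg.partitioning import PartitionSpec, PartitionField
from pyiceberg.transforms import BucketTransform

catalog = load_catalog("my_catalog")  # resolved from ~/.pyiceberg.yaml or env vars

schema = Schema(
    NestedField(field_id=1, name="id", field_type=LongType(), required=True),
    NestedField(field_id=2, name="payload", field_type=StringType(), required=False),
)

# bucket(16, id): rows are hashed on id into 16 stable buckets, so both the
# initial 40-50M backfill and the later Kafka stream land in the same layout.
spec = PartitionSpec(
    PartitionField(
        source_id=1,          # field_id of the "id" column above
        field_id=1000,        # partition field ids conventionally start at 1000
        transform=BucketTransform(num_buckets=16),
        name="id_bucket",
    )
)

catalog.create_table("db.my_table", schema=schema, partition_spec=spec)
```

My understanding is that once the table exists, the PyFlink job writes to it through the Iceberg Flink connector, and the writers derive partition values from the table's spec, so no partitioning logic is needed on the Flink side. Is bucketing the right choice here, and how should I size the bucket count for this volume?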