I have a Flink 1.14 backfill batch job that is large enough to be very resource-intensive and hard to run to completion without spurious failures (network glitches, node scheduling failures, running out of disk for intermediate state, etc.).
It would make sense to me to run the job in manual increments to limit the execution time and resource requirements, something like:
inputs0 -> 'job -increment inputs0' -> state0
inputs1 -> 'job -increment inputs1 state0' -> state1
inputs2 -> 'job -increment inputs2 state1' -> state2
...
inputsY -> 'job -finalize inputsY stateX' -> stateY, outputs
where stateN is repeatedly looped back into the next increment until the very last one, where we finalize to produce the actual outputs.
Much of the logic is shared with a streaming version of the same job, and there are a significant number of stateful keyed operators in the pipeline.
This state "spilling" reminds me of save-/checkpoints, which are not supported in batch jobs.
But it does feel like Flink has enough mechanics available to extract all operator state into a serializable form and rehydrate it on the next run, even if there's no orchestrated API for that right now.
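There is at least adjacent machinery: the State Processor API in 1.14 can already move keyed state between plain DataSets and a savepoint. A minimal write-side sketch, where the Tuple2 record layout, the operator uid, the state name, and the max parallelism are all assumptions:

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.hashmap.HashMapStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

public class BootstrapHandoverSavepoint {

    /** Seeds one ValueState entry per spilled (key, value) pair. */
    static class CounterBootstrapper
            extends KeyedStateBootstrapFunction<String, Tuple2<String, Long>> {
        private transient ValueState<Long> counter;

        @Override
        public void open(Configuration parameters) {
            counter = getRuntimeContext().getState(
                new ValueStateDescriptor<>("counter", Long.class));
        }

        @Override
        public void processElement(Tuple2<String, Long> spilled, Context ctx) throws Exception {
            counter.update(spilled.f1);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for the state spilled by the previous increment.
        DataSet<Tuple2<String, Long>> spilled =
            env.fromElements(Tuple2.of("a", 1L), Tuple2.of("b", 2L));

        BootstrapTransformation<Tuple2<String, Long>> bootstrap = OperatorTransformation
            .bootstrapWith(spilled)
            .keyBy(new KeySelector<Tuple2<String, Long>, String>() {
                @Override
                public String getKey(Tuple2<String, Long> t) {
                    return t.f0;
                }
            })
            .transform(new CounterBootstrapper());

        Savepoint.create(new HashMapStateBackend(), 128) // assumed max parallelism
            .withOperator("my-keyed-operator", bootstrap) // must match uid() in the job
            .write("file:///tmp/handover-savepoint");

        env.execute("bootstrap-handover-savepoint");
    }
}
```

The read direction is symmetric (Savepoint.load(env, path, backend).readKeyedState(uid, readerFunction) yields a DataSet of state entries), so both halves of the extract/rehydrate cycle exist for savepoints; the gap is that a batch job can't start from the result.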
I have some working experiments that wire side outputs into the operators and populate them from an end-of-batch timer, but the approach is very convoluted, and I don't have a clear idea of how to rehydrate the operator state on startup. (We use this today to hand over from batch to streaming: we read the spilled state and generate a savepoint for the streaming job to start from, but that doesn't help with batch-to-batch handovers.)
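Condensed to a single counter, the spilling half looks roughly like this (tag name and state layout simplified):

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
import org.apache.flink.util.Collector;
import org.apache.flink.util.OutputTag;

public class SpillingCounter extends KeyedProcessFunction<String, Long, Long> {
    // Side output that carries the spilled state; a downstream sink writes stateN.
    public static final OutputTag<String> SPILL = new OutputTag<String>("state-spill") {};

    private transient ValueState<Long> counter;

    @Override
    public void open(Configuration parameters) {
        counter = getRuntimeContext().getState(
            new ValueStateDescriptor<>("counter", Long.class));
    }

    @Override
    public void processElement(Long value, Context ctx, Collector<Long> out) throws Exception {
        long next = (counter.value() == null ? 0L : counter.value()) + value;
        counter.update(next);
        // In BATCH runtime mode the final MAX_WATERMARK fires event-time
        // timers once a key's bounded input is exhausted, so this acts as an
        // "end of batch" hook; duplicate registrations coalesce per timestamp.
        ctx.timerService().registerEventTimeTimer(Long.MAX_VALUE - 1);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<Long> out) throws Exception {
        // Spill this key's state to the side output instead of emitting a result.
        ctx.output(SPILL, ctx.getCurrentKey() + "," + counter.value());
    }
}
```

Multiply this by every stateful operator in the pipeline and it gets convoluted quickly, which is the part I'd like to replace with something more principled.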
Are there any strings to start pulling on, or is this simply not a good idea?