I have a Spark job that runs daily to load data from S3.
The data consist of thousands of gzip files. However, in some cases one or two of the files in S3 are corrupted, which causes the whole spark_reader.load() task to fail with:
An error occurred while calling o112.load. incorrect header check
Is there a way to just log these corrupted files without breaking the load?
Current code:
from pyspark.sql import SparkSession, DataFrame

def read_data(spark: SparkSession) -> DataFrame:
    spark_reader = spark.read.format("json")
    return spark_reader.load("s3://my_bucket/some_folder/")
asked Nov 26, 2025 at 7:17 by Nakeuh
Comment: We need you to show some code to be able to help you. – Basile Starynkevitch, Nov 26, 2025 at 7:23
1 Answer
You can try the ignoreCorruptFiles option (see the Spark documentation on generic file source options).
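For reference, a minimal sketch of how that could look with the reader from the question (the bucket path is the placeholder from the question, and exactly how skipped files are reported depends on your Spark version and log configuration):

from pyspark.sql import SparkSession, DataFrame

def read_data(spark: SparkSession) -> DataFrame:
    # ignoreCorruptFiles makes Spark skip files it cannot read
    # (e.g. the truncated gzips behind "incorrect header check")
    # instead of failing the whole job; skipped files are typically
    # reported as warnings in the executor logs.
    spark_reader = (
        spark.read
        .format("json")
        .option("ignoreCorruptFiles", "true")
    )
    return spark_reader.load("s3://my_bucket/some_folder/")

The same behaviour can also be enabled session-wide with spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true") if you prefer not to set it on each reader.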
answered Nov 26, 2025 at 14:32 by boyangeor