
I have a spark job that runs daily to load data from S3.

The data consists of thousands of gzip files. However, in some cases there are one or two corrupted files in S3, which causes the whole spark_reader.load() task to fail with:

An error occurred while calling o112.load. incorrect header check

Is there a way to just log these corrupted files and not break the loading?

Current code:

def read_data(spark: SparkSession) -> DataFrame:
    spark_reader = spark.read.format("json")
    return spark_reader.load("s3://my_bucket/some_folder/")
asked Nov 26, 2025 at 7:17

1 Answer


You can try the ignoreCorruptFiles option; see the Spark documentation on generic file source options.
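
A minimal sketch of how this could be applied to the reader from the question (assuming a recent Spark version where ignoreCorruptFiles is available as a generic file source option; there is also a session-wide spark.sql.files.ignoreCorruptFiles setting):

from pyspark.sql import DataFrame, SparkSession

def read_data(spark: SparkSession) -> DataFrame:
    # Skip files that cannot be read (e.g. truncated or corrupted gzip)
    # instead of failing the whole load.
    spark_reader = (
        spark.read.format("json")
        .option("ignoreCorruptFiles", "true")
    )
    return spark_reader.load("s3://my_bucket/some_folder/")

# Alternatively, enable it for the whole session:
# spark.conf.set("spark.sql.files.ignoreCorruptFiles", "true")

Note that this skips the bad files rather than surfacing them in the DataFrame; Spark should log a warning for each skipped file, so the corrupted file names can usually be found in the executor logs.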

answered Nov 26, 2025 at 14:32