See these similar GitHub issues:
tensorflow/ecosystem#61 (comment)
tensorflow/ecosystem#61
tensorflow/ecosystem#106
This is how I'm writing a PySpark DataFrame as TFRecords to an S3 bucket:
```python
s3_path = "s3://Shuks/dataframe_tf_records"
df.write.mode("overwrite").format("tfrecord").option("recordType", "Example").save(s3_path)
```
This creates a new key/"directory" on S3 at `s3://Shuks/dataframe_tf_records/`, and all of the TFRecord part files are written under that directory.
How do I specify the compression type during conversion?
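For context, the one candidate I've come across is the `codec` write option mentioned in the spark-tensorflow-connector README, but I haven't verified it against S3. A sketch, assuming that option is honored by the `tfrecord` format (the bucket path and codec class below are my assumptions, not confirmed behavior):

```python
# Untested sketch: assumes the spark-tensorflow-connector "codec" write
# option applies here; GzipCodec is one Hadoop compression codec class.
s3_path = "s3://Shuks/dataframe_tf_records"
(df.write
   .mode("overwrite")
   .format("tfrecord")
   .option("recordType", "Example")
   .option("codec", "org.apache.hadoop.io.compress.GzipCodec")
   .save(s3_path))
```

Is something like this the intended way, or is there a different option for setting the compression type?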