Is your feature request related to a problem or challenge? Please describe what you are trying to do.
We have been using at least two parquet writers that both build on the low-level APIs provided by the parquet crate (e.g. SerializedFileWriter). One of the writers (ArrowWriter) is provided as part of the parquet crate, whereas the other, a parallel writer (datafusion's ParquetSink), is not. In both cases, however, we later read the resulting files using the parquet crate's readers.
The challenge is that we keep encountering unexpected differences in the parquet produced by these two writers. The most recent example: the arrow schema is missing from the file metadata when writing with datafusion's parallel writer, whereas ArrowWriter embeds it on write.
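For illustration, here is a minimal sketch of the gap (assuming recent arrow-rs crate APIs; the file name is arbitrary). ArrowWriter stores the base64-encoded Arrow IPC schema under the ARROW:SCHEMA key in the parquet key-value metadata, which is how the arrow readers later recover it; a writer built directly on SerializedFileWriter writes no such entry unless it adds one itself:

```rust
use std::{fs::File, sync::Arc};

use arrow_array::{Int32Array, RecordBatch};
use arrow_schema::{DataType, Field, Schema};
use parquet::arrow::{ArrowWriter, ARROW_SCHEMA_META_KEY};
use parquet::file::reader::{FileReader, SerializedFileReader};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let schema = Arc::new(Schema::new(vec![Field::new("a", DataType::Int32, false)]));
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![Arc::new(Int32Array::from(vec![1, 2, 3]))],
    )?;

    // ArrowWriter embeds the serialized arrow schema in the file's
    // key-value metadata under the "ARROW:SCHEMA" key.
    let file = File::create("via_arrow_writer.parquet")?;
    let mut writer = ArrowWriter::try_new(file, schema, None)?;
    writer.write(&batch)?;
    writer.close()?;

    // Reading the footer back shows the entry. A writer built directly on
    // SerializedFileWriter produces no such entry unless it adds one itself,
    // which is the surprise described above.
    let reader = SerializedFileReader::new(File::open("via_arrow_writer.parquet")?)?;
    let kv = reader.metadata().file_metadata().key_value_metadata();
    assert!(kv
        .iter()
        .flat_map(|v| v.iter())
        .any(|e| e.key == ARROW_SCHEMA_META_KEY));
    Ok(())
}
```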
Describe the solution you'd like
Can we update the lower-level APIs (in the parquet crate) to make it easier for users to create their own parquet writers, without encountering surprising divergences from the behavior of parquet's ArrowWriter? Provide better documentation? Provide guidance for testing when creating your own parquet writer? One possible shape for the API option is sketched below.
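Purely as a sketch of the first option: the parquet crate could expose the schema-embedding step that ArrowWriter performs internally, so custom writers can opt in explicitly. `arrow_schema_key_value` below is a hypothetical helper, not an existing function; the encoding it would perform is whatever ArrowWriter does today:

```rust
use arrow_schema::Schema;
use parquet::file::properties::WriterProperties;
use parquet::format::KeyValue;

/// Hypothetical helper the parquet crate could expose: encode the schema the
/// same way ArrowWriter does and return the "ARROW:SCHEMA" key-value entry
/// that custom writers should attach to their WriterProperties.
fn arrow_schema_key_value(_schema: &Schema) -> KeyValue {
    // Sketch only: the real implementation would reuse ArrowWriter's
    // internal IPC + base64 encoding rather than reimplementing it here.
    unimplemented!()
}

/// What a custom writer (e.g. ParquetSink) would do with such a helper.
fn props_for_custom_writer(schema: &Schema) -> WriterProperties {
    WriterProperties::builder()
        .set_key_value_metadata(Some(vec![arrow_schema_key_value(schema)]))
        .build()
}
```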
Describe alternatives you've considered
Alternatively, we could consider this problem the responsibility of users who create their own writers. We already plan to file a datafusion ticket proposing integration tests that ensure byte equivalence of the output parquet (vs. parquet written by ArrowWriter); a sketch of such a check follows.
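A minimal sketch of what such a test could assert, using only the parquet crate's reader API (the two paths are placeholders for files produced by ArrowWriter and by the custom writer elsewhere in the test; full byte equivalence would diff the files directly, but comparing footers localizes failures like the missing arrow schema):

```rust
use std::fs::File;

use parquet::file::reader::{FileReader, SerializedFileReader};

/// Sketch of an equivalence check between a file written by ArrowWriter and
/// one written by a custom writer (e.g. datafusion's ParquetSink). The two
/// paths are placeholders for files produced elsewhere in the test.
fn assert_equivalent_metadata(arrow_writer_path: &str, custom_writer_path: &str) {
    let left = SerializedFileReader::new(File::open(arrow_writer_path).unwrap()).unwrap();
    let right = SerializedFileReader::new(File::open(custom_writer_path).unwrap()).unwrap();

    let l = left.metadata().file_metadata();
    let r = right.metadata().file_metadata();

    // Same parquet schema...
    assert_eq!(l.schema_descr().root_schema(), r.schema_descr().root_schema());
    // ...and the same key-value metadata, which is where the embedded arrow
    // schema lives and where the difference described above was found.
    assert_eq!(l.key_value_metadata(), r.key_value_metadata());
}
```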
Additional context
This is not the first time we have discovered differences in the output parquet between ArrowWriter and datafusion's ParquetSink. However, we are unclear on how best to divide the responsibility between more testing (on the datafusion side) and better API design (on the parquet side).