[C++] Metadata related memory leak when reading parquet dataset #45287
Comments
Data can be generated with the code in the notebook linked in the issue description below.
I haven't tried to track down the precise source of memory consumption (yet?) but here are some quick comments already:
A quick back-of-the-envelope calculation says that this is roughly 2 kB per column per file.
Interesting data point. That would be 4 kB per column per file, so quite a bit of additional overhead just for 128 additional characters...
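As a sanity check on those estimates, the reported totals divided by plausible dataset dimensions land in the same range. The column and file counts below are assumptions for illustration; only the 6 GB and 11 GB totals come from the issue:

```python
# Back-of-the-envelope check of the per-column-per-file estimates.
# C and F are hypothetical; only the observed totals are from the issue.
C = 10_000          # assumed number of columns
F = 300             # assumed number of files

short_names = 6e9   # ~6 GB reported with short column names
long_names = 11e9   # ~11 GB reported with a 128-char name prefix

print(short_names / (C * F))  # ~2000 bytes, i.e. ~2 kB per column per file
print(long_names / (C * F))   # ~3700 bytes, i.e. ~4 kB per column per file
```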
I would stress that "a single row and 10k columns" is never going to be a good use case for Parquet, which is designed from the ground up as a columnar format. If you're storing less than e.g. 1k rows (regardless of the number of columns), the format will certainly impose a lot of overhead. Of course, we can still try to find out if there's some low-hanging fruit that would allow reducing the memory usage of metadata.
I was expecting the metadata memory usage to be more like O(C), where C = number of columns, instead of O(C * F), where F = number of files. Once a Parquet file is loaded into a PyArrow Table, we shouldn't need to keep its metadata around (all files have the same schema), but perhaps I am misunderstanding how reading a dataset works.
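For what it's worth, a minimal sketch of reading file by file so that each file's metadata can be dropped as soon as its table is materialized (the file paths are hypothetical, and whether this actually lowers peak usage depends on what the dataset reader keeps alive):

```python
# A minimal sketch, assuming all files share the same schema.
import pyarrow as pa
import pyarrow.parquet as pq

paths = [f"dataset/part-{i}.parquet" for i in range(300)]  # hypothetical

tables = []
for path in paths:
    # read_table opens the file, reads it, and returns only the Table;
    # the per-file Parquet metadata is not referenced afterwards.
    tables.append(pq.read_table(path))

combined = pa.concat_tables(tables)
```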
Yeah, it certainly feels like there are multiple copies of the column-name strings, even though all files/partitions have the same schema.
Yeah, this is an extreme case just to show the repro. In practice there are a couple thousand rows per file.
It would be great to reduce metadata memory usage when the files being read all have the same schema, since this is quite a common case, I think.
Hmm, this needs clarifying a bit then :) What do the memory usage numbers you posted represent? Is it peak memory usage? Is it memory usage after loading the dataset as an Arrow table? Is the dataset object still alive at that point?
Definitely.
How many row groups per file (or rows per row group)? It turns out much of the Parquet metadata consumption is in ColumnChunk entries. A Thrift-deserialized ColumnChunk is 640 bytes long, and there are O(C * R * F) ColumnChunks in your dataset, with C = number of columns, R = number of row groups per file and F = number of files.
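To make that concrete, a small helper for the back-of-the-envelope math (the 640-byte figure is from the comment above; the example dimensions are hypothetical):

```python
# Footprint of Thrift-deserialized ColumnChunk entries alone.
def column_chunk_bytes(columns: int, row_groups_per_file: int, files: int) -> int:
    # 640 bytes per deserialized ColumnChunk, O(C * R * F) of them.
    return 640 * columns * row_groups_per_file * files

# Hypothetical dimensions for illustration only.
print(column_chunk_bytes(columns=10_000, row_groups_per_file=1, files=300))
# -> 1_920_000_000 bytes, i.e. ~1.9 GB just for ColumnChunk structs
```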
Describe the bug, including details regarding any error messages, version, and platform.
Hi,
I have observed what looks like a memory leak when loading a Parquet dataset, which I think is related to the file metadata.
I ran with PyArrow 19.0.0. Here is the code to repro, along with a description of what the dataset roughly looks like:
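A minimal sketch of the repro, with hypothetical paths and dataset shape (the actual generation script is in the notebook linked below):

```python
# Sketch: write many small, very wide Parquet files sharing one schema,
# then load them back as a single Arrow table. Column and file counts
# are assumptions for illustration, not the original script's values.
import os
import pyarrow as pa
import pyarrow.dataset as ds
import pyarrow.parquet as pq

n_columns, n_files = 10_000, 300  # hypothetical dimensions
os.makedirs("dataset", exist_ok=True)

for i in range(n_files):
    table = pa.table({f"col_{j}": [i] for j in range(n_columns)})
    pq.write_table(table, f"dataset/part-{i}.parquet")

# Load everything back; peak RSS can be observed e.g. via `time -v`.
loaded = ds.dataset("dataset", format="parquet").to_table()
```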
When running the code above with "time -v", it shows that memory usage is about 6 GB, which is significantly larger than the data loaded, so I think there is some metadata-related memory leak. I also noticed that memory usage increases if I use longer column names: e.g., if I prepend a 128-character prefix to the column names, the memory usage is about 11 GB.
This issue probably has the same root cause as #37630.
There is a script that can be used to generate the dataset for the repro, but it has permissioned access (due to company policy); I'm happy to give permission to whoever is looking into this:
https://github.com/twosigma/bamboo-streaming/blob/master/notebooks/generate_parquet_test_data.ipynb
Component(s)
Parquet, C++