Spark: Relativize in-memory paths for data file and rewritable delete file locations #11525
Conversation
return deleteLoader.loadPositionDeletes(
    rewritableDeletes.deletesFor(path.toString(), specs), path);
I'll go back to explicitly returning a null index when there are no deletes for the given path; the loader's internal implementation handles that case regardless, but it isn't obvious without reading into it.
private DeleteFile relativizeDeleteFile(
    DeleteFile deleteFile, Map<Integer, PartitionSpec> specs) {
  return FileMetadata.deleteFileBuilder(specs.get(deleteFile.specId()))
      .copy(deleteFile)
I think we'd take a slight perf hit from having to do this copy (particularly copying the partition data in memory). I'll see if there's any relevant benchmarking we can leverage to measure whether it's really significant.
Maybe there's a way to extend the planning APIs so that all paths are returned already relativized, but that seems like a bigger change, and it's not obvious we should do that until there's evidence that this copy is expensive.
cc @singhpk234
Just as a gut comment: if we just compressed them, shouldn't we get almost all of the benefit we're looking for? They're just a bunch of strings, so the binary representation of all of them should be pretty compressible.
It's true the broadcast would be compressed by default via
Won't it all be in memory anyway? Wouldn't Java do string interning on the shared prefix?
This is a follow-up to https://github.com/apache/iceberg/pull/11273/files#
Instead of broadcasting a map with absolute paths for data files and delete files to executors, we can shrink the memory footprint by relativizing the in-memory mapping and then, just prior to lookup on executors, reconstructing the absolute paths for the relevant delete files.
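A minimal sketch of the idea on bare path strings (the class and method names here are hypothetical for illustration; the actual PR operates on delete file metadata, not raw strings): strip the shared table-location prefix on the driver before broadcasting, and prepend it again on the executor just before lookup.

```java
// Hypothetical helper sketching prefix-based relativization; not Iceberg API.
public final class PathRelativizer {
  private final String tableLocation; // e.g. "s3://bucket/warehouse/db/tbl/"

  public PathRelativizer(String tableLocation) {
    this.tableLocation = tableLocation.endsWith("/") ? tableLocation : tableLocation + "/";
  }

  // Driver side, before broadcasting: drop the shared table-location prefix.
  public String relativize(String absolutePath) {
    return absolutePath.startsWith(tableLocation)
        ? absolutePath.substring(tableLocation.length())
        : absolutePath; // a path outside the table location stays absolute
  }

  // Executor side, just before lookup: rebuild the absolute path.
  public String absolutize(String maybeRelativePath) {
    // Paths that still carry a scheme (e.g. "s3://...") were never relativized.
    return maybeRelativePath.contains("://")
        ? maybeRelativePath
        : tableLocation + maybeRelativePath;
  }
}
```

The round trip is lossless, and the broadcast map only pays for the suffix of each path rather than the full absolute string.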
There are a few ways to go about relativization; in the current implementation I did the simplest thing, which is to relativize against the table location. More sophisticated schemes could save even more of the memory consumed by paths: relativizing against the data file location (requires surfacing more details from LocationProvider), or finding the longest common prefix across all data/delete files in the rewritable deletes (requires a double pass over tasks, one to identify the longest common prefix via the smallest/largest lexicographic strings, and another to actually reconstruct the delete files). Patricia tries are another possibility, though their serialized representation seems to take about the same amount of memory; I'm not sure why that's the case.
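The longest-common-prefix variant mentioned above can be sketched as follows. Because string ordering is lexicographic, the common prefix of a whole collection equals the common prefix of just its smallest and largest elements, so the first pass only needs a running min/max; the class name here is hypothetical.

```java
import java.util.Collections;
import java.util.List;

public final class CommonPrefix {
  // The shared prefix of all paths equals the shared prefix of the
  // lexicographic min and max, so we compare only two strings char by char.
  public static String of(List<String> paths) {
    if (paths.isEmpty()) {
      return "";
    }
    String min = Collections.min(paths);
    String max = Collections.max(paths);
    int i = 0;
    while (i < min.length() && min.charAt(i) == max.charAt(i)) {
      i++;
    }
    return min.substring(0, i);
  }
}
```

A second pass would then strip this prefix from every path, which is what makes the approach a double pass over the tasks.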
I'm also looking into whether using Spark's off-heap BytesToBytesMap would save even more memory, but in the meantime it made sense to at least land this improvement. This is all internal, so we can always remove it down the line if something better comes along.