Apply transforms in PreProcessor #2467
base: release/v2.0.0
Conversation
Yes, I also thought about the problem, and what you did now, adding augmentations to the dataset, is the only solution to this.
@alexriedel1 As you've seen, I re-introduced transforms on the dataset/datamodule side, but now named as augmentations.

Motivation: In addition to the model-specific transforms applied by the PreProcessor, the datamodule now applies a set of user-defined augmentation transforms to the input images. This way, we keep the model-specific transforms on the model side, which has the advantages mentioned in my earlier comment. At the same time, the augmentation transforms on the datamodule side make it easier for the user to define their custom augmentation pipeline. The main advantage over earlier designs is that the user now only has to define the augmentations, and does not need to include the model-specific transforms such as resizing and normalization (because these are handled separately by the PreProcessor).

Concerning resizing:
Please let me know what you think; any suggestions are more than welcome :)
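To make the intended split concrete, here is a minimal sketch of how a user might wire this up. The datamodule/model names and the `train_augmentations` argument are assumptions for illustration and may not match the final API exactly.

```python
# Sketch of the proposed split: augmentations live on the datamodule,
# model-specific transforms (resize, normalize) live in the model's PreProcessor.
# The `train_augmentations` argument name is an assumption for illustration.
from torchvision.transforms import v2

from anomalib.data import MVTec
from anomalib.models import Padim

# The user only defines augmentations; no resizing or normalization needed here.
augmentations = v2.Compose([
    v2.RandomHorizontalFlip(p=0.5),
    v2.ColorJitter(brightness=0.2),
])

datamodule = MVTec(category="bottle", train_augmentations=augmentations)

# Model-specific transforms are handled by the model's own PreProcessor.
model = Padim()
```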
This reverts commit 151a179.
Yes, I (and most users too, I guess) am happy with this! You should make clear in the docs that preprocessing transforms are stored in the model after export while augmentations are not, so people know exactly which one to use for their use case.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## release/v2.0.0 #2467 +/- ##
==================================================
+ Coverage 78.50% 78.62% +0.11%
==================================================
Files 303 306 +3
Lines 12955 12947 -8
==================================================
+ Hits 10170 10179 +9
+ Misses 2785 2768 -17
Flags with carried forward coverage won't be shown. View full report in Codecov by Sentry.
Thanks for the massive efforts here. Looks good to me.
from typing import Any


def get_nested_attr(obj: Any, attr_path: str, default: Any | None = None) -> Any:  # noqa: ANN401
Can you elaborate why this is needed? Maybe add more description/example to the docstring?
I updated the description and added some examples.
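For readers following along, a hedged sketch of what such a helper and its docstring examples might look like; the actual implementation and examples added in the PR may differ.

```python
# Illustrative sketch of a get_nested_attr helper; not necessarily identical
# to the implementation in this PR.
from typing import Any


def get_nested_attr(obj: Any, attr_path: str, default: Any | None = None) -> Any:  # noqa: ANN401
    """Retrieve a nested attribute through a dotted path, e.g. "model.pre_processor.transform".

    Examples:
        >>> class Inner:
        ...     value = 42
        >>> class Outer:
        ...     inner = Inner()
        >>> get_nested_attr(Outer(), "inner.value")
        42
        >>> get_nested_attr(Outer(), "inner.missing", default=-1)
        -1
    """
    for attr in attr_path.split("."):
        obj = getattr(obj, attr, default)
        if obj is default:
            return default
    return obj
```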
from torchvision.transforms.v2 import Compose, Transform


def get_transforms_of_type(input_transform: Transform | None, transform_type: type[Transform]) -> list[type[Transform]]:
Suggested change:

def get_transforms_of_type(input_transform: Transform | None, transform_type: type[Transform]) -> list[type[Transform]]:
def get_transforms_by_type(transforms: Transform | None, transform_type: type[Transform]) -> list[type[Transform]]:

or filter_transforms_by_type?
For filter_transforms_by_type, I would expect the input to be a sequence of transform objects. This is not entirely accurate, since the function extracts the transforms from a single input transform, which could be an actual transform or a Compose. Maybe extract_transforms_by_type?
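As a reference point for the naming discussion, here is a minimal sketch of what such an extraction helper could do; the signature is simplified, and the return type and final name in the PR may differ.

```python
# Illustrative sketch only: collects all transforms of a given type from a
# single input transform, which may itself be a Compose.
from torchvision.transforms.v2 import Compose, Transform


def extract_transforms_by_type(
    input_transform: Transform | None,
    transform_type: type[Transform],
) -> list[Transform]:
    """Return every transform of ``transform_type`` contained in ``input_transform``."""
    transforms: list[Transform] = []
    if isinstance(input_transform, Compose):
        for transform in input_transform.transforms:
            if isinstance(transform, transform_type):
                transforms.append(transform)
    elif isinstance(input_transform, transform_type):
        transforms.append(input_transform)
    return transforms
```

For example, calling it with a Compose containing a Resize and a Normalize and passing Resize as the type would return just the Resize instance.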
📝 Description
Transforms are now applied in the PreProcessor. This is more in line with the other auxiliary components such as post-processing and evaluation, where the full functionality of the aux operation is contained in a single class.
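To illustrate the design, a minimal sketch of attaching the model-specific transforms to a model via a pre-processor. The PreProcessor import path and the pre_processor argument name are assumptions for illustration and may not exactly match the final API.

```python
# Minimal sketch, assuming a PreProcessor class that owns the model-specific
# transforms; class and argument names are illustrative, not the confirmed API.
from torchvision.transforms import v2

from anomalib.models import Padim
from anomalib.pre_processing import PreProcessor

# The pre-processor bundles resizing and normalization with the model, so the
# same transforms are applied during training, inference, and after export.
pre_processor = PreProcessor(
    transform=v2.Compose([
        v2.Resize((256, 256)),
        v2.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ]),
)
model = Padim(pre_processor=pre_processor)
```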
✨ Changes

Select what type of change your PR is:
✅ Checklist
Before you submit your pull request, please make sure you have completed the following steps:
For more information about code review checklists, see the Code Review Checklist.