Apply transforms in PreProcessor #2467

Open
wants to merge 23 commits into base: release/v2.0.0
Changes from 21 commits
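For context: this PR moves input transforms out of the individual datasets and datamodules and into the model's pre-processor, which is why the diffs below delete transform= arguments and the notebook cells that built a Resize transform. A minimal sketch of the resulting flow, assuming the v2.0.0 PreProcessor API (the anomalib.pre_processing.PreProcessor import path and its transform argument are assumptions based on the target branch, not shown in this diff):

# Hedged sketch of the post-PR flow: the transform travels with the model.
from torchvision.transforms.v2 import Resize

from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import Padim
from anomalib.pre_processing import PreProcessor  # assumed v2 import path

# Datasets/datamodules no longer receive a transform; they yield raw images.
datamodule = MVTec(root="./datasets/MVTec", category="bottle")

# The transform is attached to the model via its pre-processor, so the same
# resize is applied during fit/validate/test/predict and travels with the
# model at export time.
pre_processor = PreProcessor(transform=Resize((256, 256), antialias=True))
model = Padim(pre_processor=pre_processor)

Engine().fit(model=model, datamodule=datamodule)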
3 changes: 0 additions & 3 deletions configs/model/cfa.yaml
@@ -8,9 +8,6 @@ model:
     num_hard_negative_features: 3
     radius: 1.0e-05
 
-metrics:
-  pixel: AUROC
-
 trainer:
   max_epochs: 30
   callbacks:
4 changes: 0 additions & 4 deletions configs/model/cflow.yaml
@@ -15,10 +15,6 @@ model:
     permute_soft: false
     lr: 0.0001
 
-metrics:
-  pixel:
-    - AUROC
-
 trainer:
   max_epochs: 50
   callbacks:
4 changes: 0 additions & 4 deletions configs/model/csflow.yaml
@@ -6,10 +6,6 @@ model:
     clamp: 3
     num_channels: 3
 
-metrics:
-  pixel:
-    - AUROC
-
 trainer:
   max_epochs: 240
   callbacks:
4 changes: 0 additions & 4 deletions configs/model/draem.yaml
@@ -6,10 +6,6 @@ model:
     sspcab_lambda: 0.1
     anomaly_source_path: null
 
-metrics:
-  pixel:
-    - AUROC
-
 trainer:
   max_epochs: 700
   callbacks:
4 changes: 0 additions & 4 deletions configs/model/dsr.yaml
@@ -4,10 +4,6 @@ model:
     latent_anomaly_strength: 0.2
     upsampling_train_ratio: 0.7
 
-metrics:
-  pixel:
-    - AUROC
-
 # PL Trainer Args. Don't add extra parameter here.
 trainer:
   max_epochs: 700
4 changes: 0 additions & 4 deletions configs/model/efficient_ad.yaml
@@ -8,10 +8,6 @@ model:
     padding: false
     pad_maps: true
 
-metrics:
-  pixel:
-    - AUROC
-
 trainer:
   max_epochs: 1000
   max_steps: 70000
4 changes: 0 additions & 4 deletions configs/model/fastflow.yaml
@@ -7,10 +7,6 @@ model:
     conv3x3_only: false
     hidden_ratio: 1.0
 
-metrics:
-  pixel:
-    - AUROC
-
 trainer:
   max_epochs: 500
   callbacks:
3 changes: 0 additions & 3 deletions configs/model/padim.yaml
@@ -8,6 +8,3 @@ model:
     backbone: resnet18
     pre_trained: true
     n_features: null
-
-metrics:
-  pixel: AUROC
4 changes: 0 additions & 4 deletions configs/model/reverse_distillation.yaml
@@ -9,10 +9,6 @@ model:
     anomaly_map_mode: ADD
     pre_trained: true
 
-metrics:
-  pixel:
-    - AUROC
-
 trainer:
   callbacks:
     - class_path: lightning.pytorch.callbacks.EarlyStopping
4 changes: 0 additions & 4 deletions configs/model/stfpm.yaml
@@ -7,10 +7,6 @@ model:
       - layer2
       - layer3
 
-metrics:
-  pixel:
-    - AUROC
-
 trainer:
   max_epochs: 100
   callbacks:
4 changes: 0 additions & 4 deletions configs/model/uflow.yaml
@@ -7,10 +7,6 @@ model:
     affine_subnet_channels_ratio: 1.0
     backbone: mcait # official: mcait, other extractors tested: resnet18, wide_resnet50_2. Could use others...
 
-metrics:
-  pixel:
-    - AUROC
-
 # PL Trainer Args. Don't add extra parameter here.
 trainer:
   max_epochs: 200
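Every config above loses its per-model metrics: block alongside the transform change. Presumably the pixel-level AUROC those blocks requested is now configured in code rather than YAML; a hedged sketch of one way that might look, assuming v2's Evaluator-based metrics API (Evaluator, AUROC, the fields argument, and the anomalib.metrics import path are assumptions, not confirmed by this diff):

# Hedged sketch: recreating the deleted metrics: pixel: AUROC in code.
from anomalib.metrics import AUROC, Evaluator  # assumed v2 import path
from anomalib.models import Padim

# Pixel-level AUROC computed from predicted anomaly maps and ground-truth
# masks -- the metric the removed YAML blocks used to request.
pixel_auroc = AUROC(fields=["anomaly_map", "gt_mask"], prefix="pixel_")

model = Padim(evaluator=Evaluator(test_metrics=[pixel_auroc]))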
31 changes: 4 additions & 27 deletions notebooks/100_datamodules/101_btech.ipynb
@@ -39,7 +39,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 2,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -61,18 +61,16 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 3,
    "metadata": {},
    "outputs": [],
    "source": [
     "# flake8: noqa\n",
     "import numpy as np\n",
     "from PIL import Image\n",
-    "from torchvision.transforms.v2 import Resize\n",
     "from torchvision.transforms.v2.functional import to_pil_image\n",
     "\n",
-    "from anomalib.data import BTech, BTechDataset\n",
-    "from anomalib import TaskType"
+    "from anomalib.data import BTech, BTechDataset"
    ]
   },
   {
@@ -99,7 +97,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 4,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -203,25 +201,6 @@
     "BTechDataset??"
    ]
   },
-  {
-   "attachments": {},
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We can add some transforms that will be applied to the images using torchvision. Let's add a transform that resizes the \n",
-    "input image to 256x256 pixels."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "image_size = (256, 256)\n",
-    "transform = Resize(image_size, antialias=True)"
-   ]
-  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -240,7 +219,6 @@
     "btech_dataset_train = BTechDataset(\n",
     "    root=dataset_root,\n",
     "    category=\"01\",\n",
-    "    transform=transform,\n",
     "    split=\"train\",\n",
     ")\n",
     "print(len(btech_dataset_train))\n",
@@ -268,7 +246,6 @@
     "btech_dataset_test = BTechDataset(\n",
     "    root=dataset_root,\n",
     "    category=\"01\",\n",
-    "    transform=transform,\n",
     "    split=\"test\",\n",
     ")\n",
     "print(len(btech_dataset_test))\n",
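The notebook changes above drop both the Resize cell and the transform=transform argument, so the datasets are now constructed without any transform. A hedged sketch of the resulting usage (the dataset_root value is an assumption standing in for the path the notebook defines earlier; resizing would presumably be configured on the model's pre-processor, as in the sketch near the top of this PR):

# Hedged sketch: dataset construction after this PR, with no transform.
from pathlib import Path

from anomalib.data import BTechDataset

dataset_root = Path("./datasets/BTech")  # assumed location of the dataset

btech_dataset_train = BTechDataset(
    root=dataset_root,
    category="01",
    split="train",
)
# With no dataset-level transform, samples keep their native resolution;
# any resizing now happens inside the model's pre-processor.
sample = btech_dataset_train[0]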
28 changes: 3 additions & 25 deletions notebooks/100_datamodules/102_mvtec.ipynb
@@ -23,14 +23,13 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 2,
    "metadata": {},
    "outputs": [],
    "source": [
     "# flake8: noqa\n",
     "import numpy as np\n",
     "from PIL import Image\n",
-    "from torchvision.transforms.v2 import Resize\n",
     "from torchvision.transforms.v2.functional import to_pil_image\n",
     "\n",
     "from anomalib.data import MVTec, MVTecDataset"
@@ -48,7 +47,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 3,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -76,7 +75,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 4,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -180,25 +179,6 @@
     "MVTecDataset??"
    ]
   },
-  {
-   "attachments": {},
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We can add some transforms that will be applied to the images using torchvision. Let's add a transform that resizes the \n",
-    "input image to 256x256 pixels."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "image_size = (256, 256)\n",
-    "transform = Resize(image_size, antialias=True)"
-   ]
-  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -217,7 +197,6 @@
     "mvtec_dataset_train = MVTecDataset(\n",
     "    root=dataset_root,\n",
     "    category=\"bottle\",\n",
-    "    transform=transform,\n",
     "    split=\"train\",\n",
     ")\n",
     "print(len(mvtec_dataset_train))\n",
@@ -245,7 +224,6 @@
     "mvtec_dataset_test = MVTecDataset(\n",
     "    root=dataset_root,\n",
     "    category=\"bottle\",\n",
-    "    transform=transform,\n",
     "    split=\"test\",\n",
     ")\n",
     "print(len(mvtec_dataset_test))\n",
28 changes: 2 additions & 26 deletions notebooks/100_datamodules/103_folder.ipynb
@@ -33,7 +33,7 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 2,
    "metadata": {},
    "outputs": [],
    "source": [
@@ -63,14 +63,13 @@
   },
   {
    "cell_type": "code",
-   "execution_count": null,
+   "execution_count": 3,
    "metadata": {},
    "outputs": [],
    "source": [
     "# flake8: noqa\n",
     "import numpy as np\n",
     "from PIL import Image\n",
-    "from torchvision.transforms.v2 import Resize\n",
     "from torchvision.transforms.v2.functional import to_pil_image\n",
     "\n",
     "from anomalib.data import Folder, FolderDataset"
@@ -173,25 +172,6 @@
     "FolderDataset??"
    ]
   },
-  {
-   "attachments": {},
-   "cell_type": "markdown",
-   "metadata": {},
-   "source": [
-    "We can add some transforms that will be applied to the images using torchvision. Let's add a transform that resizes the \n",
-    "input image to 256x256 pixels."
-   ]
-  },
-  {
-   "cell_type": "code",
-   "execution_count": null,
-   "metadata": {},
-   "outputs": [],
-   "source": [
-    "image_size = (256, 256)\n",
-    "transform = Resize(image_size, antialias=True)"
-   ]
-  },
   {
    "attachments": {},
    "cell_type": "markdown",
@@ -211,7 +191,6 @@
     "    normal_dir=dataset_root / \"good\",\n",
     "    abnormal_dir=dataset_root / \"crack\",\n",
     "    split=\"train\",\n",
-    "    transform=transform,\n",
     ")\n",
     "print(len(folder_dataset_train))\n",
     "sample = folder_dataset_train[0]\n",
@@ -241,7 +220,6 @@
     "    normal_dir=dataset_root / \"good\",\n",
     "    abnormal_dir=dataset_root / \"crack\",\n",
     "    split=\"test\",\n",
-    "    transform=transform,\n",
     ")\n",
     "print(len(folder_dataset_test))\n",
     "sample = folder_dataset_test[0]\n",
@@ -270,7 +248,6 @@
     "    normal_dir=dataset_root / \"good\",\n",
     "    abnormal_dir=dataset_root / \"crack\",\n",
     "    split=\"train\",\n",
-    "    transform=transform,\n",
     "    mask_dir=dataset_root / \"mask\" / \"crack\",\n",
     ")\n",
     "print(len(folder_dataset_segmentation_train))\n",
@@ -290,7 +267,6 @@
     "    normal_dir=dataset_root / \"good\",\n",
     "    abnormal_dir=dataset_root / \"crack\",\n",
     "    split=\"test\",\n",
-    "    transform=transform,\n",
     "    mask_dir=dataset_root / \"mask\" / \"crack\",\n",
     ")\n",
     "print(len(folder_dataset_segmentation_test))\n",