
Adding capability to start a training from model checkpoint instead of doing it from scratch #297

Open
wants to merge 4 commits into main
Conversation

karolzak commented Feb 1, 2024

Small change introducing the option to provide a path (through the location config) to a model checkpoint whose weights are loaded before starting a new training run. I used this successfully for fine-tuning the LaMa model on my custom dataset.

CC: @senya-ashukha @cohimame

Abbsalehi commented Mar 1, 2024

@karolzak thanks for your good work. I want to fine-tune the model, but I could not find how to do it. Could you please let me know how to use your work? Thanks

karolzak (Author) commented Mar 4, 2024

> @karolzak thanks for your good work, I want to fine-tune the model. But I could not find how to do it? could you please let me know how to use your work? thanks

Thanks @Abbsalehi!
For training preparation, just follow the standard steps in the root README doc. To fine-tune rather than train from scratch, you need to either create a new config or modify one of the existing configs under configs/training/location (depending on which one you are using).

More specifically, you need to add a variable like the one below:
load_checkpoint_path: /home/user/lama/big-lama/models/best.ckpt

In my trials, I created a new config called article_dataset.yaml, placed it under configs/training/location, and its content looked like this:

data_root_dir: /home/azureuser/localfiles/image-inpainting/datasets/article-dataset/processed/
out_root_dir: /home/azureuser/localfiles/lama/experiments/
tb_dir: /home/azureuser/localfiles/lama/tb_logs/
load_checkpoint_path: /home/azureuser/localfiles/lama/experiments/azureuser_2024-02-01_12-17-01_train_big-lama_/models/epoch=7-step=2559.ckpt

After you create your new config, you can run something like this to kick off the training:

python3 bin/train.py -cn big-lama location=article_dataset.yaml data.batch_size=10

When this new variable is present in the config, the training script will try to instantiate the model from a previously trained checkpoint. In my trials I just used the big-lama pretrained model, which can be downloaded from the LaMa authors' Google Drive.
Let me know if something is unclear.
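Conceptually, the change only needs to load a checkpoint's weights into the freshly built model when the new config key is set. Below is a minimal sketch of that idea; the function name, config access, and `strict=False` choice are my assumptions for illustration, not the PR's actual code.

```python
import torch


def maybe_load_checkpoint(model, config):
    """Load pretrained weights into `model` if the config asks for it.

    `config.location` is assumed to be a dict-like object holding the
    (hypothetical) `load_checkpoint_path` key from this PR.
    """
    path = config.location.get("load_checkpoint_path", None)
    if path is not None:
        # PyTorch Lightning checkpoints keep the weights under "state_dict".
        state = torch.load(path, map_location="cpu")
        # strict=False tolerates heads/buffers that differ between runs.
        model.load_state_dict(state["state_dict"], strict=False)
    return model
```

When the key is absent the model keeps its random initialization, so the same training entry point covers both from-scratch training and fine-tuning.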

Abbsalehi commented Mar 7, 2024

@karolzak thanks a lot for your helpful response. The Readme file says to provide the directories below. How many images did you put in these folders? I do not have many images.

Readme:

You need to prepare the following image folders:

$ ls my_dataset
train
val_source # 2000 or more images
visual_test_source # 100 or more images
eval_source # 2000 or more images

karolzak (Author) commented Mar 8, 2024

> @karolzak thanks a lot for your helpful response. In the Readme file, it says to provide the below directories, how many images did you put in these folders as I do not have many images?
>
> Readme:
>
> You need to prepare the following image folders:
>
> $ ls my_dataset
> train
> val_source # 2000 or more images
> visual_test_source # 100 or more images
> eval_source # 2000 or more images

I followed the recommendation from the docs, but I'm not sure it is strictly required. I don't know whether those numbers come from hardcoded checks or are more of a "for best performance" suggestion. I would suggest trying with however many images you have and seeing what happens.
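For a small dataset, splitting one folder of images into the layout the README asks for can be scripted. This is a generic sketch I'm adding for illustration (not from the LaMa repo); the split fractions are arbitrary and far below the recommended counts, and only .jpg files are handled.

```python
import random
import shutil
from pathlib import Path


def split_dataset(src_dir, dst_dir, val_frac=0.1, vis_frac=0.05,
                  eval_frac=0.1, seed=0):
    """Copy images from src_dir into train/val_source/visual_test_source/
    eval_source subfolders of dst_dir, returning the per-split counts."""
    images = sorted(Path(src_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)  # deterministic shuffle
    n_val = max(1, int(len(images) * val_frac))
    n_vis = max(1, int(len(images) * vis_frac))
    n_eval = max(1, int(len(images) * eval_frac))
    cut1, cut2, cut3 = n_val, n_val + n_vis, n_val + n_vis + n_eval
    splits = {
        "val_source": images[:cut1],
        "visual_test_source": images[cut1:cut2],
        "eval_source": images[cut2:cut3],
        "train": images[cut3:],  # everything left goes to training
    }
    for name, files in splits.items():
        out = Path(dst_dir) / name
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, out / f.name)
    return {k: len(v) for k, v in splits.items()}
```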

Abbsalehi commented Mar 13, 2024

Thanks @karolzak, I was able to start training the model. However, I am wondering if it is possible to use multiple GPUs to accelerate the process.

Abbsalehi commented Mar 19, 2024

@karolzak could you please help me understand the table below, from one epoch of validation? Which metric is "std" calculated from? Why are some values NaN? And what are the percentage ranges in the first column?

              fid     lpips                ssim           ssim_fid100_f1
             mean      mean       std      mean       std           mean
0-10%    7.132144  0.025758  0.015533  0.975447  0.019605            NaN
10-20%  22.423028  0.081735  0.020867  0.920162  0.035067            NaN
20-30%  38.135151  0.138476  0.024617  0.863151  0.047236            NaN
30-40%  56.557434  0.196688  0.030477  0.810011  0.065147            NaN
40-50%  76.543753  0.260003  0.037845  0.748839  0.081490            NaN
total   14.605970  0.141385  0.084988  0.862623  0.094825        0.85776
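My reading of this table (an assumption, since no answer appears above): the percentage ranges are buckets of mask coverage, i.e. what fraction of the image area is masked, and "std" is the per-sample standard deviation of each metric within a bucket. FID is computed over a whole set of images rather than per sample, which would explain why it has a mean but no std; similarly, a combined score reported only once would show NaN in the per-bucket rows. A rough sketch of how such per-bucket statistics could be computed:

```python
from statistics import mean, stdev


def bucket_stats(samples):
    """samples: list of (mask_ratio, score) pairs, where mask_ratio is
    the fraction of the image covered by the inpainting mask (0..0.5).
    Returns {bucket: (mean, std)}; std is NaN for single-sample buckets."""
    buckets = {}
    for ratio, score in samples:
        idx = min(int(ratio * 10), 4)  # clamp into the 40-50% bucket
        key = f"{idx * 10}-{(idx + 1) * 10}%"
        buckets.setdefault(key, []).append(score)
    return {
        k: (mean(v), stdev(v) if len(v) > 1 else float("nan"))
        for k, v in buckets.items()
    }
```

Under this reading, the "total" row would simply aggregate all samples regardless of bucket, which is why its std can be larger than any single bucket's.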

bekhzod-olimov commented
Hey guys, @karolzak, @Abbsalehi! Could you please provide a link for the "e-commerce" dataset described in the blog? The provided Kaggle link does not seem to exist anymore :(
