Hi, I have some questions about the fabricated branch #2
We first input all the object embeddings to combine with each verb, then remove infeasible HOIs at each step. We think this balances the object distribution for each verb. I suggest this implementation of FCL (https://github.com/zhihou7/FCL_VCOCO), which contains only the FCL code and is thus clearer.
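The step described above can be sketched as follows. This is a minimal NumPy illustration (not the repository's TensorFlow code): every object embedding is paired with every verb feature, and infeasible verb-object combinations are masked out. The function name and shapes are assumptions for illustration only.

```python
import numpy as np

def fabricate_and_filter(verb_feats, obj_embeddings, feasible_mask):
    """Pair every object embedding with every verb feature, then drop
    infeasible verb-object (HOI) combinations.

    verb_feats:     (V, D) verb features in the batch
    obj_embeddings: (O, E) identity embeddings of all objects
    feasible_mask:  (V, O) boolean, True where the HOI is feasible
    Returns a (K, D + E) array of fabricated HOI features,
    where K is the number of feasible pairs.
    """
    V, D = verb_feats.shape
    O, E = obj_embeddings.shape
    # Tile so each verb meets every object: both become (V, O, *)
    verbs = np.repeat(verb_feats[:, None, :], O, axis=1)
    objs = np.repeat(obj_embeddings[None, :, :], V, axis=0)
    pairs = np.concatenate([verbs, objs], axis=-1)
    # Keep only the feasible verb-object combinations
    return pairs[feasible_mask]
```

Pairing first and filtering afterwards is what lets every verb see the full object vocabulary before the infeasible combinations are removed.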
Thank you for your reply. Is the implementation the same on the V-COCO and HICO datasets? I am not very familiar with TensorFlow, and I have not found the relevant code for the HICO dataset.
Yes, the core part of FCL is similar. However, on V-COCO we do not use the verb auxiliary loss, since there are only 24 verbs. In that repository (https://github.com/zhihou7/FCL_VCOCO), the FCL code is included as a single commit: zhihou7/FCL_VCOCO@34ada5e. The FCL code for HICO is at https://github.com/zhihou7/HOI-CL/blob/master/lib/networks/Fabricator.py
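For context on the verb auxiliary loss mentioned above, a common form is a cross-entropy over verb classes computed on the fabricated features. This is a hedged sketch of that idea, not the repository's implementation; the function name and shapes are assumptions.

```python
import numpy as np

def verb_auxiliary_loss(verb_logits, verb_labels):
    """Cross-entropy over verb classes for fabricated HOI features.

    verb_logits: (N, num_verbs) raw scores
    verb_labels: (N,) integer verb indices
    """
    # Numerically stable log-softmax
    shifted = verb_logits - verb_logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Mean negative log-likelihood of the true verb class
    return -log_probs[np.arange(len(verb_labels)), verb_labels].mean()
```

With only 24 verbs on V-COCO, such an auxiliary term adds little signal, which is consistent with it being dropped there.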
@zhihou7 Hi, can this model be run with a resnet50 backbone? In download_dataset.sh, there is only a download link for resnet50, but in the FCL project the model is trained on resnet101.
The URL (https://drive.google.com/file/d/0B1_fAEgxdnvJR1N3c1FYRGo1S1U/view) in download_dataset.sh points to the resnet101 weights. I have also uncommented the line in Train_FCL_HICO.py that loads the resnet50 weights; you can update the code. You can change the model name FCL_union_l2_zs_s0_vloss2_varl_gan_dax_rands_rew_aug5_x5new_res101 to FCL_union_l2_zs_s0_vloss2_varl_gan_dax_rands_rew_aug5_x5new_res50 to use the resnet50 backbone. However, I have not tested resnet50 in this repository. Regards
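The naming convention above suggests the backbone is selected from the model-name suffix. A hypothetical helper (not from the codebase) illustrating that switch:

```python
def backbone_from_model_name(model_name):
    """Pick the backbone from the model-name suffix, mirroring the
    _res101 / _res50 naming convention used in the repository.
    This helper is illustrative only; it does not exist in the codebase."""
    if model_name.endswith('_res101'):
        return 'resnet_v1_101'
    if model_name.endswith('_res50'):
        return 'resnet_v1_50'
    raise ValueError('unknown backbone in model name: %s' % model_name)
```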
@zhihou7 Sorry to bother you again. In Fabricator.py, in var_fabricate_gen_lite(), why is convert_emb_feats called twice? Looking forward to your reply.
You are welcome. The first call should be commented out (you can see that I do not use the result returned by var_fabricate_gen_lite). In var_fabricate_gen, we call convert_emb_feats twice (in fact, we also call convert_emb_feats for fine-tuning, so three times in total), like this:
The implementation of var_fabricate_gen aims to regularize fake objects against real objects (e.g., with a cosine loss or a contrastive loss). We did not successfully fuse FCL and VCL in a single model, i.e., it does not improve the result much. However, our preliminary experiments found that an additional loss (e.g., a contrastive loss) can achieve better results, especially when the batch size is larger. I hope this information is helpful. Regards,
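The cosine-loss variant mentioned above can be sketched in a few lines. This is a minimal NumPy illustration of regularizing fabricated object features toward matched real object features; the function name and the assumption that features are matched row-by-row are mine, not the repository's.

```python
import numpy as np

def cosine_regularizer(fake_feats, real_feats):
    """Cosine loss pulling fabricated object features toward matched
    real object features (row i of each array is assumed to be the
    same object class).  Returns 0 when the pairs are perfectly aligned."""
    f = fake_feats / np.linalg.norm(fake_feats, axis=1, keepdims=True)
    r = real_feats / np.linalg.norm(real_feats, axis=1, keepdims=True)
    cos_sim = (f * r).sum(axis=1)   # per-pair cosine similarity in [-1, 1]
    return (1.0 - cos_sim).mean()   # average misalignment
```

A contrastive loss would additionally push fake features away from real features of *other* classes, which is why it tends to benefit from larger batches: more in-batch negatives.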
Yes. You can also use a word embedding to generate it. However, we find that a randomly initialized embedding is slightly better and simpler.
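The two initialization choices discussed above can be sketched as follows. This is an illustrative helper (the function name and the 0.02 standard deviation are assumptions, not taken from the codebase): by default the object embedding table is randomly initialized, and a pre-trained word-vector table can be substituted if desired.

```python
import numpy as np

def init_object_embeddings(num_objects, dim, word_vectors=None, seed=0):
    """Object identity embeddings.

    If word_vectors (num_objects, dim) is given, use it; otherwise fall
    back to a small random Gaussian initialization, which the authors
    report works slightly better and is simpler.
    """
    if word_vectors is not None:
        assert word_vectors.shape == (num_objects, dim)
        return word_vectors.astype(np.float32).copy()
    rng = np.random.default_rng(seed)
    return rng.normal(0.0, 0.02, size=(num_objects, dim)).astype(np.float32)
```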
In each training step, do you input all the object embeddings together with the verb into the Fabricator model and then filter out the wrong combinations? Or do you take only one object at a time?