XMem V7 Fork

Forked from hkchengrex/XMem.

To launch training:

python -m torch.distributed.launch --master_port 25763 --nproc_per_node=2 train.py --exp_id EXPERIMENT_NAME --stage 3 --load_network saves/XMem-s012.pth
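Note: newer PyTorch versions deprecate torch.distributed.launch in favor of torchrun. Assuming train.py reads the local rank from the LOCAL_RANK environment variable (which torchrun sets), an equivalent launch would be:

torchrun --master_port 25763 --nproc_per_node=2 train.py --exp_id EXPERIMENT_NAME --stage 3 --load_network saves/XMem-s012.pth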

Training data

The original training data has been moved to Dodo at /data/thom/xmem_training_data. The dataloaders are configured in train.py, in the functions named renew_<datasetname>_loader. The format for each "video" is a folder of annotations (2D .png images with pixel values 0 and 1) and a folder of images. For in-context learning you would have just one video. A layout check is sketched below.
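A minimal sanity check for this layout, assuming a DAVIS-style directory structure (the JPEGImages/Annotations folder names and the example video name are assumptions, not enforced by this repo):

# check_layout.py - verify one custom "video", assuming a DAVIS-style layout
# (folder names below are assumptions):
#
#   <root>/JPEGImages/<video>/*.jpg
#   <root>/Annotations/<video>/*.png   (2D masks, pixel values 0/1)

import os
import numpy as np
from PIL import Image

def check_video(root, video, image_dir='JPEGImages', anno_dir='Annotations'):
    frames = sorted(os.listdir(os.path.join(root, image_dir, video)))
    masks = sorted(os.listdir(os.path.join(root, anno_dir, video)))
    for name in masks:
        mask = np.array(Image.open(os.path.join(root, anno_dir, video, name)))
        assert mask.ndim == 2, f'{name}: expected a 2D mask, got shape {mask.shape}'
        assert set(np.unique(mask)) <= {0, 1}, f'{name}: expected only 0/1 values'
    print(f'{video}: {len(frames)} frames, {len(masks)} masks look OK')

check_video('/data/thom/xmem_training_data', 'example_video')  # hypothetical video name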

About

[ECCV 2022] XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
