Environment: an HPC cluster using SLURM with an A100 GPU. The Docker image was converted to Apptainer in order to run on the HPC system.
loading annotations into memory...
0:00:00.070761
creating index...
index created!
Loading and preparing results...
DONE (t=0.01s)
creating index...
index created!
tokenization...
Traceback (most recent call last):
File "src/tasks/run_caption_VidSwinBert.py", line 679, in <module>
main(args)
File "src/tasks/run_caption_VidSwinBert.py", line 666, in main
train(args, train_dataloader, val_dataloader, vl_transformer, tokenizer, training_saver, optimizer, scheduler)
File "src/tasks/run_caption_VidSwinBert.py", line 277, in train
evaluate_file = evaluate(args, val_dataloader, model, tokenizer, checkpoint_dir)
File "src/tasks/run_caption_VidSwinBert.py", line 343, in evaluate
result = evaluate_on_coco_caption(predict_file, caption_file, outfile=evaluate_file)
File "/videocap/src/evalcap/utils_caption_evaluate.py", line 99, in evaluate_on_coco_caption
cocoEval.evaluate()
File "/videocap/src/evalcap/coco_caption/pycocoevalcap/eval.py", line 41, in evaluate
self.tokenize()
File "/videocap/src/evalcap/coco_caption/pycocoevalcap/eval.py", line 37, in tokenize
self.gts = tokenizer.tokenize(gts)
File "/videocap/src/evalcap/coco_caption/pycocoevalcap/tokenizer/ptbtokenizer.py", line 43, in tokenize
tmp_file = tempfile.NamedTemporaryFile(delete=False, dir=path_to_jar_dirname)
File "/opt/conda/lib/python3.8/tempfile.py", line 541, in NamedTemporaryFile
(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
File "/opt/conda/lib/python3.8/tempfile.py", line 250, in _mkstemp_inner
fd = _os.open(file, flags, 0o600)
PermissionError: [Errno 13] Permission denied: '/videocap/src/evalcap/coco_caption/pycocoevalcap/tokenizer/tmp8215_vrp'
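The traceback shows that `ptbtokenizer.py` creates its temporary file next to the Stanford tokenizer jar (`dir=path_to_jar_dirname`), and that directory is read-only inside the Apptainer image. One possible workaround (a sketch, not the project's official fix) is to create the temp file in a writable location such as the system temp directory, which on a SLURM node can be pointed at scratch space via `TMPDIR`:

```python
import tempfile

# Original call in ptbtokenizer.py (fails in a read-only container):
#   tmp_file = tempfile.NamedTemporaryFile(delete=False, dir=path_to_jar_dirname)
#
# Workaround sketch: create the temp file in the system temp directory instead.
# Note: PTBTokenizer later passes the file name to a java subprocess that runs
# with the jar's directory as its working directory, so if you move the temp
# file you must also pass tmp_file.name as an absolute path to that subprocess.
tmp_file = tempfile.NamedTemporaryFile(delete=False, dir=tempfile.gettempdir())
tmp_file.write(b"example sentence to tokenize\n")
tmp_file.close()
print(tmp_file.name)  # absolute path in a writable location
```

Alternatively, keeping the code unchanged and making the tokenizer directory writable from the Apptainer side (e.g. with a writable overlay or a bind mount of a host directory) avoids patching the source at all.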