Hi. I want to train the model on my own dataset. My training set has 400 samples and my validation set has 100 samples.
The command and output are below.
Command
python .\train.py --valid_data Z:\HY\results\Validation --train_data Z:\HY\results\Training --select_data Train --batch_ratio 1 --Transformation None --FeatureExtract VGG --SequenceModeling BiLSTM --Prediction CTC --data_filtering_off --sensitive --workers 0
--workers 0 is used to work around TypeError: can't pickle 'Environment' objects (see #17, #321)
Output
dataset_root: Z:\HY\results\Training
opt.select_data: ['Train']
opt.batch_ratio: ['1']
dataset_root: Z:\HY\results\Training dataset: Train
sub-directory: /Train num samples: 400
num total samples of Train: 400 x 1.0 (total_data_usage_ratio) = 400
num samples of Train per batch: 192 x 1.0 (batch_ratio) = 192
Total_batch_size: 192 = 192
dataset_root: Z:\HY\results\Validation dataset: /
Traceback (most recent call last):
File "C:\Users\HY\Desktop\New folder\New folder\deep-text-recognition-benchmark\train.py", line 317, in &lt;module&gt;
train(opt)
File "C:\Users\HY\Desktop\New folder\New folder\deep-text-recognition-benchmark\train.py", line 35, in train
valid_dataset, valid_dataset_log = hierarchical_dataset(root=opt.valid_data, opt=opt)
File "C:\Users\HY\Desktop\New folder\New folder\deep-text-recognition-benchmark\dataset.py", line 118, in hierarchical_dataset
dataset = LmdbDataset(dirpath, opt)
File "C:\Users\HY\Desktop\New folder\New folder\deep-text-recognition-benchmark\dataset.py", line 135, in __init__
self.env = lmdb.open(root, max_readers=32, readonly=True, lock=False, readahead=False, meminit=False)
lmdb.Error: Z:\HY\results\Validation/Validation: The paging file is too small for this operation to complete.
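For what it's worth, the failing path `Z:\HY\results\Validation/Validation` shows that `hierarchical_dataset` descends into a `Validation` sub-directory and tries `lmdb.open` there, so the error fires on whatever directory it finds, whether or not it holds a valid LMDB. A minimal stdlib sketch (the `Z:\HY\results\...` paths are the ones from this report, adjust for your setup; `find_lmdb_dirs` is a hypothetical helper, not part of the repo) to confirm each dataset root actually contains an LMDB `data.mdb` before launching train.py:

```python
import os

def find_lmdb_dirs(root):
    """Return every sub-directory under root that contains an LMDB data.mdb file."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        if "data.mdb" in filenames:
            hits.append(dirpath)
    return hits

if __name__ == "__main__":
    # Check both dataset roots used in the command above.
    for root in (r"Z:\HY\results\Training", r"Z:\HY\results\Validation"):
        dirs = find_lmdb_dirs(root) if os.path.isdir(root) else []
        print(root, "->", dirs if dirs else "no LMDB found")
```

If the validation LMDB is present but the open still fails with "The paging file is too small", that error usually points at Windows virtual memory rather than the dataset itself, so enlarging the page file is worth trying as well.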