Replies: 1 comment
- Thanks for raising this issue! Moving the discussion to #3220.
-
I am getting this error when using the sequence combiner. Two of my input features are sequence features, and even though I have set the same maximum sequence length for both, I still get this error:
ValueError: The sequence length of the input feature tm_jf_sen_0 is 42 and is different from the sequence length of the main sequence feature equity_seq which is 39. Shape of tm_jf_sen_0: torch.Size([2, 42, 256]), shape of equity_seq: torch.Size([2, 39, 256]). Sequence lengths of all sequential features must be the same in order to be concatenated by the sequence concat combiner. Try to impose the same max sequence length as a preprocessing parameter to both features or to reduce the output of tm_jf_sen_0.
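For context (my reading of the error, not something confirmed in this thread): `max_sequence_length` in preprocessing appears to act only as an upper bound, so each feature's effective length is driven by the longest sequence observed in its own column, which is how two features both capped at 64 can come out as 42 and 39. A minimal PyTorch sketch of the underlying shape constraint, with made-up random tensors standing in for the two encoder outputs:

```python
import torch
import torch.nn.functional as F

# Stand-ins for the encoder outputs from the error: [batch, seq_len, hidden]
tm_jf_sen_0 = torch.randn(2, 42, 256)
equity_seq = torch.randn(2, 39, 256)

# Concatenating along the hidden dimension requires equal sequence lengths;
# this is the constraint the sequence concat combiner is enforcing:
# torch.cat([tm_jf_sen_0, equity_seq], dim=-1)  # RuntimeError: 42 vs 39 in dim 1

# Right-padding the shorter tensor along the time dimension aligns the shapes:
pad_len = tm_jf_sen_0.shape[1] - equity_seq.shape[1]
equity_seq_padded = F.pad(equity_seq, (0, 0, 0, pad_len))  # pads dim 1 at the end

combined = torch.cat([tm_jf_sen_0, equity_seq_padded], dim=-1)
print(combined.shape)  # torch.Size([2, 42, 512])
```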
You can see my config here:
2023-03-07 13:32:14 | INFO |
{ 'combiner': {'type': 'sequence'},
  'input_features': [
      { 'encoder': { 'embedding_size': 64, 'max_sequence_length': 64,
                     'num_layers': 3, 'reduce_output': None, 'type': 'rnn'},
        'name': 'tm_jf_sen_0',
        'preprocessing': {'max_sequence_length': 64},
        'tied': None,
        'type': 'sequence'},
      { 'encoder': { 'embedding_size': 64, 'max_sequence_length': 64,
                     'num_layers': 3, 'reduce_output': None, 'type': 'rnn'},
        'name': 'tm_jf_sen_1',
        'preprocessing': {'max_sequence_length': 64},
        'tied': 'tm_jf_sen_0',
        'type': 'sequence'},
      { 'encoder': { 'embedding_size': 64, 'max_sequence_length': 64,
                     'num_layers': 3, 'reduce_output': None, 'type': 'rnn'},
        'name': 'tm_jf_sen_2',
        'preprocessing': {'max_sequence_length': 64},
        'tied': 'tm_jf_sen_0',
        'type': 'sequence'},
      { 'encoder': { 'embedding_size': 64, 'max_sequence_length': 64,
                     'num_layers': 3, 'reduce_output': None, 'type': 'rnn'},
        'name': 'equity_seq',
        'preprocessing': {'max_sequence_length': 64},
        'type': 'sequence'},
      { 'encoder': {'output_size': 1, 'type': 'passthrough'}, 'name': 'funding_total', 'type': 'number'},
      { 'encoder': {'output_size': 1, 'type': 'passthrough'}, 'name': 'funding_latest_amount', 'type': 'number'},
      { 'encoder': {'output_size': 1, 'type': 'passthrough'}, 'name': 'funding_investor_number', 'type': 'number'},
      { 'encoder': {'output_size': 1, 'type': 'passthrough'}, 'name': 'funding_investor_score_count', 'type': 'number'},
      { 'encoder': {'output_size': 1, 'type': 'passthrough'}, 'name': 'funding_investor_score_mean', 'type': 'number'},
      { 'encoder': {'output_size': 1, 'type': 'passthrough'}, 'name': 'funding_investor_score_std', 'type': 'number'},
      { 'encoder': {'output_size': 1, 'type': 'passthrough'}, 'name': 'funding_investor_score_50%', 'type': 'number'},
      { 'encoder': {'output_size': 1, 'type': 'passthrough'}, 'name': 'funding_investor_score_max', 'type': 'number'}],
  'ludwig_version': '0.7.2',
  'output_features': [
      { 'clip': None,
        'decoder': { 'bias_initializer': 'zeros', 'fc_activation': 'relu',
                     'fc_bias_initializer': 'zeros', 'fc_dropout': 0,
                     'fc_layers': None, 'fc_norm': None, 'fc_norm_params': None,
                     'fc_output_size': 256, 'fc_use_bias': True,
                     'fc_weights_initializer': 'xavier_uniform', 'input_size': None,
                     'num_fc_layers': 2, 'type': 'regressor', 'use_bias': True,
                     'weights_initializer': 'xavier_uniform'},
        'dependencies': ['assigned_binary_label'],
        'input_size': None,
        'loss': {'class_weights': None, 'type': 'mean_squared_error', 'weight': 1},
        'name': 'assigned_label',
        'num_classes': None,
        'preprocessing': { 'computed_fill_value': 0, 'fill_value': 0,
                           'missing_value_strategy': 'drop_row', 'normalization': None},
        'reduce_dependencies': 'sum',
        'reduce_input': 'sum',
        'type': 'number'},
      { 'calibration': False,
        'column': 'assigned_binary_label',
        'decoder': { 'bias_initializer': 'zeros', 'fc_activation': 'relu',
                     'fc_bias_initializer': 'zeros', 'fc_dropout': 0,
                     'fc_layers': None, 'fc_norm': None, 'fc_norm_params': None,
                     'fc_output_size': 256, 'fc_use_bias': True,
                     'fc_weights_initializer': 'xavier_uniform', 'input_size': None,
                     'num_fc_layers': 2, 'type': 'regressor', 'use_bias': True,
                     'weights_initializer': 'xavier_uniform'},
        'dependencies': [],
        'input_size': None,
        'loss': { 'class_weights': None, 'confidence_penalty': 0,
                  'positive_class_weight': 10, 'robust_lambda': 0,
                  'type': 'binary_weighted_cross_entropy', 'weight': 2},
        'name': 'assigned_binary_label',
        'num_classes': None,
        'preprocessing': { 'computed_fill_value': None, 'fallback_true_label': None,
                           'fill_value': None, 'missing_value_strategy': 'drop_row'},
        'reduce_dependencies': 'sum',
        'reduce_input': 'sum',
        'threshold': 0.5,
        'type': 'binary'}],
  'preprocessing': {'undersample_majority': None},
  'trainer': { 'early_stop': 20, 'epochs': 150, 'evaluate_training_set': True,
               'learning_rate': 0.0001, 'optimizer': {'type': 'adam'},
               'validation_field': 'assigned_label', 'validation_metric': 'koble_f1'}}
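A possible workaround, sketched here as an assumption rather than a confirmed fix: if the per-timestep outputs do not need to stay aligned, each sequence encoder can collapse its own time dimension via `reduce_output`, and the plain `concat` combiner can be used instead, since it has no equal-length requirement. In the same dict notation as the logged config, showing only the parts that would change:

```python
# Hypothetical variant of the config above; remaining features stay unchanged.
config_variant = {
    'combiner': {'type': 'concat'},  # concat combiner: no sequence-length constraint
    'input_features': [
        {'name': 'tm_jf_sen_0', 'type': 'sequence',
         'encoder': {'type': 'rnn', 'reduce_output': 'last'}},  # collapse the time dim
        {'name': 'equity_seq', 'type': 'sequence',
         'encoder': {'type': 'rnn', 'reduce_output': 'last'}},
        # ... other input features as before ...
    ],
}
```

The trade-off is that the combiner then sees one vector per feature instead of a per-timestep sequence.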