
Conversation

@fkiraly
Collaborator

@fkiraly fkiraly commented Jul 25, 2025

Deduplicates utilities that were copy-pasted to avoid a scikit-base dependency early on.

Since the v1 testing framework will require scikit-base, it makes sense to now attempt a deduplication.

Adds scikit-base as a core dependency.

@fkiraly fkiraly added the enhancement New feature or request label Jul 25, 2025
@fkiraly fkiraly added the maintenance Continuous integration, unit testing & package distribution label Jul 25, 2025
@fkiraly fkiraly requested a review from yarnabrina as a code owner July 25, 2025 20:07
@fkiraly fkiraly marked this pull request as draft July 25, 2025 20:07
@fkiraly fkiraly marked this pull request as ready for review August 9, 2025 16:30
Member

@phoeenniixx phoeenniixx left a comment


I think there are still some places in the tests folder where `_get_installed_packages` is used, e.g. in `test_nbeats_kan` and `test_metrics`.
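For context, helpers like `_get_installed_packages` typically wrap `importlib.metadata` to check which soft dependencies are present. A minimal stdlib sketch of that pattern — illustrative only, not the actual pytorch-forecasting or scikit-base implementation:

```python
from importlib.metadata import distributions


def _get_installed_packages():
    """Return a dict mapping installed distribution names to versions.

    Illustrative sketch of the kind of vendored utility being
    deduplicated in this PR in favour of the scikit-base version.
    """
    return {dist.metadata["Name"]: dist.version for dist in distributions()}


# Example: gate a test on a soft dependency being installed
packages = _get_installed_packages()
has_numpy = "numpy" in packages
```

The point of the PR is that this logic should live in one place (scikit-base) rather than be copy-pasted into the tests folder.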

@fkiraly
Collaborator Author

fkiraly commented Jan 11, 2026

Yes, you are right - I forgot to commit some files.

@fkiraly
Collaborator Author

fkiraly commented Jan 11, 2026

@phoeenniixx, could you kindly run pre-commit on the files failing linting? Something is not quite working for me locally; the files do not get changed, and I am not sure what I might be doing wrong. Did you change something recently, e.g., the target Python version?

(I think my virus scanner just put my pre-commit env into quarantine.)

@fkiraly
Collaborator Author

fkiraly commented Jan 11, 2026

The rest should be fixed, though.

@phoeenniixx
Member

Did you change something recently, e.g., target python version?

Actually, yes. I changed the target Python version from 3.9 to 3.10, as mentioned in this comment: #1856 (comment). I also looked at the sktime pyproject; there, too, the target version was py310. I thought this would not cause an issue?

@phoeenniixx
Member

pre-commit seems to run fine in my env:

 [~/pytorch-forecasting]
 aryan   skbase-dep  pre-commit run --all-files                                  
trim trailing whitespace.................................................Passed
fix end of files.........................................................Passed
check yaml...............................................................Passed
check python ast.........................................................Passed
ruff.....................................................................Failed
- hook id: ruff
- exit code: 1
- files were modified by this hook

pytorch_forecasting/data/_tslib_data_module.py:23:14: UP007 Use `X | Y` for type annotations
   |
21 | from pytorch_forecasting.utils._coerce import _coerce_to_dict
22 | 
23 | NORMALIZER = Union[TorchNormalizer, EncoderNormalizer, NaNLabelEncoder]
   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP007
   |
   = help: Convert to `X | Y`

pytorch_forecasting/data/data_module.py:26:14: UP007 Use `X | Y` for type annotations
   |
24 | from pytorch_forecasting.utils._coerce import _coerce_to_dict
25 | 
26 | NORMALIZER = Union[TorchNormalizer, NaNLabelEncoder, EncoderNormalizer]
   |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP007
   |
   = help: Convert to `X | Y`

pytorch_forecasting/data/data_module.py:96:89: E501 Line too long (99 > 88)
   |
94 |         add_target_scales: bool = False,
95 |         add_encoder_length: bool | str = "auto",
96 |         target_normalizer: NORMALIZER | str | list[NORMALIZER] | tuple[NORMALIZER] | None = "auto",
   |                                                                                         ^^^^^^^^^^^ E501
97 |         categorical_encoders: dict[str, NaNLabelEncoder] | None = None,
98 |         scalers: dict[str, StandardScaler | RobustScaler | TorchNormalizer | EncoderNormalizer] | None = None,
   |

pytorch_forecasting/data/data_module.py:98:89: E501 Line too long (110 > 88)
    |
 96 |         target_normalizer: NORMALIZER | str | list[NORMALIZER] | tuple[NORMALIZER] | None = "auto",
 97 |         categorical_encoders: dict[str, NaNLabelEncoder] | None = None,
 98 |         scalers: dict[str, StandardScaler | RobustScaler | TorchNormalizer | EncoderNormalizer] | None = None,
    |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^ E501
 99 |         randomize_length: None | tuple[float, float] | bool = False,
100 |         batch_size: int = 32,
    |

pytorch_forecasting/data/encoders.py:1418:89: E501 Line too long (94 > 88)
     |
1416 |         return_norm: bool = False,
1417 |         target_scale: list[torch.Tensor] = None,
1418 |     ) -> list[tuple[np.ndarray | torch.Tensor, np.ndarray]] | list[np.ndarray | torch.Tensor]:
     |                                                                                         ^^^^^^ E501
1419 |         """
1420 |         Scale input data.
     |

pytorch_forecasting/data/timeseries/_timeseries.py:150:14: UP007 Use `X | Y` for type annotations
    |
150 | NORMALIZER = Union[TorchNormalizer, NaNLabelEncoder, EncoderNormalizer]
    |              ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP007
151 | 
152 | Columns = list[str]
    |
    = help: Convert to `X | Y`

pytorch_forecasting/data/timeseries/_timeseries.py:466:89: E501 Line too long (99 > 88)
    |
464 |         add_target_scales: bool = False,
465 |         add_encoder_length: bool | str = "auto",
466 |         target_normalizer: NORMALIZER | str | list[NORMALIZER] | tuple[NORMALIZER] | None = "auto",
    |                                                                                         ^^^^^^^^^^^ E501
467 |         categorical_encoders: dict[str, NaNLabelEncoder] | None = None,
468 |         scalers: dict[str, StandardScaler | RobustScaler | TorchNormalizer | EncoderNormalizer] | None = None,
    |

pytorch_forecasting/data/timeseries/_timeseries.py:468:89: E501 Line too long (110 > 88)
    |
466 |         target_normalizer: NORMALIZER | str | list[NORMALIZER] | tuple[NORMALIZER] | None = "auto",
467 |         categorical_encoders: dict[str, NaNLabelEncoder] | None = None,
468 |         scalers: dict[str, StandardScaler | RobustScaler | TorchNormalizer | EncoderNormalizer] | None = None,
    |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^ E501
469 |         randomize_length: None | tuple[float, float] | bool = False,
470 |         predict_mode: bool = False,
    |

pytorch_forecasting/models/nn/embeddings.py:40:89: E501 Line too long (105 > 88)
   |
38 |     def __init__(
39 |         self,
40 |         embedding_sizes: dict[str, tuple[int, int]] | dict[str, int] | list[int] | list[tuple[int, int]],
   |                                                                                         ^^^^^^^^^^^^^^^^^ E501
41 |         x_categoricals: list[str] = None,
42 |         categorical_groups: dict[str, list[str]] | None = None,
   |

pytorch_forecasting/models/nn/rnn.py:12:15: UP007 Use `X | Y` for type annotations
   |
10 | from torch.nn.utils import rnn
11 | 
12 | HiddenState = Union[tuple[torch.Tensor, torch.Tensor], torch.Tensor]
   |               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ UP007
   |
   = help: Convert to `X | Y`

pytorch_forecasting/models/xlstm/_xlstm.py:114:89: E501 Line too long (121 > 88)
    |
112 |         self,
113 |         x: dict[str, torch.Tensor],
114 |         hidden_states: tuple[torch.Tensor, torch.Tensor] | tuple[torch.Tensor, torch.Tensor, torch.Tensor] | None = None,
    |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E501
115 |     ) -> dict[str, torch.Tensor]:
116 |         """Forward Pass for the model."""
    |

pytorch_forecasting/utils/_utils.py:442:89: E501 Line too long (132 > 88)
    |
441 | def move_to_device(
442 |     x: dict[str, torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]] | torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor],
    |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E501
443 |     device: str | torch.DeviceObjType,
444 | ) -> dict[str, torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]] | torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]:
    |

pytorch_forecasting/utils/_utils.py:444:89: E501 Line too long (130 > 88)
    |
442 |     x: dict[str, torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]] | torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor],
443 |     device: str | torch.DeviceObjType,
444 | ) -> dict[str, torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]] | torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]:
    |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E501
445 |     """
446 |     Move object to device.
    |

pytorch_forecasting/utils/_utils.py:479:89: E501 Line too long (132 > 88)
    |
478 | def detach(
479 |     x: dict[str, torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]] | torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor],
    |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E501
480 | ) -> dict[str, torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]] | torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]:
481 |     """
    |

pytorch_forecasting/utils/_utils.py:480:89: E501 Line too long (130 > 88)
    |
478 | def detach(
479 |     x: dict[str, torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]] | torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor],
480 | ) -> dict[str, torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]] | torch.Tensor | list[torch.Tensor] | tuple[torch.Tensor]:
    |                                                                                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ E501
481 |     """
482 |     Detach object
    |

Found 406 errors (391 fixed, 15 remaining).

ruff-format..............................................................Failed
- hook id: ruff-format
- files were modified by this hook

10 files reformatted, 197 files left unchanged

nbqa-black...............................................................Passed
nbqa-ruff................................................................Passed
nbqa-check-ast...........................................................Passed


Is this a Windows issue?

@phoeenniixx
Member

py310 changes the formatting conventions drastically, which is why pre-commit is failing. Sorry, I did not know that. I think I might need to raise a PR with just a `pre-commit run --all-files` to solve this issue.

@fkiraly
Collaborator Author

fkiraly commented Jan 11, 2026

No problem - but could you kindly run pre-commit on these failing files first? The reason is that even if you raise a full pre-commit PR, it will now create merge conflicts, and it might be easier to get this PR in first (and/or release).

@phoeenniixx
Member

Done. I also updated the skbase version bound from <0.13.0 to <0.14.0, as 0.13.0 is the only version that supports Python 3.14.

