10 changes: 5 additions & 5 deletions .pre-commit-config.yaml
@@ -42,7 +42,7 @@ repos:
language: system
pass_filenames: false
- repo: https://github.com/adrienverge/yamllint
rev: v1.37.1
rev: v1.38.0
hooks:
- id: yamllint
args: ["--strict"]
@@ -62,10 +62,10 @@ repos:
args: ["--config", "./python-package/pyproject.toml"]
additional_dependencies:
- breathe>=4.36.0
- sphinx>=8.1.3
- sphinx_rtd_theme>=3.0.1
- sphinx>=9.1.0
- sphinx-rtd-theme>=3.0.2
- repo: https://github.com/astral-sh/ruff-pre-commit
rev: v0.14.10
rev: v0.14.13
hooks:
- id: ruff-check
args: ["--config", "python-package/pyproject.toml"]
@@ -92,7 +92,7 @@ repos:
args: ["--force-exclude"]
exclude: (\.gitignore$)|(^\.editorconfig$)
- repo: https://github.com/henryiii/validate-pyproject-schema-store
rev: 2025.11.21
rev: 2026.01.10
hooks:
- id: validate-pyproject
files: python-package/pyproject.toml$
4 changes: 2 additions & 2 deletions docs/Parallel-Learning-Guide.rst
@@ -58,7 +58,7 @@ See `this SynapseML example`_ for additional information on using LightGBM on Sp
Dask
^^^^

.. versionadded:: 3.2.0
.. versionadded:: 3.2.0

LightGBM's Python-package supports distributed learning via `Dask`_. This integration is maintained by LightGBM's maintainers.
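For context, a minimal sketch of the Dask workflow this section describes, assuming a local Dask cluster; the data shapes and ``LocalCluster`` settings are illustrative only, not taken from this change:

.. code:: python

    import dask.array as da
    import lightgbm as lgb
    from distributed import Client, LocalCluster

    # spin up a local two-worker cluster; any Dask cluster works
    client = Client(LocalCluster(n_workers=2))

    # Dask collections are partitioned across workers
    X = da.random.random((1_000, 10), chunks=(100, 10))
    y = da.random.random((1_000,), chunks=(100,))

    model = lgb.DaskLGBMRegressor(n_estimators=50)
    model.fit(X, y)
    preds = model.predict(X)  # returns a lazily-evaluated Dask Array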

@@ -233,7 +233,7 @@ You could edit your firewall rules to allow communication between any of the wor
Using Custom Objective Functions with Dask
******************************************

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

It is possible to customize the boosting process by providing a custom objective function written in Python.
See the Dask API's documentation for details on how to implement such functions.
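A minimal sketch of such a custom objective, assuming the scikit-learn-style ``(y_true, y_pred) -> (grad, hess)`` signature; the squared-error example is illustrative:

.. code:: python

    import numpy as np
    import lightgbm as lgb

    def l2_objective(y_true, y_pred):
        # gradient and hessian of 0.5 * (y_pred - y_true)**2
        grad = y_pred - y_true
        hess = np.ones_like(y_true)
        return grad, hess

    # passed in place of a built-in objective string
    model = lgb.DaskLGBMRegressor(objective=l2_objective)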
2 changes: 1 addition & 1 deletion docs/Python-API.rst
@@ -37,7 +37,7 @@ Scikit-learn API
Dask API
--------

.. versionadded:: 3.2.0
.. versionadded:: 3.2.0

.. autosummary::
:toctree: pythonapi/
4 changes: 2 additions & 2 deletions docs/README.rst
@@ -51,7 +51,7 @@ You can build the documentation locally without Docker. Just install Doxygen and

.. code:: sh

pip install breathe sphinx 'sphinx_rtd_theme>=0.5'
pip install 'breathe>=4.36.0' 'sphinx>=9.1.0' 'sphinx-rtd-theme>=3.0.2'
make html

Note that this will not build the R documentation.
@@ -64,6 +64,6 @@ If you faced any problems with Doxygen installation or you simply do not need do

.. code:: sh

pip install sphinx 'sphinx_rtd_theme>=0.5'
pip install 'sphinx>=9.1.0' 'sphinx-rtd-theme>=3.0.2'
export C_API=NO || set C_API=NO
make html
8 changes: 4 additions & 4 deletions docs/env.yml
@@ -3,9 +3,9 @@ channels:
- nodefaults
- conda-forge
dependencies:
- breathe>=4.36
- breathe>=4.36.0
- doxygen>=1.13.2
- python=3.12
- python=3.14
- r-base>=4.5.1
- r-data.table=1.17.8
- r-jsonlite=2.0.0
@@ -16,5 +16,5 @@ dependencies:
- r-roxygen2=7.3.3
# skipping scikit-learn 1.7.1 because of the problems described in https://github.com/microsoft/LightGBM/issues/6978
- scikit-learn>=1.6.1,!=1.7.1
- sphinx>=8.1.3
- sphinx_rtd_theme>=3.0.1
- sphinx>=9.1.0
- sphinx_rtd_theme>=3.0.2
26 changes: 13 additions & 13 deletions python-package/lightgbm/basic.py
@@ -927,7 +927,7 @@ class Sequence(abc.ABC):
- With random access, **data sampling does not need to go through all data**.
- With range data access, there's **no need to read all data into memory, which reduces memory usage**.

.. versionadded:: 3.3.0
.. versionadded:: 3.3.0
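A minimal sketch of a ``Sequence`` subclass backed by an in-memory numpy array (a real use case would wrap out-of-core storage such as HDF5); the ``batch_size`` value is illustrative:

.. code:: python

    import numpy as np
    import lightgbm as lgb

    class NumpySequence(lgb.Sequence):
        def __init__(self, data, batch_size=4096):
            self.data = data
            self.batch_size = batch_size  # rows fetched per range read

        def __getitem__(self, idx):
            # must support both integer (random access) and slice (range access)
            return self.data[idx]

        def __len__(self):
            return len(self.data)

    ds = lgb.Dataset(NumpySequence(np.random.rand(10_000, 5)), label=np.random.rand(10_000))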

Attributes
----------
@@ -1127,7 +1127,7 @@ def predict(
If True, ensure that the features used to predict match the ones used to train.
Used only if data is pandas DataFrame.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0
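For example (``booster`` and ``df`` are placeholders for a trained model and a pandas DataFrame with the training schema):

.. code:: python

    # raises an error if df's column names do not match those seen at training time
    preds = booster.predict(df, validate_features=True)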

Returns
-------
@@ -3366,7 +3366,7 @@ def num_feature(self) -> int:
def feature_num_bin(self, feature: Union[int, str]) -> int:
"""Get the number of bins for a feature.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

Parameters
----------
@@ -4781,12 +4781,12 @@ def refit(
reference : Dataset or None, optional (default=None)
Reference for ``data``.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

weight : list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)
Weight for each ``data`` instance. Weights should be non-negative.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

group : list, numpy 1-D array, pandas Series, pyarrow Array, pyarrow ChunkedArray or None, optional (default=None)
Group/query size for ``data``.
@@ -4795,18 +4795,18 @@
For example, if you have a 100-document dataset with ``group = [10, 20, 40, 10, 10, 10]``, that means that you have 6 groups,
where the first 10 records are in the first group, records 11-30 are in the second group, records 31-70 are in the third group, etc.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

init_score : list, list of lists (for multi-class task), numpy array, pandas Series, pandas DataFrame (for multi-class task), pyarrow Array, pyarrow ChunkedArray, pyarrow Table (for multi-class task) or None, optional (default=None)
Init score for ``data``.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

feature_name : list of str, or 'auto', optional (default="auto")
Feature names for ``data``.
If 'auto' and data is pandas DataFrame, data columns names are used.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

categorical_feature : list of str or int, or 'auto', optional (default="auto")
Categorical features for ``data``.
@@ -4819,23 +4819,23 @@
The output cannot be monotonically constrained with respect to a categorical feature.
Floating point numbers in categorical features will be rounded towards 0.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

dataset_params : dict or None, optional (default=None)
Other parameters for Dataset ``data``.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

free_raw_data : bool, optional (default=True)
If True, raw data is freed after constructing inner Dataset for ``data``.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

validate_features : bool, optional (default=False)
If True, ensure that the features used to refit the model match the original ones.
Used only if data is pandas DataFrame.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

**kwargs
Other parameters for refit.
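A minimal sketch of a ``refit`` call, assuming hypothetical ``X_new``/``y_new`` arrays with the training schema; ``decay_rate`` blends the old and the re-estimated leaf outputs:

.. code:: python

    # keep the learned tree structure, re-estimate leaf outputs on new data
    refitted = booster.refit(data=X_new, label=y_new, decay_rate=0.9)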
@@ -4940,7 +4940,7 @@ def set_leaf_output(
) -> "Booster":
"""Set the output of a leaf.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0
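For example (``booster`` is a placeholder for a trained model; the indices and value are illustrative):

.. code:: python

    # overwrite the predicted value of leaf 0 in the first tree
    booster.set_leaf_output(tree_id=0, leaf_id=0, value=0.5)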

Parameters
----------
2 changes: 1 addition & 1 deletion python-package/lightgbm/callback.py
@@ -488,7 +488,7 @@ def early_stopping(
If float, this single value is used for all metrics.
If list, its length should match the total number of metrics.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0
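A minimal usage sketch, assuming hypothetical ``params``/``dtrain``/``dvalid`` objects; the thresholds are illustrative:

.. code:: python

    import lightgbm as lgb

    model = lgb.train(
        params,
        dtrain,
        valid_sets=[dvalid],
        # stop if no metric improves by at least 1e-4 over 10 rounds
        callbacks=[lgb.early_stopping(stopping_rounds=10, min_delta=1e-4)],
    )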

Returns
-------
6 changes: 3 additions & 3 deletions python-package/lightgbm/plotting.py
@@ -665,7 +665,7 @@ def create_tree_digraph(
Single row with the same structure as the training data.
If not None, the plot will highlight the path that sample takes through the tree.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

max_category_values : int, optional (default=10)
The maximum number of category values to display in tree nodes; if the number of thresholds is greater than this value, thresholds will be collapsed and displayed on the label tooltip instead.
@@ -683,7 +683,7 @@
graph = lgb.create_tree_digraph(clf, max_category_values=5)
HTML(graph._repr_image_svg_xml())

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

**kwargs
Other parameters passed to ``Digraph`` constructor.
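Similarly, a hedged sketch of ``example_case`` usage (``clf`` and ``X`` are placeholders for a fitted model and its training DataFrame):

.. code:: python

    # highlight the decision path of the first row through tree 0
    graph = lgb.create_tree_digraph(clf, tree_index=0, example_case=X.iloc[:1])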
@@ -800,7 +800,7 @@ def plot_tree(
Single row with the same structure as the training data.
If not None, the plot will highlight the path that sample takes through the tree.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0

**kwargs
Other parameters passed to ``Digraph`` constructor.
8 changes: 4 additions & 4 deletions python-package/lightgbm/sklearn.py
@@ -584,7 +584,7 @@ def __init__(
to using the number of physical cores in the system (its correct detection requires
either the ``joblib`` or the ``psutil`` util libraries to be installed).

.. versionchanged:: 4.0.0
.. versionchanged:: 4.0.0

importance_type : str, optional (default='split')
The type of feature importance to be filled into ``feature_importances_``.
@@ -1233,7 +1233,7 @@ def n_estimators_(self) -> int:
This might be less than parameter ``n_estimators`` if early stopping was enabled or
if boosting stopped early due to limits on complexity like ``min_gain_to_split``.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0
"""
if not self.__sklearn_is_fitted__():
raise LGBMNotFittedError("No n_estimators found. Need to call fit beforehand.")
@@ -1246,7 +1246,7 @@ def n_iter_(self) -> int:
This might be less than parameter ``n_estimators`` if early stopping was enabled or
if boosting stopped early due to limits on complexity like ``min_gain_to_split``.

.. versionadded:: 4.0.0
.. versionadded:: 4.0.0
"""
if not self.__sklearn_is_fitted__():
raise LGBMNotFittedError("No n_iter found. Need to call fit beforehand.")
@@ -1295,7 +1295,7 @@ def feature_name_(self) -> List[str]:
def feature_names_in_(self) -> np.ndarray:
""":obj:`array` of shape = [n_features]: scikit-learn compatible version of ``.feature_name_``.

.. versionadded:: 4.5.0
.. versionadded:: 4.5.0
"""
if not self.__sklearn_is_fitted__():
raise LGBMNotFittedError("No feature_names_in_ found. Need to call fit beforehand.")