
Commit: VW version requirement and documentation on config_constraints vs metric_constraints (#686)

* add vw version requirement

* vw version

* version range

* add documentation

* vw version range

* skip test on py3.10

* vw version

* rephrase

* don't install vw on py 3.10

* move import location

* remove inherit

* 3.10 in version

Co-authored-by: Chi Wang <[email protected]>
qingyun-wu and sonichi authored Aug 16, 2022
1 parent 6c7d373 commit 8b3c6e4
Showing 4 changed files with 25 additions and 14 deletions.
4 changes: 4 additions & 0 deletions .github/workflows/python-package.yml
@@ -51,6 +51,10 @@ jobs:
      if: (matrix.os == 'macOS-latest' || matrix.os == 'ubuntu-latest') && matrix.python-version != '3.9' && matrix.python-version != '3.10'
      run: |
        pip install -e .[forecast]
    - name: Install vw on python < 3.10
      if: matrix.python-version != '3.10'
      run: |
        pip install -e .[vw]
    - name: Lint with flake8
      run: |
        # stop the build if there are Python syntax errors or undefined names
3 changes: 1 addition & 2 deletions setup.py
@@ -54,7 +54,6 @@
"catboost>=0.26",
"rgf-python",
"optuna==2.8.0",
"vowpalwabbit",
"openml",
"statsmodels>=0.12.2",
"psutil==5.8.0",
@@ -79,7 +78,7 @@
"nni",
],
"vw": [
"vowpalwabbit",
"vowpalwabbit>=8.10.0, <9.0.0",
],
"nlp": [
"transformers[torch]==4.18",
11 changes: 8 additions & 3 deletions test/test_autovw.py
@@ -1,18 +1,17 @@
import unittest

import numpy as np
import scipy.sparse

import pandas as pd
from sklearn.metrics import mean_squared_error, mean_absolute_error
import logging
from flaml.tune import loguniform, polynomial_expansion_set
from vowpalwabbit import pyvw
from flaml import AutoVW
import string
import os
import openml
from requests.exceptions import SSLError
import sys
import pytest

VW_DS_DIR = "test/data/"
NS_LIST = list(string.ascii_lowercase) + list(string.ascii_uppercase)
@@ -369,8 +368,14 @@ def get_vw_tuning_problem(tuning_hp="NamesapceInteraction"):
return vw_oml_problem_args, vw_online_aml_problem


@pytest.mark.skipif(
    "3.10" in sys.version,
    reason="do not run on py 3.10",
)
class TestAutoVW(unittest.TestCase):
    def test_vw_oml_problem_and_vanilla_vw(self):
        from vowpalwabbit import pyvw

        vw_oml_problem_args, vw_online_aml_problem = get_vw_tuning_problem()
        vanilla_vw = pyvw.vw(**vw_oml_problem_args["fixed_hp_config"])
        cumulative_loss_list = online_learning_loop(
21 changes: 12 additions & 9 deletions website/docs/Use-Cases/Tune-User-Defined-Function.md
@@ -265,24 +265,27 @@
A user can specify constraints on the configurations to be satisfied via the argument `config_constraints`.
In the following code example, we constrain the output of `my_model_size`, which takes a configuration as input and outputs a numerical value, to be no larger than 40.

```python
def my_model_size(config):
    return config["n_estimators"] * config["max_leaves"]

analysis = tune.run(...,
    config_constraints=[(my_model_size, "<=", 40)],
)
```
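To make the check concrete, here is a minimal standalone sketch of how such a constraint tuple can be applied to a single candidate configuration (the helper `check_config_constraint` is hypothetical, not part of FLAML's API):

```python
def my_model_size(config):
    # a config constraint function maps a configuration dict to a number
    return config["n_estimators"] * config["max_leaves"]


def check_config_constraint(config, fn, op, threshold):
    # hypothetical helper: compare fn(config) against the threshold
    value = fn(config)
    return value <= threshold if op == "<=" else value >= threshold


print(check_config_constraint({"n_estimators": 4, "max_leaves": 10},
                              my_model_size, "<=", 40))  # True (4 * 10 = 40 <= 40)
print(check_config_constraint({"n_estimators": 8, "max_leaves": 10},
                              my_model_size, "<=", 40))  # False (80 > 40)
```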

You can also specify a list of metric constraints to be satisfied via the argument `metric_constraints`. Each element in the `metric_constraints` list is a tuple that consists of (1) a string specifying the name of the metric (the metric name must be defined and returned in the user-defined `evaluation_function`); (2) an operation chosen from "<=" or ">="; (3) a numerical threshold.

In the following code example, we constrain the metric `score` to be no larger than 0.4.
In the following code example, we constrain the metric `training_cost` to be no larger than 1 second.

```python
flaml.tune.run(evaluation_function=evaluate_config, mode="min",
config=config_search_space,
metric_constraints=[("score", "<=", 0.4)],...)
analysis = tune.run(...,
metric_constraints = [("training_cost", "<=", 1)]),
```
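Unlike a config constraint, a metric constraint can only be checked after the evaluation function has returned its metrics. A simplified standalone sketch (the toy `evaluate_config` and the checking loop are illustrative, not FLAML internals):

```python
def evaluate_config(config):
    # toy evaluation function; real code would train and score a model here,
    # and must return every metric referenced in metric_constraints
    return {"score": 0.3, "training_cost": 0.5}


metric_constraints = [("training_cost", "<=", 1)]

result = evaluate_config({"n_estimators": 4, "max_leaves": 8})
violations = [
    (name, op, bound)
    for name, op, bound in metric_constraints
    # a "<=" constraint is violated when the metric exceeds the bound
    if (result[name] > bound if op == "<=" else result[name] < bound)
]
print(violations)  # [] -> all metric constraints satisfied
```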

#### **`config_constraints` vs `metric_constraints`:**
The key difference between these two types of constraints is that constraints in `config_constraints` do not rely on the computation performed in the evaluation function, i.e., `evaluation_function`: they depend only on the configuration itself, as in the code example above. Because of this independence, constraints in `config_constraints` are checked before evaluation, so configurations that do not satisfy `config_constraints` are never evaluated.
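The pre-evaluation check described above can be sketched as follows (a simplified illustration with assumed names, not FLAML's actual trial loop):

```python
def my_model_size(config):
    return config["n_estimators"] * config["max_leaves"]


def evaluate_config(config):
    # stand-in for an expensive evaluation (training, scoring, ...)
    return {"score": 1.0 / (1 + my_model_size(config))}


config_constraints = [(my_model_size, "<=", 40)]

candidates = [
    {"n_estimators": 2, "max_leaves": 8},   # size 16 -> evaluated
    {"n_estimators": 16, "max_leaves": 8},  # size 128 -> skipped before evaluation
]

evaluated = []
for config in candidates:
    ok = all(
        fn(config) <= bound if op == "<=" else fn(config) >= bound
        for fn, op, bound in config_constraints
    )
    if ok:  # only configs passing all config constraints reach evaluation
        evaluated.append(evaluate_config(config))

print(len(evaluated))  # 1: the second candidate was filtered out without being evaluated
```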


### Parallel tuning

Related arguments:
