OpenBox is an efficient and generalized black-box optimization (BBO) system that supports: 1) BBO with multiple objectives and constraints, 2) BBO with transfer learning, 3) BBO with distributed parallelization, 4) BBO with multi-fidelity acceleration, and 5) BBO with early stopping. OpenBox is designed and developed by the AutoML team from the DAIR Lab at Peking University. Its goal is to make black-box optimization easier to apply in both industry and academia, and to help facilitate data science.
Users can install the released package and use it with Python.
We adopt the "BBO as a service" paradigm and implement OpenBox as a managed general service for black-box optimization. Users can conveniently access this service via REST API, without worrying about issues such as environment setup, software maintenance, programming, and execution optimization. We also provide a web UI through which users can easily track and manage their tasks.
The design of OpenBox follows these principles:
- Ease of use: Minimal user effort, and user-friendly visualization for tracking and managing BBO tasks.
- Consistent performance: Host state-of-the-art optimization algorithms; Choose the proper algorithm automatically.
- Resource-aware management: Give cost-model-based advice to users, e.g., minimal workers or time-budget.
- Scalability: Scale with the number of input variables, objectives, tasks, trials, and parallel evaluations.
- High efficiency: Effective use of parallel resources, and system optimization via transfer learning, multi-fidelity techniques, etc.
- Fault tolerance, extensibility, and data privacy protection.
- Documentation | 中文文档 (Chinese documentation)
- Examples
- PyPI package
- Conda package: to appear soon
- Blog post: to appear soon
- OpenBox-based solutions won First Place in the ACM CIKM 2021 AnalyticCup (Track: Automated Hyperparameter Optimization of Recommendation Systems).
- The OpenBox team won the Top Prize (special prize) in the open-source innovation competition at the 2021 CCF ChinaSoft conference.
- PaSca, which adopts OpenBox to support neural architecture search, won the Best Student Paper Award at WWW'22.
OpenBox's capabilities span three areas: built-in optimization components, optimization algorithms, and optimization services.
Installation Requirements:
- Python >= 3.6 (Python 3.7 is recommended!)
Supported Systems:
- Linux (Ubuntu, ...)
- macOS
- Windows
We strongly suggest that you create a Python environment via Anaconda:
```bash
conda create -n openbox3.7 python=3.7
conda activate openbox3.7
```
Then update your `pip` and `setuptools` as follows:
```bash
pip install pip setuptools --upgrade
```
To install OpenBox from PyPI:
```bash
pip install openbox
```
To install the newest OpenBox from source, run the following commands (Python >= 3.7 only; for Python 3.6, please see our Installation Guide Document):
```bash
git clone https://github.com/PKU-DAIR/open-box.git && cd open-box
cat requirements/main.txt | xargs -n 1 -L 1 pip install
python setup.py install --user --prefix=
```
For more detailed installation instructions, please refer to the Installation Guide Document.
Here is a quick-start example:
```python
import numpy as np
from openbox import Optimizer, sp

# Define Search Space
space = sp.Space()
x1 = sp.Real("x1", -5, 10, default_value=0)
x2 = sp.Real("x2", 0, 15, default_value=0)
space.add_variables([x1, x2])

# Define Objective Function
def branin(config):
    x1, x2 = config['x1'], config['x2']
    y = (x2 - 5.1 / (4 * np.pi ** 2) * x1 ** 2 + 5 / np.pi * x1 - 6) ** 2 \
        + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10
    return y

# Run
if __name__ == '__main__':
    opt = Optimizer(branin, space, max_runs=50, task_id='quick_start')
    history = opt.run()
    print(history)
```
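As a quick sanity check (independent of OpenBox itself), the Branin function has three global minima with a value of roughly 0.3979, one of them at (π, 2.275); a converged run's best observed objective should approach this value:

```python
import numpy as np

def branin(x1, x2):
    # Same formula as in the quick-start objective above
    return ((x2 - 5.1 / (4 * np.pi ** 2) * x1 ** 2 + 5 / np.pi * x1 - 6) ** 2
            + 10 * (1 - 1 / (8 * np.pi)) * np.cos(x1) + 10)

# One of Branin's three global minima: (pi, 2.275)
best = branin(np.pi, 2.275)
print(round(best, 6))  # -> 0.397887
```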
An example with multiple objectives and constraints:
```python
from openbox import Optimizer, sp

# Define Search Space
space = sp.Space()
x1 = sp.Real("x1", 0.1, 10.0)
x2 = sp.Real("x2", 0.0, 5.0)
space.add_variables([x1, x2])

# Define Objective Function
def CONSTR(config):
    x1, x2 = config['x1'], config['x2']
    y1, y2 = x1, (1.0 + x2) / x1
    c1, c2 = 6.0 - 9.0 * x1 - x2, 1.0 - 9.0 * x1 + x2
    return dict(objs=[y1, y2], constraints=[c1, c2])

# Run
if __name__ == "__main__":
    opt = Optimizer(CONSTR, space, num_objs=2, num_constraints=2,
                    max_runs=50, ref_point=[10.0, 10.0], task_id='moc')
    opt.run()
    print(opt.get_history().get_pareto())
```
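Note the sign convention for constraints: a returned constraint value that is non-positive (≤ 0) is treated as satisfied. A standalone check of the CONSTR problem above at a single point, assuming that convention:

```python
def constr(x1, x2):
    # Same objectives and constraints as the CONSTR problem above
    y1, y2 = x1, (1.0 + x2) / x1
    c1 = 6.0 - 9.0 * x1 - x2
    c2 = 1.0 - 9.0 * x1 + x2
    return (y1, y2), (c1, c2)

objs, cons = constr(1.0, 2.0)
feasible = all(c <= 0 for c in cons)  # non-positive values mean satisfied
print(objs, feasible)  # -> (1.0, 3.0) True
```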
More Examples:
- Single-Objective with Constraints
- Multi-Objective
- Multi-Objective with Constraints
- Parallel Evaluation on Local
- Distributed Evaluation
- Tuning LightGBM
- Tuning XGBoost
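All of these examples follow the same objective-function contract: OpenBox passes a config (dict-like, keyed by search-space variable names) and expects either a scalar loss or a dict with an `objs` list (plus an optional `constraints` list). A minimal, self-contained sketch of that contract, where `train_and_score` is a hypothetical stand-in for real model training:

```python
def train_and_score(learning_rate, n_estimators):
    # Hypothetical stand-in for training a model and returning validation error
    return (learning_rate - 0.1) ** 2 + (n_estimators - 100) ** 2 / 1e4

def objective(config):
    # config behaves like a dict keyed by the search-space variable names
    loss = train_and_score(config['learning_rate'], config['n_estimators'])
    return {'objs': [loss]}

result = objective({'learning_rate': 0.1, 'n_estimators': 100})
print(result)  # -> {'objs': [0.0]}
```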
OpenBox has a frequent release cycle. Please let us know if you encounter a bug by filing an issue.
We appreciate all contributions. If you plan to contribute bug-fixes, please do so without further discussion.
If you plan to contribute new features or modules, please first open an issue (or reuse an existing one) and discuss the feature with us.
To learn more about making a contribution to OpenBox, please refer to our How-to contribution page.
We appreciate all contributions and thank all the contributors!
- File an issue on GitHub.
- Email us via Yang Li or [email protected].
To promote openness and advance the AutoML ecosystem, we have also released a few other open-source projects:
- MindWare: an open source system that provides end-to-end ML model training and inference capabilities.
- SGL: a scalable graph learning toolkit for extremely large graph datasets.
- OpenBox: A Generalized Black-box Optimization Service. Yang Li, Yu Shen, Wentao Zhang, Yuanwei Chen, Huaijun Jiang, Mingchao Liu, Jiawei Jiang, Jinyang Gao, Wentao Wu, Zhi Yang, Ce Zhang, Bin Cui. ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2021, CCF-A). https://arxiv.org/abs/2106.00421
- MFES-HB: Efficient Hyperband with Multi-Fidelity Quality Measurements. Yang Li, Yu Shen, Jiawei Jiang, Jinyang Gao, Ce Zhang, Bin Cui. The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021, CCF-A). https://arxiv.org/abs/2012.03011
- PaSca: A Graph Neural Architecture Search System under the Scalable Paradigm. Wentao Zhang, Yu Shen, Zheyu Lin, Yang Li, Xiaosen Li, Wen Ouyang, Yangyu Tao, Zhi Yang, Bin Cui. The Web Conference (WWW 2022, CCF-A, 🏆 Best Student Paper Award). https://arxiv.org/abs/2203.00638
- Hyper-Tune: Towards Efficient Hyper-parameter Tuning at Scale. Yang Li, Yu Shen, Huaijun Jiang, Wentao Zhang, Jixiang Li, Ji Liu, Ce Zhang, Bin Cui. The 48th International Conference on Very Large Data Bases (VLDB 2022, CCF-A). https://arxiv.org/abs/2201.06834
The entire codebase is released under the MIT license.