Neural Simulated Annealing

This repository contains the implementation for the paper

Alvaro H.C. Correia*1, Daniel E. Worrall*2, Roberto Bondesan2 "Neural Simulated Annealing". [ArXiv]

*Equal contribution

1 Eindhoven University of Technology, Eindhoven, The Netherlands (Work done during internship at Qualcomm AI Research).

2 Qualcomm AI Research, Qualcomm Technologies Netherlands B.V. (Qualcomm AI Research is an initiative of Qualcomm Technologies, Inc.).

Reference

If you find our work useful, please cite

@inproceedings{correia2023neural,
  title={Neural simulated annealing},
  author={Correia, Alvaro HC and Worrall, Daniel E and Bondesan, Roberto},
  booktitle={International Conference on Artificial Intelligence and Statistics},
  pages={4946--4962},
  year={2023},
  organization={PMLR}
}

How to install

Make sure you have Python ≥3.10 (tested with Python 3.10.11) and the latest version of pip (tested with 22.3.1):

pip install --upgrade --no-deps pip

Next, install PyTorch 1.13.0 with the appropriate CUDA version (tested with CUDA 11.7):

python -m pip install torch==1.13.0+cu117 --extra-index-url https://download.pytorch.org/whl/cu117

Finally, install the remaining dependencies using pip:

pip install -r requirements.txt

To run the code, the project root directory needs to be added to your PYTHONPATH:

export PYTHONPATH="${PYTHONPATH}:$PWD"

Running experiments

Training

The main entry point to reproduce all experiments is scripts/main.py. We use Hydra to configure experiments, so you can retrain our Neural SA models as follows:

python scripts/main.py +experiment=<config_file>

where <config_file> is a YAML file defining the experiment configuration. The experiments in the paper are configured via the config files in the scripts/conf/experiment folder, which are named <problem>_<method>.yaml. For instance, to train a Knapsack model using PPO with the configuration used in the paper, run:

python scripts/main.py +experiment=knapsack_ppo

To experiment with different configurations, you can either create a new YAML file and pass it on the command line as above, or change specific variables with Hydra's override syntax. For example, to keep the same configuration as the Knapsack PPO experiments but train on problems of size 100 for 500 steps:

python scripts/main.py +experiment=knapsack_ppo ++problem_dim=100 ++sa.outer_steps=500

See neuralsa/config.py for an overview of the different configuration variables. Note that parameters in the SAConfig class must be prefixed with "sa" (as in the example above), and parameters in the TrainingConfig class must be prefixed with "training".
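The prefixing convention follows from the configs being nested: a dotted key like sa.outer_steps walks into the sa sub-config. A minimal sketch of that mapping is below; the field names and default values are illustrative assumptions, not the actual definitions in neuralsa/config.py.

```python
from dataclasses import dataclass, field

# Hypothetical stand-ins for the nested config classes; real fields and
# defaults live in neuralsa/config.py.
@dataclass
class SAConfig:
    outer_steps: int = 100  # assumed default

@dataclass
class Config:
    problem_dim: int = 50   # assumed default
    sa: SAConfig = field(default_factory=SAConfig)

def apply_override(cfg, dotted_key, value):
    """Resolve a dotted key like 'sa.outer_steps' against nested configs."""
    *parents, leaf = dotted_key.split(".")
    target = cfg
    for name in parents:
        target = getattr(target, name)  # descend into sub-configs
    setattr(target, leaf, value)

cfg = Config()
apply_override(cfg, "problem_dim", 100)     # mirrors ++problem_dim=100
apply_override(cfg, "sa.outer_steps", 500)  # mirrors ++sa.outer_steps=500
```

Hydra (via OmegaConf) performs this resolution with type checking on top; the sketch only shows why the "sa." and "training." prefixes are needed.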

The trained model is saved in outputs/models/<problem><problem_dim>-<training.method>.
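For concreteness, the path template filled in for the Knapsack PPO override example above would look as follows (the concrete values are assumptions for illustration):

```python
# Hypothetical instantiation of the output path pattern
# outputs/models/<problem><problem_dim>-<training.method>.
problem = "knapsack"  # <problem>
problem_dim = 100     # <problem_dim>
method = "ppo"        # <training.method>

model_path = f"outputs/models/{problem}{problem_dim}-{method}"
print(model_path)  # outputs/models/knapsack100-ppo
```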

Evaluation

Evaluation with the same settings used in the paper can be done with the eval.py script. As before, this can be configured with Hydra, for instance:

python scripts/eval.py +experiment=knapsack_ppo

The eval.py script sweeps over the different numbers of steps (sa.outer_steps) considered in the paper. It also runs vanilla Simulated Annealing and a greedy variant of Neural SA. The results are stored in the outputs/results/<problem> folder and can be aggregated and printed with the print_results.py script:

python scripts/print_results.py +experiment=knapsack_ppo
