cure-lab/sat-benchmark

SAT Benchmark

Introduction

SAT Benchmark (satb) is a PyTorch implementation of a collection of published end-to-end SAT solvers, able to reproduce results comparable to those reported in the original papers. We hope to build a comprehensive and practical codebase in the spirit of timm and to boost AI4SAT research.

All contributions, no matter how small, are always welcome!

Models

  1. NeuroSAT/NeuroCore
  2. DG-DAGRNN
  3. SATformer
  4. DeepSAT
  5. QuerySAT
  6. FourierSAT
  7. TBD

Datasets

We generate the common datasets used for end-to-end SAT training and inference. Supported formats include CNF, circuits produced by cube-and-conquer, and circuits optimized with abc. For more details, see DATA.md.

  1. SR3-10
  2. SR10-40
  3. SR3-100
  4. Graph
    • coloring
    • dominating set
    • clique detection
    • vertex cover
  5. Logic equivalence checking
  6. SAT-Sweeping
  7. SHA-1 preimage attack
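
The SR-style datasets (SR3-10, SR10-40, SR3-100) follow NeuroSAT's SR(n) construction: random clauses are added until the formula becomes unsatisfiable, and negating one literal of the final clause yields a satisfiable twin that differs in a single literal. A minimal brute-force sketch of that idea (the clause-width distribution here only approximates the one in the paper, and real generation uses a SAT solver rather than enumeration):

```python
import random
from itertools import product

def is_sat(clauses, n_vars):
    """Brute-force satisfiability check; only viable for small n_vars."""
    for assign in product([False, True], repeat=n_vars):
        if all(any(assign[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

def random_clause(n_vars, rng):
    """Sample a clause; the width distribution roughly mimics NeuroSAT's
    1 + Bernoulli + Geometric scheme (exact parameters are an assumption)."""
    k = 1 + (rng.random() < 0.3)
    while rng.random() >= 0.4:  # geometric tail
        k += 1
    k = min(k, n_vars)
    variables = rng.sample(range(1, n_vars + 1), k)
    return [v if rng.random() < 0.5 else -v for v in variables]

def sr_pair(n_vars, rng):
    """Return (sat_clauses, unsat_clauses) differing in a single literal."""
    clauses = []
    while True:
        c = random_clause(n_vars, rng)
        if not is_sat(clauses + [c], n_vars):
            # Any assignment satisfying `clauses` must falsify every literal
            # of `c`, so flipping one literal makes the formula satisfiable.
            flipped = list(c)
            i = rng.randrange(len(flipped))
            flipped[i] = -flipped[i]
            return clauses + [flipped], clauses + [c]
        clauses.append(c)
```

The sat/unsat pair differs only in the sign of one literal, which is what forces a learned solver to do more than pattern-match on surface statistics.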

Features

The following lists the features we plan to realize in this codebase; some are borrowed directly from timm.

  • All models have a common default configuration interface and API for:
    • doing a forward pass on features only (or recursively) via forward_features. The feature format depends on the problem representation, e.g., CNF or circuit.
    • accessing the solver module via decode_assignment to decode assignments.
    • (minor) gradient checking.
  • High-performance reference training, validation, and inference scripts that work in several process/GPU modes:
    • NVIDIA DDP w/ a single GPU per process, multiple processes with APEX present (AMP mixed precision optional)
    • PyTorch DistributedDataParallel w/ multi-GPU, single process (AMP disabled, as it currently crashes when enabled)
    • PyTorch w/ single GPU, single process (AMP optional)
  • Optimizers/Scheduler are directly borrowed from timm.
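
The common model API above can be sketched as a small base class. Everything below is illustrative only: the method names forward_features and decode_assignment come from the feature list, while the class names and the toy solver are assumptions, not the actual satb implementation.

```python
from abc import ABC, abstractmethod

class SATModelBase(ABC):
    """Illustrative base class for the common model API (hypothetical)."""

    @abstractmethod
    def forward_features(self, problem):
        """Run the feature extractor only; the feature format depends on the
        problem representation (CNF, circuit, ...)."""

    @abstractmethod
    def decode_assignment(self, features):
        """Decode a candidate variable assignment from the features."""

    def forward(self, problem):
        # Full pass = feature extraction followed by assignment decoding.
        return self.decode_assignment(self.forward_features(problem))

class AllTrueSolver(SATModelBase):
    """Toy 'solver' that sets every variable to True, just to show the call
    pattern; `problem` is a CNF as a list of integer-literal clauses."""

    def forward_features(self, problem):
        # "Features" here are just the variable count.
        return max(abs(lit) for clause in problem for lit in clause)

    def decode_assignment(self, n_vars):
        return {v: True for v in range(1, n_vars + 1)}
```

A real model would return tensors from forward_features; the point is only that every model exposes the same two entry points, so training and evaluation scripts can stay model-agnostic.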

Training examples

DeepSAT

sh distributed_train.sh 4 data/random_sr3_10_100/deepsat_dataset.npz --dataset gate --model deepsat -b 128 --sched step --epochs 100 --decay-epochs 2.4 --decay-rate .97 --opt rmsproptf --opt-eps .001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --amp --lr .016

DGDAGRNN

sh distributed_train.sh 4 data/random_sr3_10_100 --dataset ckt --model dgdagrnn -b 8 --sched step --epochs 100 --decay-epochs 2.4 --decay-rate .97 --opt rmsproptf --opt-eps .001 -j 8 --warmup-lr 1e-6 --weight-decay 1e-5 --amp --lr .016 --smooth-step-loss

TODO

  1. Consider the relationship between timm's data transforms (timm.data.transforms) and circuit transformations in EDA. Circuit transformation can be viewed as a type of data augmentation, so we may build a comprehensive circuit-transformation module (compatible with the TorchVision transform and DataLoader structure). The current implementation processes SAT instances directly, without any transforms.
  2. Enable gradient checking in all models.
  3. Port the circuit parser utility to the timm.data.parsers style.
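
TODO 1 could take the shape of torchvision-style composable transforms. A sketch under the assumption that a circuit is a list of gate records; both class names below are hypothetical, and the example augmentation is semantics-preserving (permuting the fan-in of symmetric gates does not change the computed function):

```python
import random

class ComposeCircuitTransforms:
    """torchvision-style Compose for circuit-level augmentations (sketch)."""

    def __init__(self, transforms):
        self.transforms = transforms

    def __call__(self, circuit):
        for t in self.transforms:
            circuit = t(circuit)
        return circuit

class ShuffleGateOrder:
    """Example augmentation: permute the fan-in order of symmetric gates
    (AND/OR), leaving the circuit's function unchanged."""

    def __init__(self, rng=None):
        self.rng = rng or random.Random()

    def __call__(self, circuit):
        for gate in circuit:
            if gate["type"] in ("AND", "OR"):
                self.rng.shuffle(gate["inputs"])
        return circuit
```

Keeping each augmentation a plain callable means the pipeline composes with a standard DataLoader the same way image transforms do in TorchVision.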
