
# TensorFlow-2 Pretrained RegNets

A TensorFlow-2 implementation of RegNet (*Designing Network Design Spaces*) with pretrained weights.

## Dependencies

- Python ≥ 3.7
- TensorFlow ≥ 2.5

## Models

### RegNetX

| Model name | Model name in paper | Pretrained weights | Top-1 (%) |
| --- | --- | --- | --- |
| RegNetX_400MF | REGNETX-400MF | RegNetX_400MF.h5 | 72.87 |
| RegNetX_800MF | REGNETX-800MF | RegNetX_800MF.h5 | 75.21 |
| RegNetX_1_6GF | REGNETX-1.6GF | RegNetX_1_6GF.h5 | 77.11 |
| RegNetX_3_2GF | REGNETX-3.2GF | RegNetX_3_2GF.h5 | 78.33 |
| RegNetX_8GF | REGNETX-8.0GF | RegNetX_8GF.h5 | 79.36 |
| RegNetX_16GF | REGNETX-16GF | RegNetX_16GF.h5 | 79.98 |
| RegNetX_32GF | REGNETX-32GF | RegNetX_32GF.h5 | 80.58 |

### RegNetY

| Model name | Model name in paper | Pretrained weights | Top-1 (%) |
| --- | --- | --- | --- |
| RegNetY_400MF | REGNETY-400MF | RegNetY_400MF.h5 | 74.02 |
| RegNetY_800MF | REGNETY-800MF | RegNetY_800MF.h5 | 76.44 |
| RegNetY_1_6GF | REGNETY-1.6GF | RegNetY_1_6GF.h5 | 77.98 |
| RegNetY_3_2GF | REGNETY-3.2GF | RegNetY_3_2GF.h5 | 78.94 |
| RegNetY_8GF | REGNETY-8.0GF | RegNetY_8GF.h5 | 80.05 |
| RegNetY_16GF | REGNETY-16GF | RegNetY_16GF.h5 | 80.43 |
| RegNetY_32GF | REGNETY-32GF | RegNetY_32GF.h5 | 80.84 |

- Pretrained weights are converted from the TorchVision model zoo; only the flop regimes available in that zoo are provided. The conversion script is `data/scripts/convert.py`.
- Top-1: single-crop top-1 accuracy at 224x224 using the converted weights. Reproduce with:

```bash
# You need to register on http://www.image-net.org/download-images to get the link to
# download ILSVRC2012_img_val.tar.
mkdir ILSVRC2012_img_val/
tar xvf ILSVRC2012_img_val.tar -C ILSVRC2012_img_val/

python data/scripts/eval.py --h5 path/to/pretrained.h5 --data_dir ILSVRC2012_img_val/ --batch_size 32
```

## Usage

```python
import regnet

# Specify include_top=False if you want to remove the classification layer at the top
model = regnet.RegNetX_1_6GF(input_shape=(224, 224, 3),
                             weights="path/to/RegNetX_1_6GF.h5",
                             include_top=True)

model.compile(...)
model.fit(...)
```
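
With `include_top=False` the classification layer is removed and the model can be used as a backbone for fine-tuning. A minimal sketch, assuming the constructor returns a standard `tf.keras.Model` whose output is a spatial feature map; the pooling and dense head below are illustrative and not part of this repo:

```python
import tensorflow as tf
import regnet

# Backbone without the classification layer; the exact output shape depends on the model.
backbone = regnet.RegNetX_1_6GF(input_shape=(224, 224, 3),
                                weights="path/to/RegNetX_1_6GF.h5",
                                include_top=False)

# Hypothetical fine-tuning head (skip the pooling layer if the backbone already pools).
x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
outputs = tf.keras.layers.Dense(10, activation="softmax")(x)
model = tf.keras.Model(backbone.inputs, outputs)

model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```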

Note: Input images should be loaded into the range [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225].
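
For example, the preprocessing could look like this (a minimal sketch using the constants from the note above; the `preprocess` helper is not part of this repo):

```python
import numpy as np
import tensorflow as tf

# Per-channel ImageNet normalization constants from the note above.
MEAN = np.array([0.485, 0.456, 0.406], dtype=np.float32)
STD = np.array([0.229, 0.224, 0.225], dtype=np.float32)

def preprocess(image):
    """Scale a uint8 HxWx3 image to [0, 1] and normalize per channel."""
    image = tf.cast(image, tf.float32) / 255.0
    return (image - MEAN) / STD
```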

## License

MIT