This repository has been archived by the owner on Aug 5, 2022. It is now read-only.
Caffe_v1.1.1
- Features
  - INT8 inference
    - Inference speed improved with the upgraded MKL-DNN library.
    - Accuracy improved with channel-wise scaling factors; support added in the calibration tool as well.
  - Multi-node training
    - Better training scalability on 10GbE with prioritized communication in gradient all-reduce.
    - Added Python bindings for multi-node training in pycaffe.
    - The default build now includes the multi-node training feature.
  - Layer performance optimization: dilated convolution and softmax
  - Auxiliary scripts
    - Added scripts to parse the training log and plot loss trends (tools/extra/caffe_log_parser.py and tools/extra/plot_loss_trends.py).
    - Added a script to identify the batch size for optimal throughput given a model (scripts/obtain_optimal_batch_size.py).
    - Improved the benchmark scripts to support Inception-V3 and VGG-16.
  - New models
    - Support added for inference of the R-FCN object detection model.
    - Added the Inception-V3 multi-node model that converges to SOTA accuracy.
  - Build improvements
    - Merged PR#167, "Extended cmake install package script for MKL".
    - Fixed all ICC/GCC compiler warnings and enabled warnings-as-errors.
    - Added build options to turn off each inference model optimization individually.
    - The build no longer attempts to download MKL-DNN when there is no network connection.
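The accuracy gain from channel-wise scaling can be illustrated with a short, self-contained sketch. This is not the calibration tool's actual code, only a simplified model of symmetric INT8 quantization: a single tensor-wise scale is dominated by the largest channel, while a per-channel scale preserves precision in small-valued channels.

```python
# Simplified illustration of tensor-wise vs. channel-wise INT8 scaling.
# Not the calibration tool's implementation -- a minimal sketch only.

def int8_scale(values):
    """Symmetric INT8 scale: map the max |value| to 127."""
    m = max(abs(v) for v in values)
    return m / 127.0 if m else 1.0

def quantize(values, scale):
    return [max(-128, min(127, round(v / scale))) for v in values]

def dequantize(qvalues, scale):
    return [q * scale for q in qvalues]

def max_error(orig, deq):
    return max(abs(a - b) for a, b in zip(orig, deq))

# Two output channels with very different dynamic ranges.
weights = [
    [0.01, -0.02, 0.015],  # channel 0: small magnitudes
    [5.0, -4.5, 3.2],      # channel 1: large magnitudes
]

# Tensor-wise: one scale shared by all channels.
flat = [v for ch in weights for v in ch]
s = int8_scale(flat)
tensor_wise = [dequantize(quantize(ch, s), s) for ch in weights]

# Channel-wise: one scale per output channel.
channel_wise = [dequantize(quantize(ch, int8_scale(ch)), int8_scale(ch))
                for ch in weights]

print("tensor-wise error, channel 0: ", max_error(weights[0], tensor_wise[0]))
print("channel-wise error, channel 0:", max_error(weights[0], channel_wise[0]))
```

With the shared scale, channel 0's small weights collapse to very few quantization levels; with per-channel scales, each channel uses the full INT8 range, which is the intuition behind the accuracy improvement noted above.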
- Misc
  - MLSL upgraded to 2018-Preview
  - MKL-DNN upgraded to commit 464c268e544bae26f9b85a2acb9122c766a4c396
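A minimal sketch of the kind of log parsing the auxiliary scripts perform. This is a simplified stand-in, not the contents of tools/extra/caffe_log_parser.py; it assumes the usual Caffe solver output format ("Iteration N ..., loss = X"), and the real script may extract more fields (learning rate, timestamps).

```python
# Extract (iteration, loss) pairs from Caffe solver output.
# Simplified stand-in for tools/extra/caffe_log_parser.py.
import re

LOSS_RE = re.compile(r"Iteration (\d+).*?, loss = ([0-9.eE+-]+)")

def parse_loss(log_text):
    """Return a list of (iteration, loss) tuples found in a Caffe log."""
    return [(int(it), float(loss))
            for it, loss in LOSS_RE.findall(log_text)]

sample_log = """\
I0101 12:00:00.000000  1234 solver.cpp:218] Iteration 0 (0 iter/s), loss = 6.90776
I0101 12:00:10.000000  1234 solver.cpp:218] Iteration 100 (10 iter/s), loss = 2.30259
I0101 12:00:20.000000  1234 solver.cpp:218] Iteration 200 (10 iter/s), loss = 1.60944
"""

print(parse_loss(sample_log))
# [(0, 6.90776), (100, 2.30259), (200, 1.60944)]
```

The resulting (iteration, loss) series is what a companion script such as tools/extra/plot_loss_trends.py would plot.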