This repository has been archived by the owner on Aug 5, 2022. It is now read-only.

Commit

Updated IntelCaffe references to Intel® Distribution of Caffe*
sfraczek committed Oct 27, 2016
1 parent a6a1b2f commit 8b3245d
Showing 2 changed files with 14 additions and 9 deletions.
7 changes: 5 additions & 2 deletions README.md
@@ -20,7 +20,7 @@ Framework development discussions and thorough bug reports are collected on [Iss

Happy brewing!

# Intel Caffe
# Intel® Distribution of Caffe*
This fork is dedicated to improving Caffe performance when running on CPU, in particular on Intel® Xeon processors (HSW, BDW, Xeon Phi).

## Building
@@ -43,7 +43,7 @@ limit execution of OpenMP threads to specified cores only.
Please read [release notes](https://github.com/intel/caffe/blob/master/docs/release_notes.md) for our recommendations and configuration to achieve best performance on Intel CPUs.
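
A minimal sketch, assuming the Intel OpenMP runtime's standard environment variables (the exact values recommended for each CPU are in the release notes linked above), of how the thread count and core affinity might be set before launching training:

```bash
# Illustrative settings only -- consult the release notes for the values recommended for your platform.
export OMP_NUM_THREADS=44                           # e.g. one OpenMP thread per physical core
export KMP_AFFINITY=granularity=fine,compact,1,0    # pin threads to cores, skipping hyper-threads
./build/tools/caffe train --solver=models/bvlc_alexnet/solver.prototxt
```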

## Multinode Training
Intel Caffe multinode allows you to execute deep neural network training on multiple machines.
Intel® Distribution of Caffe* multi-node allows you to execute deep neural network training on multiple machines.

To understand how it works and read some tutorials, go to our Wiki. Start from https://github.com/intelcaffe/caffe/wiki/Multinode-guide.

@@ -59,3 +59,6 @@ Please cite Caffe in your publications if it helps your research:
Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
Year = {2014}
}

***
*Other names and brands may be claimed as the property of others.
16 changes: 9 additions & 7 deletions docs/release_notes.md
@@ -64,10 +64,10 @@ This fork is dedicated to improving Caffe performance when running on CPU, in pa
# Installation

Prior to installing, have a glance through this guide and take note of the details for your platform.
We build and test Caffe on CentOS (7.0, 7.1, 7.2).
We build and test Intel® Distribution of Caffe* on CentOS (7.0, 7.1, 7.2).
The official Makefile and `Makefile.config` build are complemented by an automatic CMake build from the community.

When updating Caffe, it's best to `make clean` before re-compiling.
When updating Intel® Distribution of Caffe*, it's best to `make clean` before re-compiling.
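
For example, an update-and-rebuild sequence (a sketch, assuming a git checkout and the default Makefile build) might look like:

```bash
git pull                   # fetch the updated sources
make clean                 # remove artifacts of the previous build
make all -j"$(nproc)"      # rebuild Caffe with parallel jobs
make test && make runtest  # optional: rebuild and run the unit tests
```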

## Prerequisites

@@ -118,15 +118,15 @@ Install MATLAB, and make sure that its `mex` is in your `$PATH`.

## Building for Intel® Architecture

This version of Caffe is optimized for Intel® Xeon processors and Intel® Xeon Phi™ processors. To achieve the best performance results on Intel® Architecture, we recommend building Caffe with [Intel® MKL](http://software.intel.com/en-us/intel-mkl) and enabling OpenMP support.
This Caffe version is self-contained. This means that the newest version of Intel MKL will be downloaded and installed during compilation of IntelCaffe.
This version of Caffe is optimized for Intel® Xeon processors and Intel® Xeon Phi™ processors. To achieve the best performance results on Intel® Architecture, we recommend building Intel® Distribution of Caffe* with [Intel® MKL](http://software.intel.com/en-us/intel-mkl) and enabling OpenMP support.
This Caffe version is self-contained. This means that the newest version of Intel MKL will be downloaded and installed during compilation of Intel® Distribution of Caffe*.

* Set `BLAS := mkl` in `Makefile.config`
* If you don't need GPU optimizations, set the `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Caffe without CUDA.
* If you don't need GPU optimizations, set the `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Intel® Distribution of Caffe* without CUDA.

[Intel MKL 2017] introduces optimized Deep Neural Network (DNN) performance primitives that accelerate the most popular image recognition topologies. Caffe can take advantage of these primitives and get significantly better performance results compared to previous versions of Intel MKL. There are two ways to take advantage of the new primitives:
[Intel MKL 2017] introduces optimized Deep Neural Network (DNN) performance primitives that accelerate the most popular image recognition topologies. Intel® Distribution of Caffe* can take advantage of these primitives and get significantly better performance results compared to previous versions of Intel MKL. There are two ways to take advantage of the new primitives:

* As the default and recommended configuration, Caffe is built with `USE_MKL2017_AS_DEFAULT_ENGINE := 1` in `Makefile.config`. All layers that do not have another engine set in the prototxt file (model) will use the new Intel MKL primitives by default.
* As the default and recommended configuration, Intel® Distribution of Caffe* is built with `USE_MKL2017_AS_DEFAULT_ENGINE := 1` in `Makefile.config`. All layers that do not have another engine set in the prototxt file (model) will use the new Intel MKL primitives by default.
* Set the layer engine to `MKL2017` in the prototxt file (model). Only that specific layer will be accelerated with the new primitives.

* `USE_MKLDNN_AS_DEFAULT_ENGINE := 1` in `Makefile.config` enables the new integration with the MKLDNN engine. This is an experimental solution and is not recommended for business users. An example `Makefile.config` fragment combining these options is sketched below.
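
As an illustration only, the options named above could be combined into a `Makefile.config` fragment like the following; treat it as a sketch rather than a complete configuration, and choose only the default engine flag you actually want:

```makefile
# Illustrative Makefile.config fragment -- not a complete configuration.
BLAS := mkl                          # use Intel MKL for BLAS routines
CPU_ONLY := 1                        # build without CUDA/GPU support
USE_MKL2017_AS_DEFAULT_ENGINE := 1   # default all layers to the MKL2017 primitives
# USE_MKLDNN_AS_DEFAULT_ENGINE := 1  # experimental MKLDNN engine (see the note above)
```
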
@@ -295,3 +295,5 @@ In folder `/examples/imagenet/` we provide scripts and instructions `readme.md`

Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE). The BVLC reference models are released for unrestricted use.

***
*Other names and brands may be claimed as the property of others.
