From 8b3245d53ef4e1c8c79094184c4ad6039fdfc436 Mon Sep 17 00:00:00 2001
From: sfraczek
Date: Thu, 27 Oct 2016 10:22:01 +0200
Subject: [PATCH] Updated IntelCaffe references to Intel® Distribution of Caffe*
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 README.md             |  7 +++++--
 docs/release_notes.md | 16 +++++++++-------
 2 files changed, 14 insertions(+), 9 deletions(-)

diff --git a/README.md b/README.md
index 6af1631ec..9f2366b36 100644
--- a/README.md
+++ b/README.md
@@ -20,7 +20,7 @@ Framework development discussions and thorough bug reports are collected on [Iss
 
 Happy brewing!
 
-# Intel Caffe
+# Intel® Distribution of Caffe*
 This fork is dedicated to improving Caffe performance when running on CPU, in particular Intel® Xeon processors (HSW, BDW, Xeon Phi)
 
 ## Building
@@ -43,7 +43,7 @@ limit execution of OpenMP threads to specified cores only.
 Please read [release notes](https://github.com/intel/caffe/blob/master/docs/release_notes.md) for our recommendations and configuration to achieve best performance on Intel CPUs.
 
 ## Multinode Training
-Intel Caffe multinode allows you to execute deep neural network training on multiple machines.
+Intel® Distribution of Caffe* multi-node allows you to execute deep neural network training on multiple machines.
 To understand how it works and read some tutorials, go to our Wiki. Start from https://github.com/intelcaffe/caffe/wiki/Multinode-guide.
 
 
@@ -59,3 +59,6 @@ Please cite Caffe in your publications if it helps your research:
       Title = {Caffe: Convolutional Architecture for Fast Feature Embedding},
       Year = {2014}
     }
+
+***
+ *Other names and brands may be claimed as the property of others
diff --git a/docs/release_notes.md b/docs/release_notes.md
index e57161512..16b2940a0 100644
--- a/docs/release_notes.md
+++ b/docs/release_notes.md
@@ -64,10 +64,10 @@ This fork is dedicated to improving Caffe performance when running on CPU, in pa
 
 # Installation
 Prior to installing, have a glance through this guide and take note of the details for your platform.
-We build and test Caffe on CentOS (7.0, 7.1, 7.2).
+We build and test Intel® Distribution of Caffe* on CentOS (7.0, 7.1, 7.2).
 The official Makefile and `Makefile.config` build are complemented by an automatic CMake build from the community.
 
-When updating Caffe, it's best to `make clean` before re-compiling.
+When updating Intel® Distribution of Caffe*, it's best to `make clean` before re-compiling.
 
 ## Prerequisites
 
@@ -118,15 +118,15 @@ Install MATLAB, and make sure that its `mex` is in your `$PATH`.
 
 ##Building for Intel® Architecture
 
-This version of Caffe is optimized for Intel® Xeon processors and Intel® Xeon Phi™ processors. To achieve the best performance results on Intel Architecture we recommend building Caffe with [Intel® MKL](http://software.intel.com/en-us/intel-mkl) and enabling OpenMP support.
-This Caffe version is seflcontained. This means that newest version of Intel MKL will be downloaded and installed during compilation of IntelCaffe.
+This version of Caffe is optimized for Intel® Xeon processors and Intel® Xeon Phi™ processors. To achieve the best performance results on Intel Architecture, we recommend building Intel® Distribution of Caffe* with [Intel® MKL](http://software.intel.com/en-us/intel-mkl) and enabling OpenMP support.
+This Caffe version is self-contained. This means that the newest version of Intel MKL will be downloaded and installed during compilation of Intel® Distribution of Caffe*.
 
 * Set `BLAS := mkl` in `Makefile.config`
-* If you don't need GPU optimizations `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Caffe without CUDA.
+* If you don't need GPU optimizations, set the `CPU_ONLY := 1` flag in `Makefile.config` to configure and build Intel® Distribution of Caffe* without CUDA.
 
-[Intel MKL 2017] introduces optimized Deep Neural Network (DNN) performance primitives that allow to accelerate the most popular image recognition topologies. Caffe can take advantage of these primitives and get significantly better performance results compared to the previous versions of Intel MKL. There are two ways to take advantage of the new primitives:
+[Intel MKL 2017] introduces optimized Deep Neural Network (DNN) performance primitives that accelerate the most popular image recognition topologies. Intel® Distribution of Caffe* can take advantage of these primitives and achieve significantly better performance than with previous versions of Intel MKL. There are two ways to take advantage of the new primitives:
 
-* As default and recommended configuration Caffe is build with `USE_MKL2017_AS_DEFAULT_ENGINE := 1` in `Makefile.config`. All layers that will not have oher engine set in prototxt file (model) will use new Intel MKL primitives by default.
+* As the default and recommended configuration, Intel® Distribution of Caffe* is built with `USE_MKL2017_AS_DEFAULT_ENGINE := 1` in `Makefile.config`. All layers that do not have another engine set in the prototxt file (model) will use the new Intel MKL primitives by default.
 * Set layer engine to `MKL2017` in prototxt file (model). Only this specific layer will be accelerated with new primitives.
 * `USE_MKLDNN_AS_DEFAULT_ENGINE := 1` in `Makefile.config` is new integration with new MKLDNN engine. This is experimental solution - not recommended for buissnes users.
 
@@ -295,3 +295,5 @@ In folder `/examples/imagenet/` we provide scripts and instructions `readme.md`
 Caffe is released under the [BSD 2-Clause license](https://github.com/BVLC/caffe/blob/master/LICENSE).
 The BVLC reference models are released for unrestricted use.
 
+***
+ *Other names and brands may be claimed as the property of others
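
For reference, a minimal sketch of how the `Makefile.config` options touched by these release notes fit together. It only restates the flags named above (`BLAS`, `CPU_ONLY`, `USE_MKL2017_AS_DEFAULT_ENGINE`, `USE_MKLDNN_AS_DEFAULT_ENGINE`); surrounding options and their defaults are assumed to come from the stock `Makefile.config.example`.

    # Sketch of a CPU-only, MKL-based Makefile.config (assumed excerpt, not the full file)
    BLAS := mkl                          # use Intel MKL as the BLAS backend
    CPU_ONLY := 1                        # build without CUDA when GPU support is not needed
    USE_MKL2017_AS_DEFAULT_ENGINE := 1   # layers with no engine set in the prototxt use the MKL 2017 DNN primitives
    # USE_MKLDNN_AS_DEFAULT_ENGINE := 1  # experimental MKL-DNN engine; leave disabled unless testing it

After changing these flags it is best to rebuild from scratch, e.g. `make clean` followed by `make all`, as recommended above for updates.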