This repository was archived by the owner on Jan 3, 2023. It is now read-only.

Commit 8adf498

Cyphers/27sync (#4059)
* [ONNX] Add GatherND op to ONNX importer (#3963)
  * Added GatherND op to the ONNX importer; added and updated tests; removed a stray newline; updated onnx_import.in.cpp.
* Fix serialization of op's type (#4019)
  * Fix serializer op name; cleanup.
* Make sure examples compile (#3981)
  * Resolve doc build error due to opset versioning and align the dynamic tensor doc with its C++ example.
  * Add latest rc; remove deprecated API; update brief link summary.
  * Dist example; update doc for the C++ code examples folder.
  * Fix typo and TOC index; build config for examples; deprecation in dist test; style.
* Update jenkins-trigger.groovy: moving to GitLab.
* Fix layernorm flatten issue (#4032)
  * Fix layernorm flatten issue; update unit test; check output value; fix style; apply tolerance.
* [MKLDNN] Emit dgemm for 2D DP FP Dot op (#3990)
  * [MLIR] Move the MLIR/LLVM repos forward (includes a fix to the affine fusion algorithm); fix issues after merge; fix lit test.
  * Add support for emitting MKL-DNN's double-precision gemm from a 2D double-precision floating-point Dot operation.
  * Removed an unnecessarily duplicated pattern.
  * Add f64 matmul support to the CPU emitter + unit test.
  * Add check for unsupported double-precision bias in cpu_fusion.cpp.
* Remove GOE from Adjoints class (#3973)
  * Change generate_adjoints to take an OutputVector instead of a NodeVector for deltas.
  * Convert the Adjoints class to use Output<Node>; cleanup.
  * Fix post-merge build issues; don't push initial bprops multiple times; eliminate GOE correctly.
  * Back-compatibility, unit test.
* Helper in Constant to allow casting values to a different type (#4000): simplifies the logic needed to extract values from a Constant node when the expected data type is specified only as integral or floating point (see the short usage sketch after this list).
  * Review comments; style apply. Co-Authored-By: Tomasz Socha <[email protected]>
* TensorIterator: reshape support (#4038)
* Add second decompose pass to INTERPRETER backend (#4036)
* Update MKLDNN to v1.0.4 (#3951): build MKLDNN-v1 by default.
  * Add bf16 support check; modify visibility.
* Tested doc build for 0.27 with sitemap for ngraph.ai endpoint (#4014)
  * Sitemap for ngraph.ai doc content; add title to sitemap.
  * Resolve doc-build warnings resulting from sitemap link labeling.
  * Doc tag for 0.27.1.
* Matmul dyshape_fused_forward_fluid_fix (#4023)
* Use constructor_validate_and_infer_types() in CumSum ctor (#4044)
  * Remove unused variable; relax rank check; fix warning.
* Fix tolerance for all_close_f (#4042)
  * Lower tolerance; use all_close.
* Use v1::Gather in ONNX importer (#4037)
* Add upgrade and downgrade pass for GroupConvolutionBackpropData ops (#4035)
  * Add up/downgrade passes for GroupConvolutionBackpropData operators.
  * Improve the decompose operation of v0::GroupConvolutionBackpropData to support N-dimensional data.
  * Add unit tests for the up/downgrade passes; remove unused variable.
* Fixed constant operation for u1 format (#4045)
  * Fixed binary constant ops; added export; fixed buffer size; fixed code style.
* Fix broken serialize and deserialize for Sum and Product (#4050)
* v1::Reshape downgrade pass + onnx_importer adjustments (#4046)
* Update ONNX importer to use nGraph ops from new opset header (#3994)
* Fix NNP-T naming in README (#4048)
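As a quick illustration of the Constant casting helper described in #4000, the sketch below reads an integer constant back as doubles. This is a hedged example, not taken from the commit itself: it assumes the helper is the cast_vector<T>() member of op::Constant, so check the actual header before relying on the exact name or signature.

    // Hedged sketch: assumes the #4000 helper is op::Constant::cast_vector<T>().
    #include <iostream>
    #include <vector>

    #include <ngraph/ngraph.hpp>

    using namespace ngraph;

    int main()
    {
        // Constant stored as int32...
        auto c = op::Constant::create(element::i32, Shape{4}, {1, 2, 3, 4});
        // ...but extracted as double, without branching on the stored element type.
        std::vector<double> values = c->cast_vector<double>();
        for (double v : values)
        {
            std::cout << v << ' ';
        }
        std::cout << std::endl;
        return 0;
    }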
1 parent 6fb1c5e commit 8adf498

375 files changed: 2,313 additions and 1,321 deletions


CMakeLists.txt

Lines changed: 2 additions & 1 deletion

@@ -164,7 +164,7 @@ option(NGRAPH_UNIT_TEST_ENABLE "Control the building of unit tests" TRUE)
 option(NGRAPH_DOC_BUILD_ENABLE "Control the building of documentation" FALSE)
 option(NGRAPH_TOOLS_ENABLE "Control the building of tool" TRUE)
 option(NGRAPH_CPU_ENABLE "Control the building of the CPU backend" TRUE)
-option(NGRAPH_USE_LEGACY_MKLDNN "Use legacy MKLDNN" TRUE)
+option(NGRAPH_USE_LEGACY_MKLDNN "Use legacy MKLDNN" FALSE)
 option(NGRAPH_MLIR_ENABLE "Control the building of MLIR backend" FALSE)
 option(NGRAPH_INTERPRETER_ENABLE "Control the building of the INTERPRETER backend" TRUE)
 option(NGRAPH_NOP_ENABLE "Control the building of the NOP backend" TRUE)
@@ -621,6 +621,7 @@ endif()
 add_subdirectory(src)

 add_subdirectory(test)
+add_subdirectory(doc/examples)

 if (NGRAPH_DOC_BUILD_ENABLE)
     add_subdirectory(doc)

README.md

Lines changed: 2 additions & 2 deletions

@@ -45,8 +45,8 @@ framework and deploying to a variety of hardware targets. We strongly believe in
 providing freedom, performance, and ease-of-use to AI developers.

 The diagram below shows deep learning frameworks and hardware targets
-supported by nGraph. NNP-L and NNP-I in the diagram refer to Intel's next generation
-deep learning accelerators: Intel® Nervana™ Neural Network Processor for Learning and
+supported by nGraph. NNP-T and NNP-I in the diagram refer to Intel's next generation
+deep learning accelerators: Intel® Nervana™ Neural Network Processor for Training and
 Inference respectively. Future plans for supporting addtional deep learning frameworks
 and backends are outlined in the [ecosystem] section.

cmake/external_mkldnn_v1.cmake

Lines changed: 11 additions & 8 deletions

@@ -18,10 +18,12 @@ include(ExternalProject)

 # Includes blas 3.8.0 in mkldnn
 set(NGRAPH_MKLDNN_SHORT_VERSION 1)
-set(NGRAPH_MKLDNN_FULL_VERSION 1.0.0.0)
-set(NGRAPH_MKLDNN_VERSION "v1.0")
-set(NGRAPH_MKLDNN_SUB_VERSION "2019.0.5.20190502")
-set(NGRAPH_MKLDNN_GIT_TAG "553c23f")
+set(NGRAPH_MKLDNN_FULL_VERSION 1.0.4.0)
+set(NGRAPH_MKLDNN_MKLML_ASSET_VERSION "v0.21")
+set(NGRAPH_MKLDNN_VERSION "v1.0.4")
+set(NGRAPH_MKLDNN_MKLML_VERSION "2019.0.5.20190502")
+set(NGRAPH_MKLDNN_MKLML_WIN32_VERSION "2020.0.20190813")
+set(NGRAPH_MKLDNN_GIT_TAG "v1.0.4")

 #------------------------------------------------------------------------------
 # Fetch and install MKL-DNN
@@ -88,17 +90,18 @@ endif()

 # This section sets up MKL as an external project to be used later by MKLDNN

-set(MKLURLROOT "https://github.com/intel/mkl-dnn/releases/download/v0.19-rc/")
-set(MKLVERSION ${NGRAPH_MKLDNN_SUB_VERSION})
+set(MKLURLROOT "https://github.com/intel/mkl-dnn/releases/download/${NGRAPH_MKLDNN_MKLML_ASSET_VERSION}/")
+set(MKLVERSION ${NGRAPH_MKLDNN_MKLML_VERSION})
+set(MKLWIN32VERSION ${NGRAPH_MKLDNN_MKLML_WIN32_VERSION})
 if (LINUX)
     set(MKLPACKAGE "mklml_lnx_${MKLVERSION}.tgz")
     set(MKL_SHA1_HASH 6ab490f0b358124338d04ee9383c3cbc536969d8)
 elseif (APPLE)
     set(MKLPACKAGE "mklml_mac_${MKLVERSION}.tgz")
     set(MKL_SHA1_HASH a1c42af04f990b0e515a1c31946424b2e68fccc9)
 elseif (WIN32)
-    set(MKLPACKAGE "mklml_win_${MKLVERSION}.zip")
-    set(MKL_SHA1_HASH 9d6ff4d5a486689338158093e96c43ee442b65f0)
+    set(MKLPACKAGE "mklml_win_${MKLWIN32VERSION}.zip")
+    set(MKL_SHA1_HASH cc117093e658d50a8e4e3d1cf192c300b6bac0fc)
 endif()
 set(MKL_LIBS ${MKLML_LIB} ${OMP_LIB})
 set(MKLURL ${MKLURLROOT}${MKLPACKAGE})

cmake/mkldnn_v1.patch

Lines changed: 5 additions & 5 deletions

@@ -63,18 +63,18 @@ index 99970659..ef88a0a7 100644
 # Compilation happens with OpenMP to enable `#pragma omp simd`
 # but during linkage OpenMP dependency should be avoided
 diff --git a/src/CMakeLists.txt b/src/CMakeLists.txt
-index 60bb0c94..cc3fc9d6 100644
+index f99ec31ce..b3c1d9bb8 100644
 --- a/src/CMakeLists.txt
 +++ b/src/CMakeLists.txt
 @@ -73,8 +73,10 @@ endif()
 add_library(${LIB_NAME}
 ${MKLDNN_LIBRARY_TYPE} ${HEADERS} ${${LIB_NAME}_SUB_OBJS})

--set_property(TARGET ${LIB_NAME} PROPERTY VERSION "${PROJECT_VERSION}.0")
--set_property(TARGET ${LIB_NAME} PROPERTY SOVERSION "0")
+-set_property(TARGET ${LIB_NAME} PROPERTY VERSION "${MKLDNN_VERSION_MAJOR}.${MKLDNN_VERSION_MINOR}")
+-set_property(TARGET ${LIB_NAME} PROPERTY SOVERSION "${MKLDNN_VERSION_MAJOR}")
 +if(MKLDNN_LIB_VERSIONING_ENABLE)
-+ set_property(TARGET ${LIB_NAME} PROPERTY VERSION "${PROJECT_VERSION}.0")
-+ set_property(TARGET ${LIB_NAME} PROPERTY SOVERSION "0")
++ set_property(TARGET ${LIB_NAME} PROPERTY VERSION "${MKLDNN_VERSION_MAJOR}.${MKLDNN_VERSION_MINOR}")
++ set_property(TARGET ${LIB_NAME} PROPERTY SOVERSION "${MKLDNN_VERSION_MAJOR}")
 +endif()
 set_property(TARGET ${LIB_NAME} PROPERTY PUBLIC_HEADER ${HEADERS})

doc/examples/CMakeLists.txt

Lines changed: 1 addition & 0 deletions

@@ -17,6 +17,7 @@
 if (NGRAPH_CPU_ENABLE)
     add_subdirectory(abc)
     add_subdirectory(abc_operator)
+    add_subdirectory(dynamic_tensor)
     add_subdirectory(mnist_mlp)
     add_subdirectory(update)
 endif()

doc/examples/abc/abc.cpp

Lines changed: 4 additions & 4 deletions

@@ -50,17 +50,17 @@ int main()
     float v_b[2][3] = {{7, 8, 9}, {10, 11, 12}};
     float v_c[2][3] = {{1, 0, -1}, {-1, 1, 2}};

-    t_a->write(&v_a, 0, sizeof(v_a));
-    t_b->write(&v_b, 0, sizeof(v_b));
-    t_c->write(&v_c, 0, sizeof(v_c));
+    t_a->write(&v_a, sizeof(v_a));
+    t_b->write(&v_b, sizeof(v_b));
+    t_c->write(&v_c, sizeof(v_c));

     // Invoke the function
     auto exec = backend->compile(f);
     exec->call({t_result}, {t_a, t_b, t_c});

     // Get the result
     float r[2][3];
-    t_result->read(&r, 0, sizeof(r));
+    t_result->read(&r, sizeof(r));

     std::cout << "[" << std::endl;
     for (size_t i = 0; i < s[0]; ++i)
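The only substantive change here is the tensor I/O signature: the offset argument was dropped, so write and read now take just a pointer and a byte count. Below is a minimal, hedged sketch of the new calls in isolation; it assumes the CPU backend is built and available, as in the example above.

    // Minimal sketch of the two-argument Tensor write/read calls used above.
    #include <iostream>

    #include <ngraph/ngraph.hpp>

    using namespace ngraph;

    int main()
    {
        auto backend = runtime::Backend::create("CPU");
        auto t = backend->create_tensor(element::f32, Shape{2, 3});

        float in[2][3] = {{1, 2, 3}, {4, 5, 6}};
        t->write(&in, sizeof(in)); // pointer and byte count; no offset argument

        float out[2][3];
        t->read(&out, sizeof(out)); // copy the data back to host memory
        std::cout << out[1][2] << std::endl; // prints 6
        return 0;
    }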

doc/examples/abc_operator/abc_operator.cpp

Lines changed: 4 additions & 4 deletions

@@ -49,17 +49,17 @@ int main()
     float v_b[2][3] = {{7, 8, 9}, {10, 11, 12}};
     float v_c[2][3] = {{1, 0, -1}, {-1, 1, 2}};

-    t_a->write(&v_a, 0, sizeof(v_a));
-    t_b->write(&v_b, 0, sizeof(v_b));
-    t_c->write(&v_c, 0, sizeof(v_c));
+    t_a->write(&v_a, sizeof(v_a));
+    t_b->write(&v_b, sizeof(v_b));
+    t_c->write(&v_c, sizeof(v_c));

     // Invoke the function
     auto exec = backend->compile(f);
     exec->call({t_result}, {t_a, t_b, t_c});

     // Get the result
     float r[2][3];
-    t_result->read(&r, 0, sizeof(r));
+    t_result->read(&r, sizeof(r));

     std::cout << "[" << std::endl;
     for (size_t i = 0; i < s[0]; ++i)
doc/examples/dynamic_tensor/CMakeLists.txt (new file; the path is not shown on this page and is inferred from the partial_shape example it builds)

Lines changed: 19 additions & 0 deletions

@@ -0,0 +1,19 @@
+# ******************************************************************************
+# Copyright 2017-2019 Intel Corporation
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+#     http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+# ******************************************************************************
+
+add_executable(partial_shape partial_shape.cpp)
+add_dependencies(partial_shape ngraph cpu_backend)
+target_link_libraries(partial_shape ngraph cpu_backend)

doc/examples/dynamic_tensor/partial_shape.cpp

Lines changed: 37 additions & 25 deletions

@@ -15,54 +15,66 @@
 //*****************************************************************************

 #include <iostream>
+#include <numeric>
+#include <vector>

 #include <ngraph/ngraph.hpp>

+using namespace std;
 using namespace ngraph;

+void execute(shared_ptr<runtime::Backend> be,
+             shared_ptr<runtime::Executable> ex,
+             shared_ptr<runtime::Tensor> t_out,
+             uint32_t n);
+
 int main()
 {
     // Create and compile a graph where the provided info of shape of x is
     // (2,?)
     auto x_shape_info = PartialShape{2, Dimension::dynamic()};
     auto x = make_shared<op::Parameter>(element::i32, x_shape_info);
     auto a = x + x;
-    auto f = make_shared<Function>({a}, {x});
-    auto be = runtime::backend::create();
+    auto f = make_shared<Function>(OutputVector{a}, ParameterVector{x});
+    auto be = runtime::Backend::create("CPU", true);
     auto ex = be->compile(f);

     // Create a dynamic tensor of shape (2,?)
     auto t_out = be->create_dynamic_tensor(element::i32, x_shape_info);
+    execute(be, ex, t_out, 3);
+    execute(be, ex, t_out, 11);
+    execute(be, ex, t_out, 20);

-    // Call the graph to write a value with shape (2,3) to t_out
-    auto t_in = be->create_tensor(element::i32, Shape{2, 3});
-    t_in->write();
-    ex->call({t_out}, {t_in})
-
-    // Call the graph again, to write a value with a different shape to
-    // t_out.
-    t_in = be->create_tensor(element::i32, Shape{2, 20});
-    t_in->write();
-    ex->call({t_out}, {t_in})
-
-    // Get the result. At this point t_out->get_shape() would return
-    // Shape{2,20},
-    // but t_out->get_partial_shape() would return "(2,?)"
+    return 0;
+}

-    float r[2][3];
-    t_result->read(&r, 0, sizeof(r));
+void execute(shared_ptr<runtime::Backend> be,
+             shared_ptr<runtime::Executable> ex,
+             shared_ptr<runtime::Tensor> t_out,
+             uint32_t n)
+{
+    // Initialize input of shape (2, n)
+    auto t_in = be->create_tensor(element::i32, Shape{2, n});
+    {
+        vector<int32_t> t_val(2 * n);
+        iota(t_val.begin(), t_val.end(), 0);
+        t_in->write(&t_val[0], t_val.size() * sizeof(t_val[0]));
+    }
+    // Get the result
+    ex->call({t_out}, {t_in});

-    std::cout << "[" << std::endl;
+    auto s = t_out->get_shape();
+    vector<int32_t> r(s[0] * s[1]);
+    t_out->read(&r[0], r.size() * sizeof(r[0]));
+    cout << "[" << endl;
     for (size_t i = 0; i < s[0]; ++i)
     {
-        std::cout << " [";
+        cout << " [";
         for (size_t j = 0; j < s[1]; ++j)
         {
-            std::cout << r[i][j] << ' ';
+            cout << r[i * s[1] + j] << ' ';
         }
-        std::cout << ']' << std::endl;
+        cout << ']' << endl;
     }
-    std::cout << ']' << std::endl;
-
-    return 0;
+    cout << ']' << endl;
 }
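One detail from the comment removed above is worth keeping in mind: after a call, t_out reports a concrete shape (for example Shape{2, 20} after the last call), while the partial shape it was declared with stays (2,?). A hedged fragment illustrating the check; it reuses be, ex, and t_out from the example above rather than standing alone, and the stream output of Shape/PartialShape is assumed to match their usual printed forms.

    // Fragment (not standalone): query t_out after ex->call(...) in execute().
    Shape concrete = t_out->get_shape();                // e.g. Shape{2, 20}
    PartialShape declared = t_out->get_partial_shape(); // still (2,?)
    cout << "concrete: " << concrete << ", declared: " << declared << endl;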

doc/examples/mnist_mlp/CMakeLists.txt

Lines changed: 5 additions & 6 deletions

@@ -17,9 +17,8 @@
 add_executable(mnist_mlp mnist_loader.cpp mnist_mlp.cpp)
 add_dependencies(mnist_mlp ngraph cpu_backend)
 target_link_libraries(mnist_mlp ngraph cpu_backend)
-if (NGRAPH_DISTRIBUTED_ENABLE)
-    add_executable(dist_mnist_mlp mnist_loader.cpp dist_mnist_mlp.cpp)
-    target_compile_definitions(dist_mnist_mlp PRIVATE NGRAPH_DISTRIBUTED_ENABLE)
-    target_include_directories(dist_mnist_mlp SYSTEM PRIVATE libmlsl)
-    target_link_libraries(dist_mnist_mlp ngraph cpu_backend libmlsl)
-endif()
+
+add_executable(dist_mnist_mlp mnist_loader.cpp dist_mnist_mlp.cpp)
+target_compile_definitions(dist_mnist_mlp PRIVATE NGRAPH_DISTRIBUTED_ENABLE)
+add_dependencies(dist_mnist_mlp ngraph cpu_backend)
+target_link_libraries(dist_mnist_mlp ngraph cpu_backend)
