Commit 66a90d2
feat: SOFIE information in ROOT Manual (root-project#874)
1 parent c07520a commit 66a90d2

1 file changed: manual/tmva/index.md (56 additions, 0 deletions)
@@ -140,3 +140,59 @@ You can, for example, plot input variable distributions.
caption="Example plots for input variable distributions."
%}

## SOFIE

SOFIE (System for Optimized Fast Inference code Emit) generates C++ functions that can easily be invoked for fast inference of trained neural network models. It takes ONNX model files as input and produces C++ header files that can be included and used in a "plug-and-go" style.

This is a new development in TMVA and is currently at an early, experimental stage. Bug reports and suggestions for improvements are warmly welcomed.

#### Prerequisites

- Protobuf 3.0 or higher (for reading ONNX model files)
- BLAS or Eigen (for executing the generated inference code)

#### Installation

Build ROOT with the CMake option `tmva-sofie` enabled:

```
cmake ../root -Dtmva-sofie=ON
make -j8
```

#### Usage

SOFIE follows a parser-generator architecture: it provides ONNX, Keras and PyTorch parsers that translate models in the respective formats into SOFIE's internal representation, from which the C++ inference code is generated.

From the ROOT command line, or in a ROOT macro, you can proceed with an ONNX model as follows:

```
using namespace TMVA::Experimental;
SOFIE::RModelParser_ONNX parser;
SOFIE::RModel model = parser.Parse("./example_model.onnx");
model.Generate();
model.OutputGenerated("./example_output.hxx");
```
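For models saved in Keras format, SOFIE provides an analogous parser. The sketch below is based on the SOFIE Keras parser interface; the `PyKeras::Parse` name and the file paths are assumptions to be checked against your ROOT version:

```cpp
// Sketch: parsing a Keras .h5 model instead of ONNX (requires ROOT built
// with tmva-sofie; the Keras parser also needs a working Python setup).
// The model and output file names are illustrative.
using namespace TMVA::Experimental;
SOFIE::RModel model = SOFIE::PyKeras::Parse("./example_model.h5");
model.Generate();
model.OutputGenerated("./example_output.hxx");
```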
A C++ header file and a `.dat` file containing the model weights will be generated. You can also use

```
model.PrintRequiredInputTensors();
```

to check the required size and type of the input tensors for that particular model, and use

```
model.PrintInitializedTensors();
```

to check the tensors (weights) already included in the model.
To use the generated inference code:

```
#include "example_output.hxx"
float input[INPUT_SIZE];

// The generated header contains a Session class, which must be
// initialized to load the corresponding weights.
TMVA_SOFIE_example_model::Session s("example_model.dat");

// Once instantiated, the session object's infer method can be used:
std::vector<float> out = s.infer(input);
```
With the default settings, the weights are stored in a separate binary file, but if you want them embedded in the generated header file itself, you can use the appropriate generation option:

```
model.Generate(Options::kNoWeightFile);
```

Other such options include `Options::kNoSession` (for not generating the Session class, and instead keeping the infer function independent).

SOFIE also supports generating inference code that takes RDataFrame columns as input; refer to the tutorials for examples.
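As a rough sketch of the RDataFrame usage shown in the ROOT tutorials: a functor wrapping the generated Session can be passed to `Define`. The `SofieFunctor` helper, its template arguments (input arity and Session type) and the column and file names below are assumptions to verify against the tutorials for your ROOT version:

```cpp
// Sketch: evaluating the generated model on each event of an RDataFrame.
// Tree name, file names and input columns are illustrative.
#include "example_output.hxx"
#include <ROOT/RDataFrame.hxx>
using namespace TMVA::Experimental;

ROOT::RDataFrame df("tree", "data.root");
auto h = df.Define("DNN_Value",
                   SofieFunctor<2, TMVA_SOFIE_example_model::Session>(/*nslots=*/1),
                   {"var1", "var2"})
           .Histo1D("DNN_Value");
```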
