Intel® oneAPI Deep Neural Network Developer Guide and Reference
struct dnnl_graph_inplace_pair_t
Overview
In-place pair definition.
#include <dnnl_graph_types.h>

struct dnnl_graph_inplace_pair_t
{
    // fields
    size_t input_id;
    size_t output_id;
};
Detailed Documentation
In-place pair definition.
It can be queried from a compiled partition and indicates that an input and an output of the partition can share the same memory buffer for computation. In-place computation helps reduce the memory footprint and improves cache locality. However, because the library may not have a global view of the user's application, the tensor with input_id may also be consumed elsewhere in the user's computation graph. In that case, the user should treat the in-place pair as a hint and pass a different memory buffer for the output tensor, to avoid overwriting the input memory buffer, which would likely produce incorrect results.
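The minimal C sketch below shows how an application might inspect the in-place pairs reported for a compiled partition and decide, per pair, whether to share a buffer or allocate a separate one. It assumes the C API query function dnnl_graph_compiled_partition_get_inplace_ports and the oneapi/dnnl/dnnl_graph.h header; the helper tensor_used_elsewhere is a hypothetical application-side function, not part of the library.

#include <stdio.h>
#include <oneapi/dnnl/dnnl_graph.h>

/* Hypothetical application helper: returns 1 if the tensor with this id
 * is also consumed elsewhere in the user's computation graph. */
extern int tensor_used_elsewhere(size_t id);

void report_inplace_candidates(const_dnnl_graph_compiled_partition_t cp) {
    size_t num_pairs = 0;
    const dnnl_graph_inplace_pair_t *pairs = NULL;

    /* Query the in-place pairs suggested by the compiled partition
     * (assumed C API call). */
    if (dnnl_graph_compiled_partition_get_inplace_ports(cp, &num_pairs, &pairs)
            != dnnl_success)
        return;

    for (size_t i = 0; i < num_pairs; ++i) {
        /* Treat each pair as a hint: only share the buffer when the input
         * tensor is not read anywhere else in the application's graph. */
        if (tensor_used_elsewhere(pairs[i].input_id)) {
            printf("input %zu is used elsewhere; allocate a separate buffer "
                   "for output %zu\n",
                    pairs[i].input_id, pairs[i].output_id);
        } else {
            printf("output %zu may share the buffer of input %zu\n",
                    pairs[i].output_id, pairs[i].input_id);
        }
    }
}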
Fields
size_t input_id
The ID of the input tensor.
size_t output_id
The ID of the output tensor.