NeuZephyr
Simple DL Framework
Performs global max pooling operation across spatial dimensions of input tensor. More...
Public Member Functions

GlobalMaxPoolNode(Node *input)
    Constructs a GlobalMaxPoolNode object.

void forward() override
    Performs the forward pass of the global max-pooling operation.

void backward() override
    Performs the backward pass of the global max-pooling operation.

Public member functions inherited from nz::nodes::Node

virtual void print(std::ostream &os) const
    Prints the type, data, and gradient of the node.

void dataInject(Tensor::value_type *data, bool grad=false) const
    Injects data into a relevant tensor object, optionally setting its gradient requirement.

template<typename Iterator>
void dataInject(Iterator begin, Iterator end, const bool grad=false) const
    Injects data from an iterator range into the output tensor of the InputNode, optionally setting its gradient requirement.

void dataInject(const std::initializer_list<Tensor::value_type> &data, bool grad=false) const
    Injects data from a std::initializer_list into the output tensor of the Node, optionally setting its gradient requirement.
Performs global max pooling operation across spatial dimensions of input tensor.
This node reduces each channel's spatial dimensions (H, W) to a single maximum value, producing output of shape (N, C, 1, 1). Used to extract the most salient spatial features while maintaining channel-wise information.
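Assuming a flat NCHW layout, the shape reduction can be sketched with a small standalone helper (hypothetical, not part of the NeuZephyr API):

```cpp
#include <cstddef>
#include <vector>

// Hypothetical helper (illustration only): given an NCHW input shape,
// returns the output shape produced by global max pooling.
// N and C are preserved; H and W each collapse to 1.
std::vector<std::size_t> globalMaxPoolShape(const std::vector<std::size_t>& in) {
    return {in[0], in[1], 1, 1};
}
```

For example, a (2, 3, 8, 8) input maps to a (2, 3, 1, 1) output: one maximum per channel per batch element.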
nz::nodes::calc::GlobalMaxPoolNode::GlobalMaxPoolNode(Node *input)
Constructs a GlobalMaxPoolNode object.
Parameters
    input - A pointer to the input node. This pointer is assumed to be managed externally, and the constructor uses it in a read-only manner (host-to-host).
This constructor initializes a GlobalMaxPoolNode object. It first adds the provided input node pointer to the inputs vector. Then, it creates a new shared pointer for the output member. The shape of the output tensor is set to have the same batch size and number of channels as the input tensor's output, but with a height and width of 1. The requiresGrad flag of the output tensor is set to the same value as that of the input tensor's output. Finally, it sets the type member to the string "GlobalMaxPool".
Memory management strategy: The constructor does not allocate memory for the input node; it only stores a pointer to it. The output tensor is created using std::make_shared, which manages its memory automatically.
Exception handling mechanism: There is no explicit exception handling in this constructor. If the std::make_shared call fails to allocate memory for the output tensor, it may throw a std::bad_alloc exception.
Exceptions
    std::bad_alloc - If memory allocation for the output tensor fails.
Complexity: creating the output tensor has a time complexity of O(1) for the pointer management and O(m) for the tensor data allocation, where m is the number of elements in the output tensor.
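The shape and bookkeeping logic described above can be sketched with minimal stand-in Node and Tensor types (hypothetical; the real NeuZephyr classes differ, e.g. Tensor exposes shape() as a method):

```cpp
#include <memory>
#include <string>
#include <vector>

// Minimal stand-ins for illustration only; not the real NeuZephyr types.
struct Tensor {
    std::vector<std::size_t> shape;
    bool requiresGrad = false;
};
struct Node {
    std::shared_ptr<Tensor> output;
    std::vector<Node*> inputs;
    std::string type;
};

// Sketch of the constructor logic: store the input pointer, then build
// an output tensor of shape (N, C, 1, 1) that inherits the input's
// requiresGrad flag, and record the node type.
struct GlobalMaxPoolNode : Node {
    explicit GlobalMaxPoolNode(Node* input) {
        inputs.push_back(input);                    // non-owning pointer
        const auto& s = input->output->shape;
        output = std::make_shared<Tensor>();        // owned via shared_ptr
        output->shape = {s[0], s[1], 1, 1};         // keep N, C; collapse H, W
        output->requiresGrad = input->output->requiresGrad;
        type = "GlobalMaxPool";
    }
};
```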
void nz::nodes::calc::GlobalMaxPoolNode::backward() [override], [virtual]

Performs the backward pass of the global max-pooling operation.

Returns
    None
This function performs the backward pass of the global max-pooling operation. First, it checks whether the output tensor of the input node requires gradient computation. If so, it retrieves the host data and gradients of the output tensor. Then, it iterates over each batch and channel of the input tensor's output. For each combination of batch index i and channel index j, it calculates an index idx and uses the find method to locate the position of the maximum value in the input tensor corresponding to the output value at idx. Finally, it sets the gradient at that position in the input tensor using the setData method.
Memory management strategy: This function does not allocate or deallocate any memory directly; it operates on the existing data and gradient tensors of the input and output.
Exception handling mechanism: There is no explicit exception handling in this function. If the hostData, hostGrad, find, or setData methods encounter an error, they may throw an exception depending on their implementation.
Exceptions
    [Exception type from hostData, hostGrad, find, or setData] - If the hostData, hostGrad, find, or setData methods encounter an error during execution.
Complexity: the function iterates over all b * c (batch, channel) pairs, where b is the batch size (inputs[0]->output->shape()[0]) and c is the number of channels (inputs[0]->output->shape()[1]).

Implements nz::nodes::Node.
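The gradient routing described above can be sketched as a standalone function over a flat NCHW buffer (an illustrative assumption; not the real NeuZephyr implementation). The upstream gradient for each (batch, channel) output element flows only to the position of that channel's maximum input value; every other position receives zero:

```cpp
#include <cstddef>
#include <vector>

// Sketch of global max pooling backward: scatter each output gradient
// to the argmax position of the corresponding H*W input slice.
std::vector<float> globalMaxPoolBackward(const std::vector<float>& in,
                                         const std::vector<float>& outGrad,
                                         std::size_t n, std::size_t c,
                                         std::size_t h, std::size_t w) {
    std::vector<float> inGrad(in.size(), 0.0f);   // all zeros initially
    const std::size_t spatial = h * w;
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < c; ++j) {
            const std::size_t base = (i * c + j) * spatial;
            std::size_t argmax = base;            // locate the max position
            for (std::size_t k = base + 1; k < base + spatial; ++k) {
                if (in[k] > in[argmax]) argmax = k;
            }
            inGrad[argmax] = outGrad[i * c + j];  // route the gradient there
        }
    }
    return inGrad;
}
```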
void nz::nodes::calc::GlobalMaxPoolNode::forward() [override], [virtual]

Performs the forward pass of the global max-pooling operation.

Returns
    None
This function conducts the forward pass of the global max-pooling operation. It iterates over each batch and channel of the input tensor's output. For each combination of batch index i and channel index j, it computes the maximum value in the corresponding slice of the input tensor using the max method. Then, it fills the corresponding position in the output tensor using the fillMatrix method.
Memory management strategy: This function does not allocate or deallocate any memory directly; it operates on the existing data tensors of the input and output.
Exception handling mechanism: There is no explicit exception handling in this function. If the max or fillMatrix methods encounter an error, they may throw an exception depending on their implementation.
Exceptions
    [Exception type from max or fillMatrix] - If the max or fillMatrix methods encounter an error during execution.
Complexity: the function iterates over all b * c (batch, channel) pairs, where b is the batch size (inputs[0]->output->shape()[0]) and c is the number of channels (inputs[0]->output->shape()[1]).

Implements nz::nodes::Node.
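The per-(i, j) max-then-fill loop described above can be sketched over a flat NCHW buffer (an illustrative assumption; not the real NeuZephyr implementation, which uses the Tensor max and fillMatrix methods):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Sketch of global max pooling forward: reduce each channel's H*W
// slice to its maximum value, producing an (N, C, 1, 1) result stored
// as a flat vector of n * c elements.
std::vector<float> globalMaxPoolForward(const std::vector<float>& in,
                                        std::size_t n, std::size_t c,
                                        std::size_t h, std::size_t w) {
    std::vector<float> out(n * c);
    const std::size_t spatial = h * w;
    for (std::size_t i = 0; i < n; ++i) {
        for (std::size_t j = 0; j < c; ++j) {
            const float* slice = in.data() + (i * c + j) * spatial;
            // maximum over the channel's spatial slice
            out[i * c + j] = *std::max_element(slice, slice + spatial);
        }
    }
    return out;
}
```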