NeuZephyr Simple DL Framework
nz::nodes::calc::AveragePoolingNode Class Reference
Implements the average pooling operation for spatial downsampling in neural networks.
Public Member Functions
AveragePoolingNode (Node *input, Tensor::size_type poolSize, Tensor::size_type stride, Tensor::size_type padding)
Constructs an AveragePoolingNode object.
void forward () override
Performs the forward pass of the average pooling operation.
void backward () override
Performs the backward pass of the average pooling operation.
Public Member Functions inherited from nz::nodes::Node
virtual void print (std::ostream &os) const
Prints the type, data, and gradient of the node.
void dataInject (Tensor::value_type *data, bool grad=false) const
Injects data into a relevant tensor object, optionally setting its gradient requirement.
template<typename Iterator>
void dataInject (Iterator begin, Iterator end, const bool grad=false) const
Injects data from an iterator range into the output tensor of the InputNode, optionally setting its gradient requirement.
void dataInject (const std::initializer_list< Tensor::value_type > &data, bool grad=false) const
Injects data from a std::initializer_list into the output tensor of the Node, optionally setting its gradient requirement.
Implements the average pooling operation for spatial downsampling in neural networks.
This node performs spatial averaging over sliding windows of (poolSize x poolSize) dimensions, reducing feature map resolution while maintaining channel depth. Commonly used for dimensionality reduction and translation invariance in CNNs.
Core functionality and characteristics:
Key implementation aspects:
Typical use cases:
Critical considerations:
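To make the typical use concrete, here is a minimal usage sketch. It is illustrative only: the include path, the InputNode constructor, and the exact namespaces are assumptions, while the AveragePoolingNode constructor, dataInject, forward, and backward calls follow the signatures documented on this page.

```cpp
#include <vector>
#include "NeuZephyr/Nodes.h"  // assumed include path

using namespace nz::nodes;    // namespace layout of helper types is assumed

int main() {
    // Hypothetical: an InputNode holding a 1x1x8x8 feature map that requires gradients.
    InputNode input({1, 1, 8, 8}, /*requiresGrad=*/true);

    // Fill the input tensor with host data (iterator-range overload documented above).
    std::vector<Tensor::value_type> host(64, 1.0f);
    input.dataInject(host.begin(), host.end());

    // 2x2 average pooling with stride 2 and no padding: 8x8 spatial -> 4x4 spatial.
    calc::AveragePoolingNode pool(&input, /*poolSize=*/2, /*stride=*/2, /*padding=*/0);

    pool.forward();   // averages each 2x2 window into the output tensor
    pool.backward();  // propagates output gradients back to the input
    return 0;
}
```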
nz::nodes::calc::AveragePoolingNode::AveragePoolingNode (Node *input, Tensor::size_type poolSize, Tensor::size_type stride, Tensor::size_type padding)
Constructs an AveragePoolingNode object.
Parameters
input    A pointer to the input node. The memory of this pointer is assumed to be managed externally and is used in a read-only manner within this constructor (host-to-host).
poolSize    The size of the pooling window. It is a value of type Tensor::size_type and is used to determine the dimensions of the pooling operation.
stride    The stride value for the pooling operation. It is of type Tensor::size_type and controls how the pooling window moves across the input tensor.
padding    The padding value applied to the input tensor before the pooling operation. It is of type Tensor::size_type.
This constructor initializes an AveragePoolingNode object. It first stores the provided input node pointer in the inputs vector. It then creates a new shared pointer to a Tensor object for the output member. The shape of the output tensor is calculated from the shape of the input tensor and the poolSize, stride, and padding values using the OUTPUT_DIM macro. The requiresGrad flag of the output tensor is set to the same value as that of the input node's output. Finally, the node's type member is set to "AveragePooling".
Memory management strategy: The constructor does not allocate memory for the input node; it only stores a pointer to it. The output tensor is created using std::make_shared, which manages its memory automatically.
Exception handling mechanism: There is no explicit exception handling in this constructor. If the std::make_shared call fails to allocate memory for the output tensor, it may throw a std::bad_alloc exception.
Exceptions
std::bad_alloc    If memory allocation for the output tensor fails.
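For reference, the output spatial size follows the standard pooling relation that the OUTPUT_DIM macro presumably implements. The helper below is a hypothetical stand-in for that macro, not the framework's code.

```cpp
#include <cstddef>

// Hypothetical stand-in for OUTPUT_DIM: standard pooling size relation,
// outDim = floor((inDim + 2 * padding - poolSize) / stride) + 1.
std::size_t outputDim(std::size_t inDim, std::size_t poolSize,
                      std::size_t stride, std::size_t padding) {
    return (inDim + 2 * padding - poolSize) / stride + 1;
}

// Example: an 8x8 input with poolSize = 2, stride = 2, padding = 0 yields a 4x4 output.
```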
void nz::nodes::calc::AveragePoolingNode::forward ( )
[override, virtual]
Performs the forward pass of the average pooling operation.
Returns
None
This function conducts the forward pass of the average pooling operation. It slides a (poolSize x poolSize) window across the spatial dimensions of the input node's output tensor, using the configured stride and padding, and writes the average of each window into the corresponding element of the output tensor. Channel depth is preserved; only the spatial resolution is reduced.
Memory management strategy: This function does not allocate or deallocate any memory directly. It reads from the input node's output tensor and writes into the output tensor allocated in the constructor.
Exception handling mechanism: There is no explicit exception handling in this function; any exception raised by the underlying pooling routine propagates to the caller.
The time complexity is determined by the underlying pooling routine; a direct implementation visits each pooling window once, giving O(n * poolSize^2) for n output elements.
Implements nz::nodes::Node.
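To illustrate the averaging step concretely, here is a naive single-channel reference implementation over a row-major buffer. It is a sketch only and not the framework's kernel; in particular, whether zero-padded positions count toward the divisor is an assumption.

```cpp
#include <cstddef>
#include <vector>

// Naive single-channel average pooling over a row-major H x W buffer.
// Zero padding is assumed; padded positions contribute 0 to the window sum
// but still count toward the divisor (this count_include_pad behaviour is an assumption).
std::vector<float> averagePool2d(const std::vector<float>& in,
                                 std::size_t H, std::size_t W,
                                 std::size_t pool, std::size_t stride, std::size_t pad) {
    const std::size_t outH = (H + 2 * pad - pool) / stride + 1;
    const std::size_t outW = (W + 2 * pad - pool) / stride + 1;
    std::vector<float> out(outH * outW, 0.0f);
    for (std::size_t oy = 0; oy < outH; ++oy) {
        for (std::size_t ox = 0; ox < outW; ++ox) {
            float sum = 0.0f;
            for (std::size_t ky = 0; ky < pool; ++ky) {
                for (std::size_t kx = 0; kx < pool; ++kx) {
                    // Map the window element back to unpadded input coordinates.
                    const long iy = static_cast<long>(oy * stride + ky) - static_cast<long>(pad);
                    const long ix = static_cast<long>(ox * stride + kx) - static_cast<long>(pad);
                    if (iy >= 0 && ix >= 0 &&
                        iy < static_cast<long>(H) && ix < static_cast<long>(W))
                        sum += in[static_cast<std::size_t>(iy) * W + static_cast<std::size_t>(ix)];
                }
            }
            out[oy * outW + ox] = sum / static_cast<float>(pool * pool);
        }
    }
    return out;
}
```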
void nz::nodes::calc::AveragePoolingNode::backward ( )
[override, virtual]
Performs the backward pass of the average pooling operation.
Returns
None
This function conducts the backward pass of the average pooling operation. It first checks whether the output tensor of the input node requires gradient computation. If it does, the function calls iAveragePoolingBackward, passing the gradient tensor of the input node's output, the gradient tensor of the output, the pooling size, stride, padding, and the dimensions of the input and output tensors. The iAveragePoolingBackward function computes the gradients and propagates them back to the input.
Memory management strategy: This function does not allocate or deallocate any memory directly. It operates on the existing gradient tensors of the input and output.
Exception handling mechanism: There is no explicit exception handling in this function. If the iAveragePoolingBackward function encounters an error, it may throw an exception; the specific type depends on the implementation of iAveragePoolingBackward.
Exceptions
[Exception type from iAveragePoolingBackward]    If the iAveragePoolingBackward function encounters an error during execution.
The time complexity of this function depends on the iAveragePoolingBackward function. If iAveragePoolingBackward has a time complexity of O(n), where n is the number of elements in the input or output gradient tensors, then this backward pass also has a time complexity of O(n).
Implements nz::nodes::Node.
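To make the gradient flow concrete, here is a naive counterpart of what a routine like iAveragePoolingBackward computes, under the same single-channel, row-major, zero-padding assumptions as the forward sketch above: each output gradient is split evenly across the input positions covered by its pooling window.

```cpp
#include <cstddef>
#include <vector>

// Naive single-channel backward pass for 2D average pooling.
// Each output gradient contributes gradOut / (pool * pool) to every in-bounds
// input position in its window; padded positions are skipped. Sketch only,
// not the framework's kernel.
std::vector<float> averagePoolBackward2d(const std::vector<float>& gradOut,
                                         std::size_t H, std::size_t W,
                                         std::size_t pool, std::size_t stride, std::size_t pad) {
    const std::size_t outH = (H + 2 * pad - pool) / stride + 1;
    const std::size_t outW = (W + 2 * pad - pool) / stride + 1;
    std::vector<float> gradIn(H * W, 0.0f);
    const float scale = 1.0f / static_cast<float>(pool * pool);
    for (std::size_t oy = 0; oy < outH; ++oy) {
        for (std::size_t ox = 0; ox < outW; ++ox) {
            const float g = gradOut[oy * outW + ox] * scale;
            for (std::size_t ky = 0; ky < pool; ++ky) {
                for (std::size_t kx = 0; kx < pool; ++kx) {
                    const long iy = static_cast<long>(oy * stride + ky) - static_cast<long>(pad);
                    const long ix = static_cast<long>(ox * stride + kx) - static_cast<long>(pad);
                    if (iy >= 0 && ix >= 0 &&
                        iy < static_cast<long>(H) && ix < static_cast<long>(W))
                        gradIn[static_cast<std::size_t>(iy) * W + static_cast<std::size_t>(ix)] += g;
                }
            }
        }
    }
    return gradIn;
}
```

Each input and output gradient element is touched a bounded number of times per window, which is consistent with the O(n) behaviour described above when the window size is fixed.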