NeuZephyr: Simple DL Framework

ReLUNode Class Reference

Represents a Rectified Linear Unit (ReLU) operation node in a computational graph.
Public Member Functions

- ReLUNode(Node *input)
  Constructor to initialize a ReLUNode for applying the ReLU activation function.
- void forward() override
  Forward pass for the ReLUNode to apply the ReLU activation function.
- void backward() override
  Backward pass for the ReLUNode to compute gradients.

Public Member Functions inherited from nz::nodes::Node

- virtual void print(std::ostream &os) const
  Prints the type, data, and gradient of the node.
- void dataInject(Tensor::value_type *data, bool grad = false) const
  Injects data into the relevant tensor object, optionally setting its gradient requirement.
- template<typename Iterator> void dataInject(Iterator begin, Iterator end, const bool grad = false) const
  Injects data from an iterator range into the output tensor of the node, optionally setting its gradient requirement.
- void dataInject(const std::initializer_list<Tensor::value_type> &data, bool grad = false) const
  Injects data from a std::initializer_list into the output tensor of the node, optionally setting its gradient requirement.
Detailed Description

Represents a Rectified Linear Unit (ReLU) operation node in a computational graph.

The ReLUNode class applies the ReLU activation function to the input tensor. ReLU is a commonly used non-linear activation function in neural networks, defined as ReLU(x) = max(0, x). It introduces non-linearity and sparsity into the network.

Key features:

- The forward pass applies ReLU element-wise on the GPU via a CUDA kernel.
- The backward pass propagates gradients only for elements where the input values are positive.
- The output tensor has the same shape as the input tensor, and gradient tracking follows the input tensor's requirements.

This class is part of the nz::nodes namespace and is typically used in constructing neural network models to introduce non-linearity between layers.
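For reference, a plain C++ sketch of the function and of the derivative used during backpropagation; the helper names are illustrative only and are not part of the framework:

```cpp
#include <algorithm>

// ReLU(x) = max(0, x)
inline float relu(float x) { return std::max(0.0f, x); }

// Derivative of ReLU: 1 for x > 0, 0 otherwise.
// This is the mask applied to incoming gradients in the backward pass.
inline float relu_grad(float x) { return x > 0.0f ? 1.0f : 0.0f; }
```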
ReLUNode()

ReLUNode::ReLUNode(Node *input)  [explicit]

Constructor to initialize a ReLUNode for applying the ReLU activation function.

The constructor initializes a ReLUNode, which applies the Rectified Linear Unit (ReLU) activation function to an input tensor. It establishes a connection to the input node, initializes the output tensor, and sets the type of the node to "ReLU".

Parameters:

- input: A pointer to the input node. Its output tensor will have the ReLU activation applied.

Notes:

- The input node is added to the inputs vector to establish the connection in the computational graph.
- The output tensor is initialized with the same shape as the input tensor, and its gradient tracking is determined based on the input tensor's requirements.
- The activation function is ReLU(x) = max(0, x); it is applied during the forward pass.
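A minimal usage sketch based on the members listed on this page. The InputNode type appears in the inherited dataInject documentation, but its constructor arguments (a shape and a gradient flag) and the include path are assumptions, so adjust them to the actual framework headers:

```cpp
#include <iostream>
#include "Nodes.cuh"   // assumed include path for the node classes

using namespace nz::nodes;

int main() {
    // Assumed: an InputNode holding a 2x2 tensor that tracks gradients.
    InputNode x({2, 2}, true);
    x.dataInject({-1.0f, 0.5f, 2.0f, -3.0f});

    // Wires x into the ReLU node's inputs and shapes its output tensor.
    ReLUNode relu(&x);

    relu.forward();          // output = max(0, x), element-wise
    // In a full graph, a loss node would seed the output gradient before this call.
    relu.backward();         // gradients flow only where x > 0

    relu.print(std::cout);   // prints the node's type, data, and gradient
    return 0;
}
```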
backward()

void ReLUNode::backward()  [override, virtual]

Backward pass for the ReLUNode to compute gradients.

The backward() method computes the gradient of the loss with respect to the input tensor by applying the derivative of the ReLU activation function. Gradients are propagated only for elements where the input tensor values are positive; otherwise, the gradients are set to zero.

Notes:

- A CUDA kernel (ReLUBackward) is launched to compute the gradients in parallel on the GPU.
- The kernel launch configuration is based on the size of the output tensor.
- Gradients are propagated only if the input tensor's requiresGrad property is true.

Implements nz::nodes::Node.

Definition at line 357 of file Nodes.cu.
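The actual ReLUBackward kernel lives in Nodes.cu; the following is only an illustrative sketch of the gradient rule described above, with assumed parameter names and a float element type:

```cpp
// Illustrative sketch, not the NeuZephyr implementation.
__global__ void reluBackwardSketch(float* inputGrad, const float* input,
                                   const float* outputGrad, size_t n) {
    const size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // dL/dx = dL/dy where x > 0, and 0 elsewhere.
        inputGrad[i] = input[i] > 0.0f ? outputGrad[i] : 0.0f;
    }
}
```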
forward()

void ReLUNode::forward()  [override, virtual]

Forward pass for the ReLUNode to apply the ReLU activation function.

The forward() method applies the Rectified Linear Unit (ReLU) activation function element-wise to the input tensor. Values less than zero are set to zero, while non-negative values remain unchanged. The results are stored in the output tensor.

Notes:

- A CUDA kernel (RectifiedLinearUnit) is launched to compute the ReLU activation in parallel on the GPU.
- The kernel launch configuration is based on the size of the output tensor to optimize GPU performance.
- The method computes ReLU(x) = max(0, x) for each element of the input tensor.

Implements nz::nodes::Node.

Definition at line 351 of file Nodes.cu.
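For illustration, here is a sketch of an element-wise ReLU kernel together with the usual launch configuration derived from the output size; the names and the float element type are assumptions, not the RectifiedLinearUnit kernel from Nodes.cu:

```cpp
// Illustrative sketch, not the NeuZephyr implementation.
__global__ void reluForwardSketch(float* out, const float* in, size_t n) {
    const size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = in[i] > 0.0f ? in[i] : 0.0f;   // ReLU(x) = max(0, x)
    }
}

void launchReluForwardSketch(float* out, const float* in, size_t n) {
    constexpr unsigned threadsPerBlock = 256;
    const unsigned blocks =
        static_cast<unsigned>((n + threadsPerBlock - 1) / threadsPerBlock);
    // One thread per output element.
    reluForwardSketch<<<blocks, threadsPerBlock>>>(out, in, n);
}
```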