NeuZephyr
Simple DL Framework
nz::nodes::calc::ScalarSubNode Class Reference

Represents a scalar subtraction operation node in a computational graph. More...

Inheritance diagram for nz::nodes::calc::ScalarSubNode:
Collaboration diagram for nz::nodes::calc::ScalarSubNode:

Public Member Functions

 ScalarSubNode (Node *input, Tensor::value_type scalar)
 Constructor to initialize a ScalarSubNode for scalar subtraction.
 
void forward () override
 Forward pass for the ScalarSubNode to perform scalar subtraction.
 
void backward () override
 Backward pass for the ScalarSubNode to propagate gradients.
 
- Public Member Functions inherited from nz::nodes::Node
virtual void print (std::ostream &os) const
 Prints the type, data, and gradient of the node.
 
void dataInject (Tensor::value_type *data, bool grad=false) const
 Injects data into the node's output tensor, optionally setting its gradient requirement.
 
template<typename Iterator >
void dataInject (Iterator begin, Iterator end, const bool grad=false) const
 Injects data from an iterator range into the output tensor of the InputNode, optionally setting its gradient requirement.
 
void dataInject (const std::initializer_list< Tensor::value_type > &data, bool grad=false) const
 Injects data from a std::initializer_list into the output tensor of the Node, optionally setting its gradient requirement.
 

Detailed Description

Represents a scalar subtraction operation node in a computational graph.

The ScalarSubNode class performs element-wise subtraction of a scalar value from a tensor. It is commonly used in computational graphs to offset tensor values or perform subtraction-based normalization tasks.

Key features:

  • Forward Pass: Subtracts a scalar value from each element of the input tensor and stores the result in the output tensor.
  • Backward Pass: Propagates gradients from the output tensor back to the input tensor. Since the derivative of subtraction with respect to the input is 1, the gradient from the output tensor is directly transferred to the input tensor.
  • Shape Preservation: Maintains the shape of the input tensor in the output tensor.
  • Gradient Management: Tracks whether gradients are required for the operation based on the properties of the input tensor.

This class is part of the nz::nodes namespace and facilitates scalar-tensor subtraction operations in computational graphs.

Note
  • The scalar value is applied consistently across all elements of the input tensor.
  • A warning is issued indicating that scalar operations do not support saving to files, and users are encouraged to use matrix operations for model persistence.

Usage Example:

// Example: Using ScalarSubNode for scalar subtraction
InputNode input({3, 3}, true); // Create an input node with shape {3, 3}
input.output->fill(10.0f); // Fill the input tensor with value 10.0
ScalarSubNode scalar_sub_node(&input, 5.0f); // Subtract 5.0 from the input tensor
scalar_sub_node.forward(); // Perform the forward pass
scalar_sub_node.backward(); // Propagate gradients in the backward pass
std::cout << "Output: " << *scalar_sub_node.output << std::endl; // Print the result
See also
forward() for the scalar subtraction computation in the forward pass.
backward() for gradient propagation in the backward pass.
Warning
  • Scalar operations are not yet supported for saving to files. Use matrix operations as an alternative.
Author
Mgepahmge (https://github.com/Mgepahmge)
Date
2024/12/05

Definition at line 1640 of file Nodes.cuh.

Constructor & Destructor Documentation

◆ ScalarSubNode()

nz::nodes::calc::ScalarSubNode::ScalarSubNode (Node *input, Tensor::value_type scalar)

Constructor to initialize a ScalarSubNode for scalar subtraction.

The constructor initializes a ScalarSubNode, which performs element-wise subtraction of a scalar value from the elements of the input tensor. It establishes the connection between the input node and this node, prepares the output tensor with the appropriate shape and properties, and stores the negated scalar value for use during forward and backward passes.

Parameters
  input    A pointer to the input node. Its output tensor will have the scalar value subtracted from it.
  scalar   The scalar value to subtract from each element of the input tensor.
  • The input node is added to the inputs vector to establish the connection in the computational graph.
  • The output tensor is initialized with the same shape as the input tensor, and the requires_grad property is determined based on the input tensor's gradient requirements.
  • The scalar value is negated and stored internally for efficient computation during the forward pass.
  • A warning is issued indicating that scalar operations do not support saving to files, encouraging the use of matrix operations for models requiring persistence.
Note
  • The negation of the scalar value simplifies computation during the forward pass, treating subtraction as addition with a negated scalar.
  • This node supports automatic gradient tracking if the input tensor requires gradients.
See also
forward() for the forward pass implementation.
backward() for gradient propagation in the backward pass.
Warning
  • Scalar operations are not yet supported for saving to files. Use matrix operations as an alternative.
Author
Mgepahmge (https://github.com/Mgepahmge)
Date
2024/12/05

Definition at line 259 of file Nodes.cu.
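
The sketch below illustrates the construction steps listed above. It is not NeuZephyr's actual source: the member names (inputs, output, scalar) and the Tensor constructor signature are assumptions made purely for illustration.

// Hypothetical sketch of the documented constructor behaviour; member names and
// the Tensor interface are assumed, not taken from NeuZephyr's implementation.
ScalarSubNode::ScalarSubNode(Node* input, Tensor::value_type scalar) {
    inputs.push_back(input);                                    // register the edge in the computational graph
    output = std::make_shared<Tensor>(input->output->shape(),   // same shape as the input tensor
                                      input->output->requiresGrad()); // inherit the gradient requirement
    this->scalar = -scalar;                                     // store the negated scalar for the ScalarAdd kernel
    std::cout << "[Warning] Scalar operations do not support saving to files; "
                 "use matrix operations for model persistence." << std::endl;
}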

Member Function Documentation

◆ backward()

void nz::nodes::calc::ScalarSubNode::backward ( ) [override, virtual]

Backward pass for the ScalarSubNode to propagate gradients.

The backward() method propagates the gradient of the loss from the output tensor back to the input tensor. Since the derivative of subtraction with respect to the input is 1, the gradient from the output tensor is directly copied to the input tensor's gradient.

  • The method checks if the input tensor requires gradients. If true, the gradient of the output tensor is copied directly to the gradient of the input tensor using cudaMemcpy.
  • This operation ensures efficient gradient propagation without requiring additional computation.
Note
  • The backward pass assumes that the gradient of the output tensor is already computed and properly initialized.
  • The subtraction operation does not alter the gradient values, enabling a straightforward gradient transfer.
See also
forward() for the scalar subtraction computation in the forward pass.
Author
Mgepahmge (https://github.com/Mgepahmge)
Date
2024/12/05

Implements nz::nodes::Node.

Definition at line 275 of file Nodes.cu.
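
A minimal standalone sketch of the gradient pass-through described above. The function and buffer names are illustrative assumptions, not NeuZephyr's own; only the cudaMemcpy-based device-to-device copy mirrors the documented behaviour.

#include <cstddef>
#include <cuda_runtime.h>

// Because d(output)/d(input) = 1 for subtraction, the output gradient is copied
// verbatim into the input gradient buffer when the input tracks gradients.
void scalarSubBackwardSketch(float* input_grad, const float* output_grad,
                             std::size_t n, bool input_requires_grad) {
    if (!input_requires_grad) {
        return; // gradient tracking disabled for the input tensor, nothing to propagate
    }
    cudaMemcpy(input_grad, output_grad, n * sizeof(float), cudaMemcpyDeviceToDevice);
}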


◆ forward()

void nz::nodes::calc::ScalarSubNode::forward ( ) [override, virtual]

Forward pass for the ScalarSubNode to perform scalar subtraction.

The forward() method computes the element-wise subtraction of a scalar value from the input tensor. Internally, it utilizes the addition kernel (ScalarAdd) by treating the subtraction as addition with a negated scalar value, which was preprocessed during node construction.

  • A CUDA kernel (ScalarAdd) is launched to add the negated scalar value to each element of the input tensor.
  • The grid and block dimensions are dynamically calculated based on the size of the output tensor to optimize GPU parallelism.
  • The result of the operation is stored in the output tensor.
Note
  • The subtraction operation is effectively performed as output[i] = input[i] - scalar, achieved by using output[i] = input[i] + (-scalar) for efficiency.
  • The scalar value was negated during construction, making this method consistent with the addition kernel.
See also
backward() for gradient propagation in the backward pass.
Author
Mgepahmge (https://github.com/Mgepahmge)
Date
2024/12/05

Implements nz::nodes::Node.

Definition at line 269 of file Nodes.cu.
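
The self-contained sketch below reproduces the technique described above outside the framework: an element-wise "scalar add" kernel is launched with the pre-negated scalar, and the grid/block dimensions are derived from the output size. The kernel name ScalarAddKernel and all variable names are assumptions for illustration only.

#include <cuda_runtime.h>
#include <iostream>

// Adds a scalar to every element; with a negated scalar this performs subtraction.
__global__ void ScalarAddKernel(float* out, const float* in, float scalar, size_t n) {
    const size_t i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        out[i] = in[i] + scalar; // scalar is already negated, so this is in[i] - |scalar|
    }
}

int main() {
    const size_t n = 9;                        // e.g. a {3, 3} tensor
    const float scalar = 5.0f;                 // value to subtract
    float host[n];
    for (size_t i = 0; i < n; ++i) host[i] = 10.0f;

    float *d_in, *d_out;
    cudaMalloc(&d_in, n * sizeof(float));
    cudaMalloc(&d_out, n * sizeof(float));
    cudaMemcpy(d_in, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Grid/block dimensions derived from the output size, as the documentation describes.
    const unsigned block = 256;
    const unsigned grid = static_cast<unsigned>((n + block - 1) / block);
    ScalarAddKernel<<<grid, block>>>(d_out, d_in, -scalar, n); // add the negated scalar

    cudaMemcpy(host, d_out, n * sizeof(float), cudaMemcpyDeviceToHost);
    std::cout << host[0] << std::endl;         // prints 5 (10 - 5)

    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}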


The documentation for this class was generated from the following files:
  • Nodes.cuh
  • Nodes.cu