tensorplay.nn.functional

The functional module provides a stateless interface for neural network operations. Unlike nn.Module classes, these functions do not hold state (parameters) and must be passed all necessary weights and biases as arguments.

Overview

This module is frequently used when:

  • Implementing custom layers in forward methods.
  • Using operations that don't have learnable parameters (e.g., relu, max_pool2d).
  • Applying complex transformations that require fine-grained control over inputs.
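
The contrast is easiest to see in plain Python. The sketch below uses hypothetical helper names (not tensorplay code): the stateless function receives its weights on every call, while the stateful object stores them.

```python
def linear_functional(x, weight, bias):
    # Stateless: parameters arrive as arguments on every call.
    # Computes y[i] = sum_j x[j] * weight[i][j] + bias[i].
    return [sum(xi * wi for xi, wi in zip(x, row)) + b
            for row, b in zip(weight, bias)]

class LinearModule:
    """Stateful counterpart: holds its own weight and bias."""
    def __init__(self, weight, bias):
        self.weight = weight
        self.bias = bias

    def __call__(self, x):
        # Delegates to the stateless function with the stored parameters.
        return linear_functional(x, self.weight, self.bias)

x = [1.0, 2.0]
w = [[1.0, 0.0], [0.0, 1.0]]  # identity weight matrix
b = [0.5, -0.5]
assert linear_functional(x, w, b) == LinearModule(w, b)(x) == [1.5, 1.5]
```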

Classes

class Tensor

python
Tensor(*args, **kwargs)
Methods

cpu(self) [source]

Returns a copy of this object in CPU memory. If this object is already in CPU memory, then no copy is performed and the original object is returned.


cuda(self, device=None, non_blocking=False) [source]

Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.


double(self) [source]

Casts this tensor to 64-bit floating-point (double) dtype.

flatten(self, start_dim=0, end_dim=-1) [source]

Flattens a contiguous range of dims.


float(self) [source]

Casts this tensor to 32-bit floating-point dtype.

int(self) [source]

Casts this tensor to 32-bit integer dtype.

is_float(self) -> bool [source]

Returns True if this tensor has a floating-point dtype.


long(self) [source]

Casts this tensor to 64-bit integer dtype.

ndimension(self) -> int [source]

Alias for dim().


t(self) [source]

Returns the transpose of the tensor. Aliased to transpose(0, 1) to ensure correct autograd behavior (TransposeBackward).


type(self, dtype=None, non_blocking=False, **kwargs) [source]

Returns the type if dtype is not provided, else casts this object to the specified type.


unflatten(self, dim, sizes) [source]

Expands a dimension of the input tensor over multiple dimensions.


Functions

adaptive_avg_pool2d() [source]

python
adaptive_avg_pool2d(input, output_size)

adaptive_max_pool2d() [source]

python
adaptive_max_pool2d(input, output_size)

alpha_dropout() [source]

python
alpha_dropout(input, p=0.5, training=True, inplace=False)

avg_pool2d() [source]

python
avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)

batch_norm() [source]

python
batch_norm(input, running_mean=None, running_var=None, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)
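
As a rough sketch of what training-mode batch normalization computes, the plain-Python helper below (hypothetical name, not the tensorplay implementation) normalizes each feature using batch statistics; it assumes the conventional biased variance and omits the affine weight/bias and running-statistics update.

```python
import math

def batch_norm_1d(batch, eps=1e-5):
    # Normalize each feature (column) across the batch dimension,
    # using batch statistics as in training mode.
    n = len(batch)
    out_cols = []
    for col in zip(*batch):
        mean = sum(col) / n
        var = sum((v - mean) ** 2 for v in col) / n  # biased variance
        out_cols.append([(v - mean) / math.sqrt(var + eps) for v in col])
    return [list(row) for row in zip(*out_cols)]

normed = batch_norm_1d([[1.0, 10.0], [3.0, 30.0]])
assert abs(sum(r[0] for r in normed)) < 1e-6  # zero mean per feature
```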

bilinear() [source]

python
bilinear(input1, input2, weight, bias=None)

conv1d() [source]

python
conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

Applies a 1D convolution over an input signal composed of several input planes.

See tensorplay.nn.Conv1d for details and output shape.

Args

  • input: input tensor of shape (minibatch, in_channels, iW)
  • weight: filters of shape (out_channels, in_channels / groups, kW)
  • bias: optional bias of shape (out_channels). Default: None
  • stride: the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1
  • padding: implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0
  • dilation: the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1
  • groups: split input into groups; in_channels should be divisible by the number of groups. Default: 1

Examples

python
inputs = tp.randn(33, 16, 30)
filters = tp.randn(20, 16, 5)
F.conv1d(inputs, filters)

conv2d() [source]

python
conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

Applies a 2D convolution over an input image composed of several input planes.

See tensorplay.nn.Conv2d for details and output shape.

Args

  • input: input tensor of shape (minibatch, in_channels, iH, iW)
  • weight: filters of shape (out_channels, in_channels / groups, kH, kW)
  • bias: optional bias tensor of shape (out_channels). Default: None
  • stride: the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
  • padding: implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
  • dilation: the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
  • groups: split input into groups; both in_channels and out_channels should be divisible by the number of groups. Default: 1

Examples

python
# With square kernels and padding of 1
filters = tp.randn(8, 4, 3, 3)
inputs = tp.randn(1, 4, 5, 5)
F.conv2d(inputs, filters, padding=1)

conv3d() [source]

python
conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

Applies a 3D convolution over an input image composed of several input planes.

See tensorplay.nn.Conv3d for details and output shape.

Args

  • input: input tensor of shape (minibatch, in_channels, iD, iH, iW)
  • weight: filters of shape (out_channels, in_channels / groups, kD, kH, kW)
  • bias: optional bias tensor of shape (out_channels). Default: None
  • stride: the stride of the convolving kernel. Can be a single number or a tuple (sD, sH, sW). Default: 1
  • padding: implicit paddings on both sides of the input. Can be a single number or a tuple (padD, padH, padW). Default: 0
  • dilation: the spacing between kernel elements. Can be a single number or a tuple (dD, dH, dW). Default: 1
  • groups: split input into groups; both in_channels and out_channels should be divisible by the number of groups. Default: 1

Examples

python
# With cubic kernels and padding of 1
filters = tp.randn(8, 4, 3, 3, 3)
inputs = tp.randn(1, 4, 5, 5, 5)
F.conv3d(inputs, filters, padding=1)

conv_transpose2d() [source]

python
conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)

conv_transpose3d() [source]

python
conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)

cross_entropy() [source]

python
cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)
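
Conventionally, cross-entropy on raw logits is a log-softmax followed by a negative log-likelihood. A plain-Python sketch for a single sample (hypothetical helper name, not the tensorplay implementation; ignores weighting, ignore_index, and label smoothing):

```python
import math

def cross_entropy_scalar(logits, target):
    # loss = -(logits[target] - logsumexp(logits))
    m = max(logits)  # subtract the max to stabilize the exponentials
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return -(logits[target] - log_sum)

loss = cross_entropy_scalar([2.0, 1.0, 0.1], target=0)
assert 0.0 < loss < 1.0  # small loss: the correct class has the largest logit
```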

dropout() [source]

python
dropout(input, p=0.5, training=True, inplace=False)
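
Dropout conventionally follows the "inverted dropout" scheme: during training each element is zeroed with probability p and survivors are scaled by 1/(1-p), so no rescaling is needed at evaluation time. A plain-Python sketch (hypothetical helper; the seed parameter is for reproducibility of the sketch only, not part of any tensorplay API):

```python
import random

def dropout_ref(xs, p=0.5, training=True, seed=0):
    # Identity at eval time; inverted dropout during training.
    if not training or p == 0.0:
        return list(xs)
    rng = random.Random(seed)
    return [0.0 if rng.random() < p else x / (1.0 - p) for x in xs]

assert dropout_ref([1.0, 2.0], training=False) == [1.0, 2.0]
out = dropout_ref([1.0] * 8, p=0.5, seed=42)
assert all(v in (0.0, 2.0) for v in out)  # zeroed or scaled by 1/(1-p)
```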

dropout2d() [source]

python
dropout2d(input, p=0.5, training=True, inplace=False)

dropout3d() [source]

python
dropout3d(input, p=0.5, training=True, inplace=False)

embedding() [source]

python
embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)
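
Semantically, an embedding is a row lookup into the weight matrix. A plain-Python sketch (hypothetical helper name; ignores padding_idx, max_norm, and the sparse-gradient options):

```python
def embedding_lookup(indices, weight):
    # Each index selects one row of the embedding table.
    return [weight[i] for i in indices]

table = [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]]
assert embedding_lookup([2, 0], table) == [[2.0, 2.0], [0.0, 0.0]]
```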

flatten() [source]

python
flatten(input, start_dim=0, end_dim=-1)
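
flatten only reshapes: it collapses the dims from start_dim through end_dim into a single dim whose size is their product. A plain-Python sketch of the output-shape rule (hypothetical helper name):

```python
from math import prod

def flatten_shape(shape, start_dim=0, end_dim=-1):
    # Collapse dims [start_dim, end_dim] into one dim of their product.
    nd = len(shape)
    s = start_dim % nd  # also normalizes negative dims
    e = end_dim % nd
    return shape[:s] + (prod(shape[s:e + 1]),) + shape[e + 1:]

assert flatten_shape((2, 3, 4)) == (24,)           # flatten everything
assert flatten_shape((2, 3, 4), start_dim=1) == (2, 12)
```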

group_norm() [source]

python
group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)

instance_norm() [source]

python
instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05)

layer_norm() [source]

python
layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)

linear() [source]

python
linear(input, weight, bias=None)
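
linear conventionally computes y = x Wᵀ + b with weight laid out as (out_features, in_features). A plain-Python sketch for one input vector (hypothetical helper name):

```python
def linear_ref(x, weight, bias=None):
    # One output per weight row: y[i] = sum_j x[j] * weight[i][j] (+ bias[i])
    out = [sum(xi * wi for xi, wi in zip(x, row)) for row in weight]
    if bias is not None:
        out = [o + b for o, b in zip(out, bias)]
    return out

assert linear_ref([1.0, 2.0], [[3.0, 4.0]], [0.5]) == [11.5]
```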

log_softmax() [source]

python
log_softmax(input, dim=None, dtype=None)

max_pool2d() [source]

python
max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
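
For intuition, here is a plain-Python sketch of non-overlapping max pooling over a single 2-D plane (hypothetical helper name; assumes the conventional behavior that stride defaults to kernel_size, and ignores padding, dilation, and return_indices):

```python
def max_pool2d_ref(x, kernel_size):
    # Take the max over each non-overlapping k-by-k window.
    k = kernel_size
    h, w = len(x), len(x[0])
    return [[max(x[i + di][j + dj] for di in range(k) for dj in range(k))
             for j in range(0, w - k + 1, k)]
            for i in range(0, h - k + 1, k)]

x = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
assert max_pool2d_ref(x, 2) == [[6, 8], [14, 16]]
```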

mse_loss() [source]

python
mse_loss(input, target, reduction='mean')

nll_loss() [source]

python
nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')

pad() [source]

python
pad(input, pad, mode='constant', value=0)
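
The pad argument conventionally gives (left, right) amounts for the last dimension, in pairs working backward from the last dim. A plain-Python sketch of 'constant' padding in the 1-D case (hypothetical helper name):

```python
def pad_1d(x, pad, value=0):
    # 'constant' mode: prepend `left` and append `right` copies of `value`.
    left, right = pad
    return [value] * left + list(x) + [value] * right

assert pad_1d([1, 2, 3], (2, 1)) == [0, 0, 1, 2, 3, 0]
```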

prelu() [source]

python
prelu(input, weight)

relu() [source]

python
relu(input, inplace=False)

silu() [source]

python
silu(input: tensorplay._C.TensorBase, inplace: bool = False) -> tensorplay._C.TensorBase

Apply the Sigmoid Linear Unit (SiLU) function, element-wise.

The SiLU function is also known as the swish function.

silu(x) = x * σ(x)

where σ(x) is the logistic sigmoid.

INFO

See Gaussian Error Linear Units (GELUs) (https://arxiv.org/abs/1606.08415), where the SiLU (Sigmoid Linear Unit) was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning (https://arxiv.org/abs/1702.03118) and Swish: a Self-Gated Activation Function (https://arxiv.org/abs/1710.05941v1), where the SiLU was later experimented with.

See tensorplay.nn.SiLU for more details.
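
A plain-Python sketch of the formula above (hypothetical helper name, not the tensorplay implementation):

```python
import math

def silu_ref(x):
    # silu(x) = x * sigmoid(x) = x / (1 + e^{-x})
    return x / (1.0 + math.exp(-x))

assert silu_ref(0.0) == 0.0          # sigmoid(0) = 0.5, times x = 0
assert silu_ref(100.0) > 99.0        # approaches the identity for large x
```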

softmax() [source]

python
softmax(input, dim=None, dtype=None)
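
A plain-Python sketch of a numerically stable softmax over one dimension (hypothetical helper name):

```python
import math

def softmax_ref(xs):
    # Subtract the max before exponentiating to avoid overflow;
    # this leaves the result unchanged mathematically.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

probs = softmax_ref([1.0, 2.0, 3.0])
assert abs(sum(probs) - 1.0) < 1e-12   # probabilities sum to one
assert probs[2] == max(probs)          # largest input gets largest mass
```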

threshold() [source]

python
threshold(input: tensorplay._C.TensorBase, threshold: float, value: float, inplace: bool = False) -> tensorplay._C.TensorBase

Apply a threshold to each element of the input Tensor.

See tensorplay.nn.Threshold for more details.

Released under the Apache 2.0 License.