tensorplay.nn.functional
The functional module provides a stateless interface for neural network operations. Unlike nn.Module classes, these functions do not hold state (parameters) and must be passed all necessary weights and biases as arguments.
Overview
This module is frequently used when:
- Implementing custom layers in `forward` methods (see the sketch after this list).
- Using operations that don't have learnable parameters (e.g., `relu`, `max_pool2d`).
- Applying complex transformations that require fine-grained control over inputs.
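For example, a custom module can keep its parameters as attributes and route activations through the stateless functions in its `forward`. A minimal sketch; the `nn.Module` and `nn.Conv2d` classes and the `tp`/`F` import aliases are assumed, matching the references elsewhere on this page:

import tensorplay.nn as nn
import tensorplay.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        # The module owns the learnable parameters.
        self.conv = nn.Conv2d(4, 8, 3)

    def forward(self, x):
        x = self.conv(x)            # parameterized op, state held by the module
        x = F.relu(x)               # stateless activation
        return F.max_pool2d(x, 2)   # stateless pooling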
Classes
class Tensor
Tensor(*args, **kwargs)

Methods
cpu(self) [source]
Returns a copy of this object in CPU memory. If this object is already in CPU memory, then no copy is performed and the original object is returned.
cuda(self, device=None, non_blocking=False) [source]
Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
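A short sketch of the no-copy fast path; a CUDA device and the `tp` alias from the sketch above are assumed:

x = tp.randn(2, 3)   # allocated in CPU memory
g = x.cuda()         # copied to the current CUDA device
g2 = g.cuda()        # already resident on that device: g2 is g, no copy
c = g.cpu()          # copied back to CPU memory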
double(self) [source]
flatten(self, start_dim=0, end_dim=-1) [source]
Flattens a contiguous range of dims.
float(self) [source]
int(self) [source]
is_float(self) -> bool [source]
Checks whether the tensor has a floating-point dtype.
long(self) [source]
ndimension(self) -> int [source]
Alias for `dim()`.
t(self) [source]
Returns the transpose of the tensor. Aliased to transpose(0, 1) to ensure correct autograd behavior (TransposeBackward).
type(self, dtype=None, non_blocking=False, **kwargs) [source]
Returns the type if dtype is not provided, else casts this object to the specified type.
unflatten(self, dim, sizes) [source]
Expands a dimension of the input tensor over multiple dimensions.
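`flatten` and `unflatten` are inverses over the affected dimensions. A small shape round-trip (the `tp` alias is assumed):

x = tp.randn(2, 3, 4)
y = x.flatten(start_dim=1)    # shape (2, 12)
z = y.unflatten(1, (3, 4))    # shape (2, 3, 4) again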
Functions
adaptive_avg_pool2d() [source]
adaptive_avg_pool2d(input, output_size)

adaptive_max_pool2d() [source]
adaptive_max_pool2d(input, output_size)

alpha_dropout() [source]
alpha_dropout(input, p=0.5, training=True, inplace=False)

avg_pool2d() [source]
avg_pool2d(input, kernel_size, stride=None, padding=0, ceil_mode=False, count_include_pad=True, divisor_override=None)

batch_norm() [source]
batch_norm(input, running_mean=None, running_var=None, weight=None, bias=None, training=False, momentum=0.1, eps=1e-05)

bilinear() [source]
bilinear(input1, input2, weight, bias=None)

conv1d() [source]
conv1d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

Applies a 1D convolution over an input signal composed of several input planes.
See `tensorplay.nn.Conv1d` for details and output shape.
Args
- input: input tensor of shape (minibatch, in_channels, iW)
- weight: filters of shape (out_channels, in_channels / groups, kW)
- bias: optional bias of shape (out_channels). Default: None
- stride: the stride of the convolving kernel. Can be a single number or a one-element tuple (sW,). Default: 1
- padding: implicit paddings on both sides of the input. Can be a single number or a one-element tuple (padW,). Default: 0
- dilation: the spacing between kernel elements. Can be a single number or a one-element tuple (dW,). Default: 1
- groups: split input into groups; in_channels should be divisible by the number of groups. Default: 1
Examples
import tensorplay as tp
import tensorplay.nn.functional as F

inputs = tp.randn(33, 16, 30)   # (minibatch=33, in_channels=16, iW=30)
filters = tp.randn(20, 16, 5)   # (out_channels=20, in_channels=16, kW=5)
F.conv1d(inputs, filters)

conv2d() [source]
conv2d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

Applies a 2D convolution over an input image composed of several input planes.
See `tensorplay.nn.Conv2d` for details and output shape.
Args
- input: input tensor of shape (minibatch, in_channels, iH, iW)
- weight: filters of shape (out_channels, in_channels / groups, kH, kW)
- bias: optional bias tensor of shape (out_channels). Default: None
- stride: the stride of the convolving kernel. Can be a single number or a tuple (sH, sW). Default: 1
- padding: implicit paddings on both sides of the input. Can be a single number or a tuple (padH, padW). Default: 0
- dilation: the spacing between kernel elements. Can be a single number or a tuple (dH, dW). Default: 1
- groups: split input into groups; both in_channels and out_channels should be divisible by the number of groups. Default: 1
Examples
# With square kernels and equal stride
filters = tp.randn(8, 4, 3, 3)
inputs = tp.randn(1, 4, 5, 5)
F.conv2d(inputs, filters, padding=1)

conv3d() [source]
conv3d(input, weight, bias=None, stride=1, padding=0, dilation=1, groups=1)

Applies a 3D convolution over an input image composed of several input planes.
See `tensorplay.nn.Conv3d` for details and output shape.
Args
- input: input tensor of shape (minibatch, in_channels, iD, iH, iW)
- weight: filters of shape (out_channels, in_channels / groups, kD, kH, kW)
- bias: optional bias tensor of shape (out_channels). Default: None
- stride: the stride of the convolving kernel. Can be a single number or a tuple (sD, sH, sW). Default: 1
- padding: implicit paddings on both sides of the input. Can be a single number or a tuple (padD, padH, padW). Default: 0
- dilation: the spacing between kernel elements. Can be a single number or a tuple (dD, dH, dW). Default: 1
- groups: split input into groups; both in_channels and out_channels should be divisible by the number of groups. Default: 1
Examples
# With cubic kernels and equal stride
filters = tp.randn(8, 4, 3, 3, 3)
inputs = tp.randn(1, 4, 5, 5, 5)
F.conv3d(inputs, filters, padding=1)

conv_transpose2d() [source]
conv_transpose2d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)

conv_transpose3d() [source]
conv_transpose3d(input, weight, bias=None, stride=1, padding=0, output_padding=0, groups=1, dilation=1)
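Neither transposed variant is documented beyond its signature here; the sketch below assumes the usual transposed-convolution weight layout of (in_channels, out_channels / groups, kH, kW), reusing the `tp`/`F` aliases from the conv1d example:

inputs = tp.randn(1, 4, 5, 5)
weights = tp.randn(4, 8, 3, 3)                   # (in_channels, out_channels, kH, kW), assumed layout
F.conv_transpose2d(inputs, weights, padding=1)   # output spatial size stays 5x5 here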
cross_entropy() [source]
cross_entropy(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean', label_smoothing=0.0)
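A short sketch with raw class scores and integer class targets (`tp.tensor` as an index-tensor constructor is an assumption):

logits = tp.randn(3, 5)                  # (minibatch, num_classes), unnormalized scores
target = tp.tensor([1, 0, 4])            # one class index per sample
loss = F.cross_entropy(logits, target)   # scalar, reduction='mean' by default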
dropout() [source]
dropout(input, p=0.5, training=True, inplace=False)

dropout2d() [source]
dropout2d(input, p=0.5, training=True, inplace=False)

dropout3d() [source]
dropout3d(input, p=0.5, training=True, inplace=False)
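A quick sketch of the `training` switch, assuming the standard inverted-dropout behavior in which kept elements are rescaled by 1/(1-p):

x = tp.randn(2, 8)
y = F.dropout(x, p=0.5, training=True)    # each element zeroed with probability 0.5
z = F.dropout(x, p=0.5, training=False)   # evaluation mode: x passes through unchanged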
embedding() [source]
embedding(input, weight, padding_idx=None, max_norm=None, norm_type=2.0, scale_grad_by_freq=False, sparse=False)
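A small lookup sketch; `weight` acts as a (num_embeddings, embedding_dim) table indexed by integer ids (`tp.tensor` is an assumed constructor):

weight = tp.randn(10, 3)                  # 10 embeddings of dimension 3
ids = tp.tensor([[1, 2, 4], [4, 3, 9]])   # arbitrary index shape
F.embedding(ids, weight)                  # result shape (2, 3, 3)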
flatten() [source]
flatten(input, start_dim=0, end_dim=-1)

group_norm() [source]
group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)

instance_norm() [source]
instance_norm(input, running_mean=None, running_var=None, weight=None, bias=None, use_input_stats=True, momentum=0.1, eps=1e-05)

layer_norm() [source]
layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)
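A minimal sketch, assuming the usual convention that `normalized_shape` must match the trailing dimensions of the input:

x = tp.randn(20, 5, 10)
F.layer_norm(x, normalized_shape=(10,))     # normalize over the last dimension
F.layer_norm(x, normalized_shape=(5, 10))   # normalize over the last two dimensions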
linear() [source]
linear(input, weight, bias=None)
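A short sketch, assuming the common (out_features, in_features) weight layout so that the result is input @ weight.T + bias:

x = tp.randn(128, 20)
weight = tp.randn(30, 20)   # (out_features, in_features), assumed layout
bias = tp.randn(30)
F.linear(x, weight, bias)   # result shape (128, 30)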
log_softmax() [source]
log_softmax(input, dim=None, dtype=None)

max_pool2d() [source]
max_pool2d(input, kernel_size, stride=None, padding=0, dilation=1, ceil_mode=False, return_indices=False)
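A short sketch, assuming `stride` falls back to `kernel_size` when left as None:

x = tp.randn(1, 3, 32, 32)
y = F.max_pool2d(x, kernel_size=2)                  # assumed stride 2 -> (1, 3, 16, 16)
y2, idx = F.max_pool2d(x, 2, return_indices=True)   # also return the argmax indices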
mse_loss() [source]
mse_loss(input, target, reduction='mean')

nll_loss() [source]
nll_loss(input, target, weight=None, size_average=None, ignore_index=-100, reduce=None, reduction='mean')
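nll_loss expects log-probabilities, so it is typically paired with log_softmax; cross_entropy above fuses the two steps. A sketch (`tp.tensor` is an assumed constructor):

logits = tp.randn(3, 5)
target = tp.tensor([1, 0, 4])
log_probs = F.log_softmax(logits, dim=1)
loss = F.nll_loss(log_probs, target)   # equivalent to F.cross_entropy(logits, target)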
pad() [source]
pad(input, pad, mode='constant', value=0)
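A sketch, assuming the common convention that `pad` lists (left, right) amounts starting from the last dimension:

x = tp.randn(1, 3, 4, 4)
F.pad(x, (1, 1))                               # pad last dim -> shape (1, 3, 4, 6)
F.pad(x, (1, 1, 2, 2))                         # pad last two dims -> shape (1, 3, 8, 6)
F.pad(x, (2, 2), mode='constant', value=0.5)   # pad with a non-zero constant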
prelu() [source]
prelu(input, weight)

relu() [source]
relu(input, inplace=False)

silu() [source]
silu(input: tensorplay._C.TensorBase, inplace: bool = False) -> tensorplay._C.TensorBase

Applies the Sigmoid Linear Unit (SiLU) function element-wise.
The SiLU function is also known as the swish function.
silu(x) = x * σ(x), where σ(x) is the logistic sigmoid.

INFO
See Gaussian Error Linear Units (GELUs) (https://arxiv.org/abs/1606.08415), where the SiLU (Sigmoid Linear Unit) was originally coined, and see Sigmoid-Weighted Linear Units for Neural Network Function Approximation in Reinforcement Learning (https://arxiv.org/abs/1702.03118) and Swish: a Self-Gated Activation Function (https://arxiv.org/abs/1710.05941v1), where the SiLU was experimented with later.
See `tensorplay.nn.SiLU` for more details.
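A short element-wise sketch, reusing the `tp`/`F` aliases:

x = tp.randn(4)
F.silu(x)                 # x * sigmoid(x), element-wise
F.silu(x, inplace=True)   # writes the result into x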
softmax() [source]
softmax(input, dim=None, dtype=None)
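A short sketch; entries are non-negative and sum to 1 along the chosen dim:

x = tp.randn(2, 5)
p = F.softmax(x, dim=1)   # each row of p sums to 1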
threshold() [source]
threshold(input: tensorplay._C.TensorBase, threshold: float, value: float, inplace: bool = False) -> tensorplay._C.TensorBase

Applies a threshold to each element of the input Tensor.
See `tensorplay.nn.Threshold` for more details.
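A short sketch, assuming the semantics of `tensorplay.nn.Threshold` (y = x if x > threshold, else value); `tp.tensor` is an assumed constructor:

x = tp.tensor([-1.0, 0.5, 2.0])
F.threshold(x, threshold=1.0, value=0.0)   # -> [0.0, 0.0, 2.0]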
