tensorplay

The tensorplay package offers a simple deep-learning framework designed for educational purposes and small-scale experiments. It defines Tensor, a data structure for multidimensional arrays that encapsulates the mathematical operations defined on them.

It has a CUDA counterpart that enables you to run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.

Classes

class DType

python
DType(*values)

Bases: Enum

class Device

python
Device(*args, **kwargs)

class DeviceType

python
DeviceType(*values)

Bases: Enum

class Generator

python
Generator(*args, **kwargs)

class Scalar

python
Scalar(*args, **kwargs)

class Size

python
Size(*args, **kwargs)

class Tensor

python
Tensor(*args, **kwargs)

Methods

cpu(self) [source]

Returns a copy of this object in CPU memory. If this object is already in CPU memory, then no copy is performed and the original object is returned.


cuda(self, device=None, non_blocking=False) [source]

Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.


double(self) [source]


flatten(self, start_dim=0, end_dim=-1) [source]

Flattens a contiguous range of dims.
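
As a shape-level illustration (plain Python, not the tensorplay implementation; the helper name here is hypothetical), flatten(start_dim, end_dim) replaces the sizes in that range with their product:

```python
from math import prod

def flattened_shape(shape, start_dim=0, end_dim=-1):
    """Compute the shape produced by flatten(start_dim, end_dim)."""
    n = len(shape)
    start = start_dim % n  # support negative dims
    end = end_dim % n
    # the flattened range collapses to the product of its sizes
    return shape[:start] + (prod(shape[start:end + 1]),) + shape[end + 1:]

print(flattened_shape((2, 3, 4, 5), 1, 2))  # (2, 12, 5)
```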


float(self) [source]


int(self) [source]


is_float(self) -> bool [source]

Check if tensor is floating point.


long(self) [source]


ndimension(self) -> int [source]

Alias for dim()


t(self) [source]

Returns the transpose of the tensor. Aliased to transpose(0, 1) to ensure correct autograd behavior (TransposeBackward).
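
For a 2-D tensor, swapping dims 0 and 1 exchanges rows and columns. A plain-Python sketch of the effect (not tensorplay code; the helper name is hypothetical):

```python
def transpose2d(rows):
    """Swap rows and columns of a 2-D nested list, like transpose(0, 1)."""
    return [list(col) for col in zip(*rows)]

print(transpose2d([[1, 2, 3], [4, 5, 6]]))  # [[1, 4], [2, 5], [3, 6]]
```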


type(self, dtype=None, non_blocking=False, **kwargs) [source]

Returns the type if dtype is not provided, else casts this object to the specified type.


unflatten(self, dim, sizes) [source]

Expands a dimension of the input tensor over multiple dimensions.
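
Shape-wise, unflatten is the inverse of flatten: the size at dim is replaced by sizes, whose product must equal the original size. A plain-Python sketch (hypothetical helper, not the tensorplay implementation):

```python
from math import prod

def unflattened_shape(shape, dim, sizes):
    """Compute the shape produced by unflatten(dim, sizes)."""
    d = dim % len(shape)
    # the new sizes must multiply back to the original dimension
    assert prod(sizes) == shape[d], "sizes must multiply to the original dim"
    return shape[:d] + tuple(sizes) + shape[d + 1:]

print(unflattened_shape((2, 12, 5), 1, (3, 4)))  # (2, 3, 4, 5)
```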


class device

python
device(*args, **kwargs)

class dtype

python
dtype(*values)

Bases: Enum

class enable_grad [source]

python
enable_grad(orig_func=None)

Bases: _NoParamDecoratorContextManager

Context-manager that enables gradient calculation.

Enables gradient calculation, if it has been disabled via no_grad or set_grad_enabled.

This context manager is thread local; it will not affect computation in other threads.

Also functions as a decorator.

INFO

enable_grad is one of several mechanisms that can enable or disable gradients locally; see the documentation on locally disabling gradient computation for more information on how they compare.

INFO

This API does not apply to forward-mode AD.

Example

python
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> with tensorplay.no_grad():
...     with tensorplay.enable_grad():
...         y = x * 2
>>> y.requires_grad
True
>>> y.backward()
>>> x.grad
tensor([2.])
>>> @tensorplay.enable_grad()
... def doubler(x):
...     return x * 2
>>> with tensorplay.no_grad():
...     z = doubler(x)
>>> z.requires_grad
True
>>> @tensorplay.enable_grad()
... def tripler(x):
...     return x * 3
>>> with tensorplay.no_grad():
...     z = tripler(x)
>>> z.requires_grad
True

Methods

clone(self) [source]


class no_grad [source]

python
no_grad() -> None

Bases: _NoParamDecoratorContextManager

Context-manager that disables gradient calculation.

Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.

In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. There is one exception: factory functions, i.e. functions that create a new Tensor and take a requires_grad kwarg, are NOT affected by this mode.

This context manager is thread local; it will not affect computation in other threads.

Also functions as a decorator.

INFO

no_grad is one of several mechanisms that can enable or disable gradients locally; see the documentation on locally disabling gradient computation for more information on how they compare.

INFO

This API does not apply to forward-mode AD. If you want to disable forward AD for a computation, you can unpack your dual tensors.

Example

python
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> with tensorplay.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> @tensorplay.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
>>> @tensorplay.no_grad()
... def tripler(x):
...     return x * 3
>>> z = tripler(x)
>>> z.requires_grad
False
>>> # factory function exception
>>> with tensorplay.no_grad():
...     a = tensorplay.nn.Parameter(tensorplay.rand(10))
>>> a.requires_grad
True

Methods

__init__(self) -> None [source]

Initialize self. See help(type(self)) for accurate signature.


clone(self) [source]


class set_grad_enabled [source]

python
set_grad_enabled(mode: bool) -> None

Bases: _DecoratorContextManager

Context-manager that sets gradient calculation on or off.

set_grad_enabled will enable or disable grads based on its argument mode. It can be used as a context-manager or as a function.

This context manager is thread local; it will not affect computation in other threads.

Args

  • mode (bool): Flag indicating whether to enable (True) or disable (False) gradient calculation. This can be used to conditionally enable gradients.

INFO

set_grad_enabled is one of several mechanisms that can enable or disable gradients locally; see the documentation on locally disabling gradient computation for more information on how they compare.

INFO

This API does not apply to forward-mode AD.

Example

python
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> is_train = False
>>> with tensorplay.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> _ = tensorplay.set_grad_enabled(True)
>>> y = x * 2
>>> y.requires_grad
True
>>> _ = tensorplay.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False

Methods

__init__(self, mode: bool) -> None [source]

Initialize self. See help(type(self)) for accurate signature.


clone(self) -> 'set_grad_enabled' [source]

Creates a copy of this context manager.


Functions

abs() [source]

python
abs(input)

acos() [source]

python
acos(input)

acosh() [source]

python
acosh(input)

adaptive_avg_pool2d() [source]

python
adaptive_avg_pool2d(input, output_size)

adaptive_avg_pool2d_backward() [source]

python
adaptive_avg_pool2d_backward(grad_output, input)

adaptive_max_pool2d() [source]

python
adaptive_max_pool2d(input, output_size)

adaptive_max_pool2d_backward() [source]

python
adaptive_max_pool2d_backward(grad_output, input)

all() [source]

python
all(input)

allclose() [source]

python
allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False)

Checks whether all elements of input and other are close to each other.

Args

  • input (Tensor)
  • other (Tensor)
  • rtol (float): relative tolerance
  • atol (float): absolute tolerance
  • equal_nan (bool): not supported yet
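
Assuming the usual closeness convention |input - other| <= atol + rtol * |other| (an assumption; the source does not state the formula), an element-wise plain-Python sketch:

```python
def allclose_sketch(xs, ys, rtol=1e-05, atol=1e-08):
    """Element-wise |x - y| <= atol + rtol * |y|, reduced with all()."""
    return all(abs(x - y) <= atol + rtol * abs(y) for x, y in zip(xs, ys))

print(allclose_sketch([1.0, 2.0], [1.0, 2.0000001]))  # True
print(allclose_sketch([1.0, 2.0], [1.1, 2.0]))        # False
```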

angle() [source]

python
angle(input)

any() [source]

python
any(input)

arange() [source]

python
arange(*args, dtype=tensorplay.undefined, device=None, requires_grad=False)

argmax() [source]

python
argmax(input, dim=None, keepdim=False)

argmin() [source]

python
argmin(input, dim=None, keepdim=False)

as_tensor()

python
as_tensor(*args, **kwargs)

as_tensor(data: object, dtype: tensorplay.DType | None = None, device: tensorplay.Device | None = None) -> object

Converts data into a tensor, sharing data and preserving autograd history if possible.

asin() [source]

python
asin(input)

asinh() [source]

python
asinh(input)

atan() [source]

python
atan(input)

atan2() [source]

python
atan2(input, other)

atanh() [source]

python
atanh(input)

avg_pool2d() [source]

python
avg_pool2d(input, kernel_size, stride={}, padding={0}, ceil_mode=False, count_include_pad=True, divisor_override=None)

avg_pool2d_backward() [source]

python
avg_pool2d_backward(grad_output, input, kernel_size, stride={}, padding={0}, ceil_mode=False, count_include_pad=True, divisor_override=None)

batch_norm() [source]

python
batch_norm(input, weight, bias, running_mean, running_var, training, momentum, eps)

batch_norm_backward() [source]

python
batch_norm_backward(grad_output, input, weight=None, running_mean=None, running_var=None, training=True, eps=1e-05)

bernoulli() [source]

python
bernoulli(input)

cat() [source]

python
cat(tensors, dim=0)

ceil() [source]

python
ceil(input)

chunk() [source]

python
chunk(input, chunks, dim=0)

clamp() [source]

python
clamp(input, min=None, max=None)

clamp_backward() [source]

python
clamp_backward(grad_output, input, min=None, max=None)

constant_pad_nd() [source]

python
constant_pad_nd(input, pad, value)

constant_pad_nd_backward() [source]

python
constant_pad_nd_backward(grad_output, pad)

conv1d() [source]

python
conv1d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv1d_grad_bias() [source]

python
conv1d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv1d_grad_input() [source]

python
conv1d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv1d_grad_weight() [source]

python
conv1d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv2d() [source]

python
conv2d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv2d_grad_bias() [source]

python
conv2d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv2d_grad_input() [source]

python
conv2d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv2d_grad_weight() [source]

python
conv2d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv3d() [source]

python
conv3d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv3d_grad_bias() [source]

python
conv3d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv3d_grad_input() [source]

python
conv3d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv3d_grad_weight() [source]

python
conv3d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv_transpose2d() [source]

python
conv_transpose2d(input, weight, bias={}, stride={1}, padding={0}, output_padding={0}, groups=1, dilation={1})

conv_transpose2d_grad_bias() [source]

python
conv_transpose2d_grad_bias(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose2d_grad_input() [source]

python
conv_transpose2d_grad_input(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose2d_grad_weight() [source]

python
conv_transpose2d_grad_weight(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d() [source]

python
conv_transpose3d(input, weight, bias={}, stride={1}, padding={0}, output_padding={0}, groups=1, dilation={1})

conv_transpose3d_grad_bias() [source]

python
conv_transpose3d_grad_bias(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d_grad_input() [source]

python
conv_transpose3d_grad_input(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d_grad_weight() [source]

python
conv_transpose3d_grad_weight(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

cos() [source]

python
cos(input)

cosh() [source]

python
cosh(input)

default_generator()

python
default_generator(*args, **kwargs)

default_generator() -> tensorplay.Generator

embedding() [source]

python
embedding(weight, indices, padding_idx=-1, scale_grad_by_freq=False, sparse=False)

embedding_dense_backward() [source]

python
embedding_dense_backward(grad_output, indices, num_weights, padding_idx, scale_grad_by_freq)

empty() [source]

python
empty(*size, dtype=tensorplay.float32, device=None, requires_grad=False)

empty_like() [source]

python
empty_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

eq() [source]

python
eq(input, other)

exp() [source]

python
exp(input)

eye() [source]

python
eye(n, m=-1, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

floor() [source]

python
floor(input)

from_dlpack()

python
from_dlpack(*args, **kwargs)

from_dlpack(obj: object) -> tensorplay._C.TensorBase

full() [source]

python
full(size, fill_value, dtype=tensorplay.undefined, device=Ellipsis, requires_grad=False)

full_like() [source]

python
full_like(input, fill_value, dtype=tensorplay.undefined, device=None, requires_grad=False)

ge() [source]

python
ge(input, other)

gelu() [source]

python
gelu(input)

group_norm() [source]

python
group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)

group_norm_backward() [source]

python
group_norm_backward(grad_output, input, num_groups, weight=None, bias=None, eps=1e-05)

gt() [source]

python
gt(input, other)

initial_seed()

python
initial_seed(*args, **kwargs)

initial_seed() -> int

instance_norm() [source]

python
instance_norm(input, weight=None, bias=None, running_mean=None, running_var=None, use_input_stats=True, momentum=0.1, eps=1e-05)

instance_norm_backward() [source]

python
instance_norm_backward(grad_output, input, weight=None, bias=None, running_mean=None, running_var=None, use_input_stats=True, eps=1e-05)

is_grad_enabled() [source]

python
is_grad_enabled()

layer_norm() [source]

python
layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)

layer_norm_backward() [source]

python
layer_norm_backward(grad_output, input, normalized_shape, weight=None, bias=None, eps=1e-05)

le() [source]

python
le(input, other)

lerp() [source]

python
lerp(input, end, weight)

linspace() [source]

python
linspace(start, end, steps, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

load() [source]

python
load(f, map_location=None, pickle_module=pickle, **pickle_load_args)

Loads an object saved with tensorplay.save() from a file. Supports .tpm, .safetensors, and .pth (via torch).

Args

  • f: a file-like object (has to implement read, readline, tell, and seek), or a string containing a file name.
  • map_location: a function, tensorplay.device, string, or a dict specifying how to remap storage locations
  • pickle_module: module used for unpickling metadata and objects
  • pickle_load_args: (optional) keyword arguments passed to pickle_module.load
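
Since f may be a path or any file-like object supporting read, readline, tell, and seek, an in-memory round trip is possible. Sketched here with plain pickle as a stand-in for tensorplay's format (an assumption for illustration, not tensorplay code):

```python
import io
import pickle

buf = io.BytesIO()              # file-like: supports read, tell, seek
pickle.dump({"step": 7}, buf)   # stands in for tensorplay.save(obj, f)
buf.seek(0)
obj = pickle.load(buf)          # stands in for tensorplay.load(f)
print(obj)  # {'step': 7}
```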

log() [source]

python
log(input)

log_softmax() [source]

python
log_softmax(input, dim, dtype=tensorplay.undefined)

logspace() [source]

python
logspace(start, end, steps, base=10.0, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

lt() [source]

python
lt(input, other)

manual_seed()

python
manual_seed(*args, **kwargs)

manual_seed(seed: int) -> None

masked_select() [source]

python
masked_select(input, mask)

matmul() [source]

python
matmul(input, other)

max() [source]

python
max(input)

max_pool2d() [source]

python
max_pool2d(input, kernel_size, stride={}, padding={0}, dilation={1}, ceil_mode=False)

max_pool2d_backward() [source]

python
max_pool2d_backward(grad_output, input, kernel_size, stride={}, padding={0}, dilation={1}, ceil_mode=False)

mean() [source]

python
mean(input, dtype=tensorplay.undefined)

median() [source]

python
median(input)

min() [source]

python
min(input)

mm() [source]

python
mm(input, other)

mse_loss() [source]

python
mse_loss(input, target, reduction=1)

mse_loss_backward() [source]

python
mse_loss_backward(grad_output, input, target, reduction=1)

ne() [source]

python
ne(input, other)

neg() [source]

python
neg(input)

nll_loss() [source]

python
nll_loss(input, target, weight=None, reduction=1, ignore_index=-100)

nll_loss_backward() [source]

python
nll_loss_backward(grad_output, input, target, weight=None, reduction=1, ignore_index=-100, total_weight={})

norm() [source]

python
norm(input, p=2.0)

normal() [source]

python
normal(mean, std)

ones() [source]

python
ones(*size, dtype=tensorplay.float32, device=None, requires_grad=False)

ones_like() [source]

python
ones_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

permute() [source]

python
permute(input, dims)

permute_backward() [source]

python
permute_backward(grad_output, input, dims)

poisson() [source]

python
poisson(input)

pow() [source]

python
pow(input, exponent)

prod() [source]

python
prod(input, dtype=tensorplay.undefined)

rand() [source]

python
rand(*size, dtype=tensorplay.float32, device=None, requires_grad=False)

rand_like() [source]

python
rand_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

randint() [source]

python
randint(low, high, size, dtype=tensorplay.int64, device=Ellipsis, requires_grad=False)

randint_like() [source]

python
randint_like(input, low, high, dtype=tensorplay.undefined, device=None, requires_grad=False)

randn() [source]

python
randn(*size, dtype=tensorplay.float32, device=None, requires_grad=False)

randn_like() [source]

python
randn_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

randperm() [source]

python
randperm(n, dtype=tensorplay.int64, device=Ellipsis, requires_grad=False)

relu() [source]

python
relu(input)

relu_() [source]

python
relu_(input)

reshape() [source]

python
reshape(input, shape)

round() [source]

python
round(input)

rsqrt() [source]

python
rsqrt(input)

save() [source]

python
save(obj, f, pickle_module=pickle, pickle_protocol=2, _use_new_zipfile_serialization=True)

Saves an object to a disk file. Supports .tpm (TensorPlay Model) and .safetensors (Safetensors).

seed()

python
seed(*args, **kwargs)

seed(seed: int) -> None

set_printoptions()

python
set_printoptions(*args, **kwargs)

set_printoptions(edge_items: int = -1, threshold: int = -1, precision: int = -1, linewidth: int = -1) -> None

Set print options

sigmoid() [source]

python
sigmoid(input)

sign() [source]

python
sign(input)

silu() [source]

python
silu(input)

sin() [source]

python
sin(input)

sinh() [source]

python
sinh(input)

softmax() [source]

python
softmax(input, dim, dtype=tensorplay.undefined)

split() [source]

python
split(input, split_size, dim=0)

sqrt() [source]

python
sqrt(input)

square() [source]

python
square(input)

squeeze() [source]

python
squeeze(input)

squeeze_backward() [source]

python
squeeze_backward(grad_output, input)

stack() [source]

python
stack(tensors, dim=0)

std() [source]

python
std(input, correction=1)

sum() [source]

python
sum(input, dtype=tensorplay.undefined)

t() [source]

python
t(input)

tan() [source]

python
tan(input)

tanh() [source]

python
tanh(input)

tensor()

python
tensor(*args, **kwargs)

tensor(data: object, *, dtype: tensorplay.DType | None = None, device: tensorplay.Device | None = None, requires_grad: bool = False) -> tensorplay._C.TensorBase

tensor(data, *, dtype: Optional[DType] = None, device: Optional[Device] = None, requires_grad: bool = False) -> Tensor

threshold_backward() [source]

python
threshold_backward(grad_output, output, threshold)

to_dlpack()

python
to_dlpack(*args, **kwargs)

to_dlpack(obj: object, stream: int | None = None) -> types.CapsuleType

transpose() [source]

python
transpose(input, dim0, dim1)

unbind() [source]

python
unbind(input, dim=0)

unsqueeze() [source]

python
unsqueeze(input, dim)

var() [source]

python
var(input, correction=1)

zeros() [source]

python
zeros(*size, dtype=tensorplay.float32, device=None, requires_grad=False)

zeros_like() [source]

python
zeros_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

Released under the Apache 2.0 License.
