tensorplay
The tensorplay package offers a simple deep-learning framework designed for educational purposes and small-scale experiments. It defines Tensor, a data structure for multidimensional arrays, and encapsulates the mathematical operations that act on it.
It also has a CUDA counterpart that lets you run your tensor computations on an NVIDIA GPU with compute capability >= 3.0.
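For example, a short session might look like this (a sketch built from the factory functions and Tensor methods documented below; the last line assumes a CUDA-capable GPU):
>>> x = tensorplay.randn(2, 3, requires_grad=True)
>>> y = tensorplay.sum(x * 2)
>>> y.backward()
>>> x.grad is not None        # backward() populated the gradient
True
>>> x_gpu = x.cuda()          # run further computations on the GPU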
Classes
class DType
DType(*values)
Bases: Enum
class Device
Device(*args, **kwargs)
class DeviceType
DeviceType(*values)
Bases: Enum
class Generator
Generator(*args, **kwargs)
class Scalar
Scalar(*args, **kwargs)
class Size
Size(*args, **kwargs)
class Tensor
Tensor(*args, **kwargs)
Methods
cpu(self) [source]
Returns a copy of this object in CPU memory. If this object is already in CPU memory, then no copy is performed and the original object is returned.
cuda(self, device=None, non_blocking=False) [source]
Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, then no copy is performed and the original object is returned.
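As an illustration (an NVIDIA GPU is assumed to be available; device and non_blocking follow the signature above):
>>> t = tensorplay.ones(3)
>>> t_gpu = t.cuda()       # copy to the current CUDA device
>>> same = t_gpu.cuda()    # already on the right device: the original object is returned
>>> t_cpu = t_gpu.cpu()    # copy back to CPU memory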
double(self) [source]
flatten(self, start_dim=0, end_dim=-1) [source]
Flattens a contiguous range of dims.
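For example (a sketch; the shapes in the comments assume the usual row-major flattening):
>>> x = tensorplay.zeros(2, 3, 4)
>>> y = x.flatten()                 # all dims collapsed: shape (24,)
>>> z = x.flatten(start_dim=1)      # keep dim 0: shape (2, 12)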
float(self) [source]
int(self) [source]
is_float(self) -> bool [source]
Check if tensor is floating point.
long(self) [source]
ndimension(self) -> int [source]
Alias for dim()
t(self) [source]
Returns the transpose of the tensor. Aliased to transpose(0, 1) to ensure correct autograd behavior (TransposeBackward).
type(self, dtype=None, non_blocking=False, **kwargs) [source]
Returns the type if dtype is not provided, else casts this object to the specified type.
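For instance (tensorplay.int64 is taken from the factory-function defaults below; the exact form of the returned type is not documented here):
>>> x = tensorplay.zeros(2)
>>> t = x.type()                    # no dtype given: returns the current type
>>> y = x.type(tensorplay.int64)    # casts this tensor to int64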
unflatten(self, dim, sizes) [source]
Expands a dimension of the input tensor over multiple dimensions.
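For example, undoing a flatten (whether sizes is passed as a tuple or a list is an assumption):
>>> x = tensorplay.zeros(2, 12)
>>> y = x.unflatten(1, (3, 4))      # shape becomes (2, 3, 4)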
class device
device(*args, **kwargs)
class dtype
dtype(*values)
Bases: Enum
class enable_grad [source]
enable_grad(orig_func=None)
Bases: _NoParamDecoratorContextManager
Context-manager that enables gradient calculation.
Enables gradient calculation if it has been disabled via no_grad or set_grad_enabled.
This context manager is thread local; it will not affect computation in other threads.
Also functions as a decorator.
INFO
enable_grad is one of several mechanisms that can enable or disable gradients locally; see the documentation on locally disabling gradient computation for more information on how they compare.
INFO
This API does not apply to forward-mode AD.
Example
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> with tensorplay.no_grad():
...     with tensorplay.enable_grad():
...         y = x * 2
>>> y.requires_grad
True
>>> y.backward()
>>> x.grad
tensor([2.])
>>> @tensorplay.enable_grad()
... def doubler(x):
...     return x * 2
>>> with tensorplay.no_grad():
...     z = doubler(x)
>>> z.requires_grad
True
>>> @tensorplay.enable_grad()
... def tripler(x):
...     return x * 3
>>> with tensorplay.no_grad():
...     z = tripler(x)
>>> z.requires_grad
True
class no_grad [source]
no_grad() -> None
Bases: _NoParamDecoratorContextManager
Context-manager that disables gradient calculation.
Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It will reduce memory consumption for computations that would otherwise have requires_grad=True.
In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. There is one exception: factory functions, i.e. functions that create a new Tensor and take a requires_grad kwarg, are NOT affected by this mode.
This context manager is thread local; it will not affect computation in other threads.
Also functions as a decorator.
INFO
no_grad is one of several mechanisms that can enable or disable gradients locally; see the documentation on locally disabling gradient computation for more information on how they compare.
INFO
This API does not apply to forward-mode AD. If you want to disable forward AD for a computation, you can unpack your dual tensors.
Example
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> with tensorplay.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> @tensorplay.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
>>> @tensorplay.no_grad()
... def tripler(x):
...     return x * 3
>>> z = tripler(x)
>>> z.requires_grad
False
>>> # factory function exception
>>> with tensorplay.no_grad():
...     a = tensorplay.nn.Parameter(tensorplay.rand(10))
>>> a.requires_grad
True
Methods
__init__(self) -> None [source]
Initialize self. See help(type(self)) for accurate signature.
clone(self) [source]
class set_grad_enabled [source]
set_grad_enabled(mode: bool) -> None
Bases: _DecoratorContextManager
Context-manager that sets gradient calculation on or off.
set_grad_enabled will enable or disable grads based on its argument mode. It can be used as a context-manager or as a function.
This context manager is thread local; it will not affect computation in other threads.
Args
- mode (bool): Flag whether to enable grad (True) or disable (False). This can be used to conditionally enable gradients.
INFO
set_grad_enabled is one of several mechanisms that can enable or disable gradients locally; see the documentation on locally disabling gradient computation for more information on how they compare.
INFO
This API does not apply to forward-mode AD.
Example
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> is_train = False
>>> with tensorplay.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> _ = tensorplay.set_grad_enabled(True)
>>> y = x * 2
>>> y.requires_grad
True
>>> _ = tensorplay.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
Methods
__init__(self, mode: bool) -> None [source]
Initialize self. See help(type(self)) for accurate signature.
clone(self) -> 'set_grad_enabled' [source]
Create a copy of this class
Functions
abs() [source]
abs(input)
acos() [source]
acos(input)
acosh() [source]
acosh(input)
adaptive_avg_pool2d() [source]
adaptive_avg_pool2d(input, output_size)
adaptive_avg_pool2d_backward() [source]
adaptive_avg_pool2d_backward(grad_output, input)
adaptive_max_pool2d() [source]
adaptive_max_pool2d(input, output_size)
adaptive_max_pool2d_backward() [source]
adaptive_max_pool2d_backward(grad_output, input)
all() [source]
all(input)
allclose() [source]
allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False)
Checks whether input and other are elementwise close to each other. input: Tensor, other: Tensor, rtol: float, atol: float, equal_nan: bool (not supported yet).
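A small usage sketch (assuming the conventional elementwise test |input - other| <= atol + rtol * |other| and a plain bool return value):
>>> a = tensorplay.tensor([1.0, 2.0])
>>> b = tensorplay.tensor([1.0, 2.00001])
>>> tensorplay.allclose(a, b)                        # within the default tolerances
True
>>> tensorplay.allclose(a, b, rtol=0.0, atol=1e-08)
False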
angle() [source]
angle(input)
any() [source]
any(input)
arange() [source]
arange(*args, dtype=tensorplay.undefined, device=None, requires_grad=False)
argmax() [source]
argmax(input, dim=None, keepdim=False)
argmin() [source]
argmin(input, dim=None, keepdim=False)
as_tensor()
as_tensor(data: object, dtype: tensorplay.DType | None = None, device: tensorplay.Device | None = None) -> object
Converts data into a tensor, sharing data and preserving autograd history if possible.
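For example (a sketch; whether memory is actually shared depends on the input type):
>>> data = [1.0, 2.0, 3.0]
>>> t = tensorplay.as_tensor(data)      # builds a new tensor from a Python list
>>> t2 = tensorplay.as_tensor(t)        # already a tensor: returned without copying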
asin() [source]
asin(input)
asinh() [source]
asinh(input)
atan() [source]
atan(input)
atan2() [source]
atan2(input, other)
atanh() [source]
atanh(input)
avg_pool2d() [source]
avg_pool2d(input, kernel_size, stride={}, padding={0}, ceil_mode=False, count_include_pad=True, divisor_override=None)
avg_pool2d_backward() [source]
avg_pool2d_backward(grad_output, input, kernel_size, stride={}, padding={0}, ceil_mode=False, count_include_pad=True, divisor_override=None)
batch_norm() [source]
batch_norm(input, weight, bias, running_mean, running_var, training, momentum, eps)
batch_norm_backward() [source]
batch_norm_backward(grad_output, input, weight=None, running_mean=None, running_var=None, training=True, eps=1e-05)
bernoulli() [source]
bernoulli(input)
cat() [source]
cat(tensors, dim=0)
ceil() [source]
ceil(input)
chunk() [source]
chunk(input, chunks, dim=0)
clamp() [source]
clamp(input, min=None, max=None)
clamp_backward() [source]
clamp_backward(grad_output, input, min=None, max=None)
constant_pad_nd() [source]
constant_pad_nd(input, pad, value)
constant_pad_nd_backward() [source]
constant_pad_nd_backward(grad_output, pad)
conv1d() [source]
conv1d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)
conv1d_grad_bias() [source]
conv1d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)
conv1d_grad_input() [source]
conv1d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)
conv1d_grad_weight() [source]
conv1d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)
conv2d() [source]
conv2d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)
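A shape sketch (NCHW input and OIHW weight layout are assumed here):
>>> x = tensorplay.randn(1, 3, 32, 32)    # batch 1, 3 channels, 32x32 image
>>> w = tensorplay.randn(8, 3, 3, 3)      # 8 filters of size 3x3 over 3 channels
>>> y = tensorplay.conv2d(x, w)           # stride 1, no padding: output 1x8x30x30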
conv2d_grad_bias() [source]
conv2d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)
conv2d_grad_input() [source]
conv2d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)
conv2d_grad_weight() [source]
conv2d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)
conv3d() [source]
conv3d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)
conv3d_grad_bias() [source]
conv3d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)
conv3d_grad_input() [source]
conv3d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)
conv3d_grad_weight() [source]
conv3d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)
conv_transpose2d() [source]
conv_transpose2d(input, weight, bias={}, stride={1}, padding={0}, output_padding={0}, groups=1, dilation={1})
conv_transpose2d_grad_bias() [source]
conv_transpose2d_grad_bias(grad_output, input, weight, stride, padding, output_padding, groups, dilation)
conv_transpose2d_grad_input() [source]
conv_transpose2d_grad_input(grad_output, input, weight, stride, padding, output_padding, groups, dilation)
conv_transpose2d_grad_weight() [source]
conv_transpose2d_grad_weight(grad_output, input, weight, stride, padding, output_padding, groups, dilation)
conv_transpose3d() [source]
conv_transpose3d(input, weight, bias={}, stride={1}, padding={0}, output_padding={0}, groups=1, dilation={1})
conv_transpose3d_grad_bias() [source]
conv_transpose3d_grad_bias(grad_output, input, weight, stride, padding, output_padding, groups, dilation)
conv_transpose3d_grad_input() [source]
conv_transpose3d_grad_input(grad_output, input, weight, stride, padding, output_padding, groups, dilation)
conv_transpose3d_grad_weight() [source]
conv_transpose3d_grad_weight(grad_output, input, weight, stride, padding, output_padding, groups, dilation)
cos() [source]
cos(input)
cosh() [source]
cosh(input)
default_generator()
default_generator() -> tensorplay.Generator
embedding() [source]
embedding(weight, indices, padding_idx=-1, scale_grad_by_freq=False, sparse=False)
embedding_dense_backward() [source]
embedding_dense_backward(grad_output, indices, num_weights, padding_idx, scale_grad_by_freq)
empty() [source]
empty(*size, dtype=tensorplay.float32, device=None, requires_grad=False)
empty_like() [source]
empty_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)
eq() [source]
eq(input, other)
exp() [source]
exp(input)
eye() [source]
eye(n, m=-1, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)
floor() [source]
floor(input)
from_dlpack()
from_dlpack(obj: object) -> tensorplay._C.TensorBase
full() [source]
full(size, fill_value, dtype=tensorplay.undefined, device=Ellipsis, requires_grad=False)
full_like() [source]
full_like(input, fill_value, dtype=tensorplay.undefined, device=None, requires_grad=False)
ge() [source]
ge(input, other)
gelu() [source]
gelu(input)
group_norm() [source]
group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)
group_norm_backward() [source]
group_norm_backward(grad_output, input, num_groups, weight=None, bias=None, eps=1e-05)
gt() [source]
gt(input, other)
initial_seed()
initial_seed() -> int
instance_norm() [source]
instance_norm(input, weight=None, bias=None, running_mean=None, running_var=None, use_input_stats=True, momentum=0.1, eps=1e-05)
instance_norm_backward() [source]
instance_norm_backward(grad_output, input, weight=None, bias=None, running_mean=None, running_var=None, use_input_stats=True, eps=1e-05)
is_grad_enabled() [source]
is_grad_enabled()
layer_norm() [source]
layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)
layer_norm_backward() [source]
layer_norm_backward(grad_output, input, normalized_shape, weight=None, bias=None, eps=1e-05)
le() [source]
le(input, other)
lerp() [source]
lerp(input, end, weight)
linspace() [source]
linspace(start, end, steps, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)
load() [source]
load(f, map_location=None, pickle_module=pickle, **pickle_load_args)
Loads an object saved with tensorplay.save() from a file. Supports .tpm, .safetensors, and .pth (via torch).
Args
- f: a file-like object (has to implement read, readline, tell, and seek), or a string containing a file name.
- map_location: a function, torch.device, string, or a dict specifying how to remap storage locations.
- pickle_module: module used for unpickling metadata and objects.
- pickle_load_args: (optional) keyword arguments passed to pickle_module.load.
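For example (the file name and the "cpu" map_location string are illustrative):
>>> state = tensorplay.load("model.tpm", map_location="cpu")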
log() [source]
log(input)
log_softmax() [source]
log_softmax(input, dim, dtype=tensorplay.undefined)
logspace() [source]
logspace(start, end, steps, base=10.0, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)
lt() [source]
lt(input, other)
manual_seed()
manual_seed(seed: int) -> None
masked_select() [source]
masked_select(input, mask)
matmul() [source]
matmul(input, other)
max() [source]
max(input)
max_pool2d() [source]
max_pool2d(input, kernel_size, stride={}, padding={0}, dilation={1}, ceil_mode=False)
max_pool2d_backward() [source]
max_pool2d_backward(grad_output, input, kernel_size, stride={}, padding={0}, dilation={1}, ceil_mode=False)
mean() [source]
mean(input, dtype=tensorplay.undefined)
median() [source]
median(input)
min() [source]
min(input)
mm() [source]
mm(input, other)
mse_loss() [source]
mse_loss(input, target, reduction=1)
mse_loss_backward() [source]
mse_loss_backward(grad_output, input, target, reduction=1)
ne() [source]
ne(input, other)
neg() [source]
neg(input)
nll_loss() [source]
nll_loss(input, target, weight=None, reduction=1, ignore_index=-100)
nll_loss_backward() [source]
nll_loss_backward(grad_output, input, target, weight=None, reduction=1, ignore_index=-100, total_weight={})
norm() [source]
norm(input, p=2.0)
normal() [source]
normal(mean, std)
ones() [source]
ones(*size, dtype=tensorplay.float32, device=None, requires_grad=False)
ones_like() [source]
ones_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)
permute() [source]
permute(input, dims)
permute_backward() [source]
permute_backward(grad_output, input, dims)
poisson() [source]
poisson(input)
pow() [source]
pow(input, exponent)
prod() [source]
prod(input, dtype=tensorplay.undefined)
rand() [source]
rand(*size, dtype=tensorplay.float32, device=None, requires_grad=False)
rand_like() [source]
rand_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)
randint() [source]
randint(low, high, size, dtype=tensorplay.int64, device=Ellipsis, requires_grad=False)
randint_like() [source]
randint_like(input, low, high, dtype=tensorplay.undefined, device=None, requires_grad=False)
randn() [source]
randn(*size, dtype=tensorplay.float32, device=None, requires_grad=False)
randn_like() [source]
randn_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)
randperm() [source]
randperm(n, dtype=tensorplay.int64, device=Ellipsis, requires_grad=False)
relu() [source]
relu(input)
relu_() [source]
relu_(input)
reshape() [source]
reshape(input, shape)
round() [source]
round(input)
rsqrt() [source]
rsqrt(input)
save() [source]
save(obj, f, pickle_module=pickle, pickle_protocol=2, _use_new_zipfile_serialization=True)
Saves an object to a disk file. Supports .tpm (TensorPlay Model) and .safetensors (Safetensors).
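A minimal round trip might look like this (the checkpoint name is illustrative; saving a plain dict of tensors is an assumption):
>>> tensorplay.save({"w": tensorplay.randn(2, 3)}, "checkpoint.tpm")
>>> restored = tensorplay.load("checkpoint.tpm")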
seed()
seed(seed: int) -> None
set_printoptions()
set_printoptions(edge_items: int = -1, threshold: int = -1, precision: int = -1, linewidth: int = -1) -> None
Set print options.
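For example (values are illustrative; what the -1 defaults mean is not documented here):
>>> tensorplay.set_printoptions(precision=4, linewidth=120)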
sigmoid() [source]
sigmoid(input)
sign() [source]
sign(input)
silu() [source]
silu(input)
sin() [source]
sin(input)
sinh() [source]
sinh(input)
softmax() [source]
softmax(input, dim, dtype=tensorplay.undefined)
split() [source]
split(input, split_size, dim=0)
sqrt() [source]
sqrt(input)
square() [source]
square(input)
squeeze() [source]
squeeze(input)
squeeze_backward() [source]
squeeze_backward(grad_output, input)
stack() [source]
stack(tensors, dim=0)
std() [source]
std(input, correction=1)
sum() [source]
sum(input, dtype=tensorplay.undefined)
t() [source]
t(input)
tan() [source]
tan(input)
tanh() [source]
tanh(input)
tensor()
tensor(data: object, *, dtype: tensorplay.DType | None = None, device: tensorplay.Device | None = None, requires_grad: bool = False) -> tensorplay._C.TensorBase
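For instance (tensorplay.float32 is taken from the factory-function defaults above):
>>> x = tensorplay.tensor([[1, 2], [3, 4]], dtype=tensorplay.float32, requires_grad=True)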
threshold_backward() [source]
threshold_backward(grad_output, output, threshold)
to_dlpack()
to_dlpack(obj: object, stream: int | None = None) -> types.CapsuleType
transpose() [source]
transpose(input, dim0, dim1)
unbind() [source]
unbind(input, dim=0)
unsqueeze() [source]
unsqueeze(input, dim)
var() [source]
var(input, correction=1)
zeros() [source]
zeros(*size, dtype=tensorplay.float32, device=None, requires_grad=False)
zeros_like() [source]
zeros_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)