tensorplay

The tensorplay package provides a simple deep learning framework designed for educational purposes and small-scale experiments. It defines a multi-dimensional array data structure named Tensor and wraps the mathematical operations that go with it.

It has a CUDA counterpart implementation, letting you run tensor computations on NVIDIA GPUs with compute capability >= 3.0.
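A minimal sketch of the basic workflow, using the tensor(), sum(), backward(), and cuda() entries documented below (outputs are illustrative):

>>> import tensorplay
>>> x = tensorplay.tensor([1., 2., 3.], requires_grad=True)
>>> y = tensorplay.sum(x * x)      # scalar result
>>> y.backward()
>>> x.grad                         # gradient of sum(x**2) is 2*x
tensor([2., 4., 6.])
>>> x_gpu = x.cuda()               # requires an NVIDIA GPU with compute capability >= 3.0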
Classes
class DType
DType(*values)
Bases: Enum

class Device
Device(*args, **kwargs)

class DeviceType
DeviceType(*values)
Bases: Enum

class Scalar
Scalar(*args, **kwargs)

class Size
Size(*args, **kwargs)

class Tensor
Tensor(*args, **kwargs)

Methods
cpu(self) [source]
Returns a copy of this object in CPU memory. If this object is already in CPU memory, no copy is performed and the original object is returned.
cuda(self, device=None, non_blocking=False) [source]
Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, no copy is performed and the original object is returned.
double(self) [source]
flatten(self, start_dim=0, end_dim=-1) [source]
Flattens a contiguous range of dimensions.
float(self) [source]
int(self) [source]
is_float(self) -> bool [source]
Checks whether the tensor has a floating-point dtype.
long(self) [source]
ndimension(self) -> int [source]
Alias for dim().
t(self) [source]
Returns the transpose of the tensor. Alias for transpose(0, 1) to ensure correct autograd behavior (TransposeBackward).
unflatten(self, dim, sizes) [source]
Expands a dimension of the input tensor over multiple dimensions.
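A short sketch of how flatten and unflatten relate, using the documented ndimension() to observe ranks; size arguments are passed as tuples, following the rand(size, ...) signature below:

>>> x = tensorplay.rand((2, 3, 4))
>>> x.ndimension()
3
>>> y = x.flatten(start_dim=1)     # merges dims 1..-1 into one: (2, 12)
>>> y.ndimension()
2
>>> z = y.unflatten(1, (3, 4))     # splits dim 1 back into (3, 4)
>>> z.ndimension()
3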
class device
device(*args, **kwargs)

class dtype
dtype(*values)
Bases: Enum
class enable_grad [source]
enable_grad(orig_func=None)
Bases: _NoParamDecoratorContextManager

Context manager that enables gradient calculation.

Enables gradient calculation if it has been disabled via no_grad or set_grad_enabled.

This context manager is thread-local; it will not affect computation in other threads.

Also functions as a decorator.

INFO
enable_grad is one of several mechanisms that can enable or disable gradients locally; see locally-disable-grad-doc for information on how they compare.

INFO
This API does not apply to forward-mode AD <forward-mode-ad>.
Example

>>> # xdoctest: +SKIP
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> with tensorplay.no_grad():
...     with tensorplay.enable_grad():
...         y = x * 2
>>> y.requires_grad
True
>>> y.backward()
>>> x.grad
tensor([2.])
>>> @tensorplay.enable_grad()
... def doubler(x):
...     return x * 2
>>> with tensorplay.no_grad():
...     z = doubler(x)
>>> z.requires_grad
True
>>> @tensorplay.enable_grad()
... def tripler(x):
...     return x * 3
>>> with tensorplay.no_grad():
...     z = tripler(x)
>>> z.requires_grad
True

class no_grad [source]
no_grad() -> None
Bases: _NoParamDecoratorContextManager

Context manager that disables gradient calculation.

Disabling gradient calculation is useful for inference, when you are sure you will not call Tensor.backward(). It reduces memory consumption for computations that would otherwise have requires_grad=True.

In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. There is one exception: all factory functions, i.e. functions that create a new Tensor and take a requires_grad keyword argument, are not affected by this mode.

This context manager is thread-local; it will not affect computation in other threads.

Also functions as a decorator.
INFO
No-grad is one of several mechanisms that can enable or disable gradients locally; see locally-disable-grad-doc for information on how they compare.

INFO
This API does not apply to forward-mode AD <forward-mode-ad>. If you want to disable forward AD computation, unpack your dual tensors.
Example

>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> with tensorplay.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> @tensorplay.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
>>> @tensorplay.no_grad()
... def tripler(x):
...     return x * 3
>>> z = tripler(x)
>>> z.requires_grad
False
>>> # factory function exception
>>> with tensorplay.no_grad():
...     a = tensorplay.nn.Parameter(tensorplay.rand(10))
>>> a.requires_grad
True

Methods
__init__(self) -> None [source]
Initialize self. See help(type(self)) for accurate signature.
clone(self) [source]
class set_grad_enabled [source]
set_grad_enabled(mode: bool) -> None
Bases: _DecoratorContextManager
Context-manager that sets gradient calculation on or off.
set_grad_enabled will enable or disable grads based on its argument mode. It can be used as a context-manager or as a function.
This context manager is thread local; it will not affect computation in other threads.
Args
- mode (bool): Flag whether to enable grad (True) or disable it (False). This can be used to conditionally enable gradients.
INFO
set_grad_enabled is one of several mechanisms that can enable or disable gradients locally; see locally-disable-grad-doc for more information on how they compare.
INFO
This API does not apply to forward-mode AD <forward-mode-ad>.
Example

>>> # xdoctest: +SKIP
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> is_train = False
>>> with tensorplay.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> _ = tensorplay.set_grad_enabled(True)
>>> y = x * 2
>>> y.requires_grad
True
>>> _ = tensorplay.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False

Methods
__init__(self, mode: bool) -> None [source]
Initialize self. See help(type(self)) for accurate signature.
clone(self) -> 'set_grad_enabled' [source]
Create a copy of this class
Functions
abs() [source]
abs(input)

acos() [source]
acos(input)

acosh() [source]
acosh(input)

adaptive_avg_pool2d() [source]
adaptive_avg_pool2d(input, output_size)

adaptive_avg_pool2d_backward() [source]
adaptive_avg_pool2d_backward(grad_output, input)

adaptive_max_pool2d() [source]
adaptive_max_pool2d(input, output_size)

adaptive_max_pool2d_backward() [source]
adaptive_max_pool2d_backward(grad_output, input)

all() [source]
all(input)

allclose() [source]
allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False)
Checks whether all elements of input and other are close to each other.
Args
- input: Tensor
- other: Tensor
- rtol: float
- atol: float
- equal_nan: bool (not supported yet)
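A sketch of the expected behavior, assuming the usual elementwise criterion |input - other| <= atol + rtol * |other| that these parameter names conventionally denote:

>>> a = tensorplay.tensor([1.0, 2.0])
>>> b = tensorplay.tensor([1.0, 2.1])
>>> tensorplay.allclose(a, b)              # 0.1 difference exceeds the default tolerances
False
>>> tensorplay.allclose(a, b, rtol=0.1)    # 0.1 <= 1e-08 + 0.1 * 2.1
True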
angle() [source]
angle(input)

any() [source]
any(input)

arange() [source]
arange(start, end, step=1, dtype=tensorplay.undefined, device=Ellipsis, requires_grad=False)

argmax() [source]
argmax(input, dim=None, keepdim=False)

argmin() [source]
argmin(input, dim=None, keepdim=False)

as_tensor()
as_tensor(data: object, dtype: tensorplay.DType | None = None, device: tensorplay.Device | None = None) -> object
Converts data into a tensor, sharing data and preserving autograd history if possible.
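A usage sketch; that an existing tensor with matching dtype and device may be returned as-is is an assumption based on the "sharing data" wording above:

>>> t = tensorplay.tensor([1., 2., 3.])
>>> a = tensorplay.as_tensor(t)                                    # may share t's storage
>>> b = tensorplay.as_tensor([1, 2, 3], dtype=tensorplay.float32)  # converts a Python list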
asin() [source]
asin(input)

asinh() [source]
asinh(input)

atan() [source]
atan(input)

atan2() [source]
atan2(input, other)

atanh() [source]
atanh(input)

avg_pool2d() [source]
avg_pool2d(input, kernel_size, stride={}, padding={0}, ceil_mode=False, count_include_pad=True, divisor_override=None)

avg_pool2d_backward() [source]
avg_pool2d_backward(grad_output, input, kernel_size, stride={}, padding={0}, ceil_mode=False, count_include_pad=True, divisor_override=None)

batch_norm() [source]
batch_norm(input, weight, bias, running_mean, running_var, training, momentum, eps)

batch_norm_backward() [source]
batch_norm_backward(grad_output, input, weight=None, running_mean=None, running_var=None, training=True, eps=1e-05)

bernoulli() [source]
bernoulli(input)

cat() [source]
cat(tensors, dim=0)

ceil() [source]
ceil(input)

chunk() [source]
chunk(input, chunks, dim=0)

clamp() [source]
clamp(input, min=None, max=None)

clamp_backward() [source]
clamp_backward(grad_output, input, min=None, max=None)

constant_pad_nd() [source]
constant_pad_nd(input, pad, value)

constant_pad_nd_backward() [source]
constant_pad_nd_backward(grad_output, pad)

conv1d() [source]
conv1d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv1d_grad_bias() [source]
conv1d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv1d_grad_input() [source]
conv1d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv1d_grad_weight() [source]
conv1d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv2d() [source]
conv2d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv2d_grad_bias() [source]
conv2d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv2d_grad_input() [source]
conv2d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv2d_grad_weight() [source]
conv2d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv3d() [source]
conv3d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv3d_grad_bias() [source]
conv3d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv3d_grad_input() [source]
conv3d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv3d_grad_weight() [source]
conv3d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv_transpose2d() [source]
conv_transpose2d(input, weight, bias={}, stride={1}, padding={0}, output_padding={0}, groups=1, dilation={1})

conv_transpose2d_grad_bias() [source]
conv_transpose2d_grad_bias(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose2d_grad_input() [source]
conv_transpose2d_grad_input(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose2d_grad_weight() [source]
conv_transpose2d_grad_weight(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d() [source]
conv_transpose3d(input, weight, bias={}, stride={1}, padding={0}, output_padding={0}, groups=1, dilation={1})

conv_transpose3d_grad_bias() [source]
conv_transpose3d_grad_bias(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d_grad_input() [source]
conv_transpose3d_grad_input(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d_grad_weight() [source]
conv_transpose3d_grad_weight(grad_output, input, weight, stride, padding, output_padding, groups, dilation)
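For the convolutions above, each output spatial size follows the usual formula floor((in + 2*padding - dilation*(kernel_size - 1) - 1) / stride) + 1. A minimal conv2d sketch, assuming NCHW layout and that plain ints are accepted where the signatures render array defaults as {...}:

>>> x = tensorplay.rand((1, 3, 32, 32))   # N=1, C_in=3, 32x32 image
>>> w = tensorplay.rand((8, 3, 3, 3))     # C_out=8, C_in=3, 3x3 kernel
>>> y = tensorplay.conv2d(x, w, stride=1, padding=1)
>>> # (32 + 2*1 - 1*(3-1) - 1) / 1 + 1 = 32, so y has shape (1, 8, 32, 32)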
cos() [source]
cos(input)

cosh() [source]
cosh(input)

embedding() [source]
embedding(weight, indices, padding_idx=-1, scale_grad_by_freq=False, sparse=False)

embedding_dense_backward() [source]
embedding_dense_backward(grad_output, indices, num_weights, padding_idx, scale_grad_by_freq)

empty() [source]
empty(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

empty_like() [source]
empty_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

eq() [source]
eq(input, other)

exp() [source]
exp(input)

eye() [source]
eye(n, m=-1, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

floor() [source]
floor(input)

from_dlpack()
from_dlpack(obj: object) -> tensorplay._C.TensorBase

full() [source]
full(size, fill_value, dtype=tensorplay.undefined, device=Ellipsis, requires_grad=False)

full_like() [source]
full_like(input, fill_value, dtype=tensorplay.undefined, device=None, requires_grad=False)

ge() [source]
ge(input, other)

gelu() [source]
gelu(input)

group_norm() [source]
group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)

group_norm_backward() [source]
group_norm_backward(grad_output, input, num_groups, weight=None, bias=None, eps=1e-05)

gt() [source]
gt(input, other)

instance_norm() [source]
instance_norm(input, weight=None, bias=None, running_mean=None, running_var=None, use_input_stats=True, momentum=0.1, eps=1e-05)

instance_norm_backward() [source]
instance_norm_backward(grad_output, input, weight=None, bias=None, running_mean=None, running_var=None, use_input_stats=True, eps=1e-05)

is_grad_enabled() [source]
is_grad_enabled()

layer_norm() [source]
layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)

layer_norm_backward() [source]
layer_norm_backward(grad_output, input, normalized_shape, weight=None, bias=None, eps=1e-05)
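A layer_norm sketch; that normalized_shape names the trailing dimensions to normalize over is an assumption based on the conventional meaning of the argument:

>>> x = tensorplay.randn((4, 16))
>>> y = tensorplay.layer_norm(x, (16,))                      # normalize over the last dim
>>> w = tensorplay.ones((16,))
>>> b = tensorplay.zeros((16,))
>>> y2 = tensorplay.layer_norm(x, (16,), weight=w, bias=b)   # then scale and shift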
le() [source]
le(input, other)

lerp() [source]
lerp(input, end, weight)

linspace() [source]
linspace(start, end, steps, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

load() [source]
load(f, map_location=None, pickle_module=pickle, **pickle_load_args)
Loads an object saved with tensorplay.save() from a file. Supports .tpm, .safetensors, and .pth (via torch).
Args
- f: a file-like object (has to implement read, readline, tell, and seek), or a string containing a file name.
- map_location: a function, torch.device, string or a dict specifying how to remap storage locations
- pickle_module: module used for unpickling metadata and objects
- pickle_load_args: (optional) keyword arguments passed to pickle_module.load
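A minimal round trip with save() (documented further below); saving a plain dict is illustrative, not a requirement stated here:

>>> t = tensorplay.rand((3, 3))
>>> tensorplay.save({"weight": t}, "checkpoint.tpm")
>>> state = tensorplay.load("checkpoint.tpm")
>>> tensorplay.allclose(state["weight"], t)
True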
log() [source]
log(input)

log_softmax() [source]
log_softmax(input, dim, dtype=tensorplay.undefined)

logspace() [source]
logspace(start, end, steps, base=10.0, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

lt() [source]
lt(input, other)

masked_select() [source]
masked_select(input, mask)

matmul() [source]
matmul(input, other)
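A sketch of matmul against mm (documented below); only the plain 2-D case is shown, since batched or 1-D operand handling is not specified on this page:

>>> a = tensorplay.rand((2, 3))
>>> b = tensorplay.rand((3, 4))
>>> c = tensorplay.matmul(a, b)   # (2, 3) x (3, 4) -> (2, 4)
>>> d = tensorplay.mm(a, b)       # same product for 2-D inputs
>>> tensorplay.allclose(c, d)
True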
max() [source]
max(input)

max_pool2d() [source]
max_pool2d(input, kernel_size, stride={}, padding={0}, dilation={1}, ceil_mode=False)

max_pool2d_backward() [source]
max_pool2d_backward(grad_output, input, kernel_size, stride={}, padding={0}, dilation={1}, ceil_mode=False)

mean() [source]
mean(input, dtype=tensorplay.undefined)

median() [source]
median(input)

min() [source]
min(input)

mm() [source]
mm(input, other)

mse_loss() [source]
mse_loss(input, target, reduction=1)

mse_loss_backward() [source]
mse_loss_backward(grad_output, input, target, reduction=1)

ne() [source]
ne(input, other)

neg() [source]
neg(input)

nll_loss() [source]
nll_loss(input, target, weight=None, reduction=1, ignore_index=-100)

nll_loss_backward() [source]
nll_loss_backward(grad_output, input, target, weight=None, reduction=1, ignore_index=-100, total_weight={})

norm() [source]
norm(input, p=2.0)

normal() [source]
normal(mean, std)

ones() [source]
ones(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

ones_like() [source]
ones_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

permute() [source]
permute(input, dims)

permute_backward() [source]
permute_backward(grad_output, input, dims)

poisson() [source]
poisson(input)

pow() [source]
pow(input, exponent)

prod() [source]
prod(input, dtype=tensorplay.undefined)

rand() [source]
rand(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

rand_like() [source]
rand_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

randint() [source]
randint(low, high, size, dtype=tensorplay.int64, device=Ellipsis, requires_grad=False)

randint_like() [source]
randint_like(input, low, high, dtype=tensorplay.undefined, device=None, requires_grad=False)

randn() [source]
randn(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

randn_like() [source]
randn_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

randperm() [source]
randperm(n, dtype=tensorplay.int64, device=Ellipsis, requires_grad=False)

relu() [source]
relu(input)

reshape() [source]
reshape(input, shape)

round() [source]
round(input)

rsqrt() [source]
rsqrt(input)

save() [source]
save(obj, f, pickle_module=pickle, pickle_protocol=2, _use_new_zipfile_serialization=True)
Saves an object to a disk file. Supports .tpm (TensorPlay Model) and .safetensors (Safetensors).
set_printoptions()
set_printoptions(edge_items: int = -1, threshold: int = -1, precision: int = -1, linewidth: int = -1) -> None
Set print options.
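A usage sketch; that precision controls the number of printed digits and that the -1 defaults leave a setting unchanged are assumptions drawn from similar printing APIs (the output shown is illustrative):

>>> tensorplay.set_printoptions(precision=3)
>>> print(tensorplay.tensor([1.0 / 3.0]))
tensor([0.333])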
sigmoid() [source]
sigmoid(input)

sign() [source]
sign(input)

silu() [source]
silu(input)

sin() [source]
sin(input)

sinh() [source]
sinh(input)

softmax() [source]
softmax(input, dim, dtype=tensorplay.undefined)

split() [source]
split(input, split_size, dim=0)

sqrt() [source]
sqrt(input)

square() [source]
square(input)

squeeze() [source]
squeeze(input)

squeeze_backward() [source]
squeeze_backward(grad_output, input)

stack() [source]
stack(tensors, dim=0)

std() [source]
std(input, correction=1)

sum() [source]
sum(input, dtype=tensorplay.undefined)

t() [source]
t(input)

tan() [source]
tan(input)

tanh() [source]
tanh(input)

tensor()
tensor(data: object, *, dtype: tensorplay.DType | None = None, device: tensorplay.Device | None = None, requires_grad: bool = False) -> tensorplay._C.TensorBase
tensor(data, *, dtype: Optional[DType] = None, device: Optional[Device] = None, requires_grad: bool = False) -> Tensor
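A sketch of the keyword arguments, using the documented is_float() method to check the resulting dtype:

>>> a = tensorplay.tensor([[1, 2], [3, 4]], dtype=tensorplay.float32)
>>> a.is_float()
True
>>> b = tensorplay.tensor([1., 2.], requires_grad=True)
>>> b.requires_grad
True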
threshold_backward() [source]
threshold_backward(grad_output, output, threshold)

to_dlpack()
to_dlpack(obj: object, stream: int | None = None) -> types.CapsuleType

transpose() [source]
transpose(input, dim0, dim1)

unbind() [source]
unbind(input, dim=0)

unsqueeze() [source]
unsqueeze(input, dim)

var() [source]
var(input, correction=1)

zeros() [source]
zeros(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

zeros_like() [source]
zeros_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)