tensorplay

The tensorplay package provides a simple deep-learning framework designed for educational purposes and small-scale experiments. It defines a multi-dimensional array data structure named Tensor and wraps the associated mathematical operations.

It also has a CUDA counterpart, enabling you to run tensor computations on NVIDIA GPUs with compute capability >= 3.0.

Classes

class DType

python
DType(*values)

Bases: Enum

class Device

python
Device(*args, **kwargs)

class DeviceType

python
DeviceType(*values)

Bases: Enum

class Scalar

python
Scalar(*args, **kwargs)

class Size

python
Size(*args, **kwargs)

class Tensor

python
Tensor(*args, **kwargs)
Methods

cpu(self) [source]

Returns a copy of this object in CPU memory. If this object is already in CPU memory, no copy is performed and the original object is returned.


cuda(self, device=None, non_blocking=False) [source]

Returns a copy of this object in CUDA memory. If this object is already in CUDA memory and on the correct device, no copy is performed and the original object is returned.
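The cpu()/cuda() contract above (return the same object when no move is needed, otherwise return a copy) can be sketched with a toy class. This is an illustrative model only, not tensorplay's implementation; `ToyTensor` and its `to` method are hypothetical names.

```python
import copy

class ToyTensor:
    """Toy model of the cpu()/cuda() no-copy contract."""

    def __init__(self, data, device="cpu"):
        self.data, self.device = data, device

    def to(self, device):
        if self.device == device:
            return self          # already on the target device: no copy
        moved = copy.deepcopy(self)  # otherwise: copy, then relocate
        moved.device = device
        return moved

x = ToyTensor([1, 2, 3])
print(x.to("cpu") is x)      # True  — no copy performed
print(x.to("cuda:0") is x)   # False — a copy was made
```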


double(self) [source]


flatten(self, start_dim=0, end_dim=-1) [source]

Flattens a contiguous range of dimensions.


float(self) [source]


int(self) [source]


is_float(self) -> bool [source]

Checks whether the tensor is of a floating-point type.


long(self) [source]


ndimension(self) -> int [source]

Alias for dim().


t(self) [source]

Returns the transpose of the tensor. Alias for transpose(0, 1) to ensure correct autograd behavior (TransposeBackward).


unflatten(self, dim, sizes) [source]

Expands a dimension of the input tensor over multiple dimensions.
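The shape arithmetic behind flatten and unflatten can be sketched in plain Python. The helper names below are hypothetical; this computes only the resulting shapes, under the assumption that the semantics match the usual deep-learning convention (flatten merges a contiguous dim range, unflatten splits one dim back apart).

```python
from math import prod

def flatten_shape(shape, start_dim=0, end_dim=-1):
    """Shape after flatten(start_dim, end_dim): merge the range into one dim."""
    end_dim = end_dim % len(shape)          # allow negative indices like -1
    merged = prod(shape[start_dim:end_dim + 1])
    return shape[:start_dim] + (merged,) + shape[end_dim + 1:]

def unflatten_shape(shape, dim, sizes):
    """Shape after unflatten(dim, sizes): split one dim into several."""
    assert prod(sizes) == shape[dim], "sizes must multiply to the dim size"
    return shape[:dim] + tuple(sizes) + shape[dim + 1:]

print(flatten_shape((2, 3, 4, 5), 1, 2))       # (2, 12, 5)
print(unflatten_shape((2, 12, 5), 1, (3, 4)))  # (2, 3, 4, 5)
```

Note that unflatten is the inverse of flatten over the same dimension range.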


class device

python
device(*args, **kwargs)

class dtype

python
dtype(*values)

Bases: Enum

class enable_grad [source]

python
enable_grad(orig_func=None)

Bases: _NoParamDecoratorContextManager

Context-manager that enables gradient calculation.

Enables gradient calculation, if it has been disabled via no_grad or set_grad_enabled.

This context manager is thread local; it will not affect computation in other threads.

Also functions as a decorator.

INFO

enable_grad is one of several mechanisms that can enable or disable gradients locally; see locally-disable-grad-doc for more information on how they compare.

INFO

This API does not apply to forward-mode AD.

Example

python
# xdoctest: +SKIP
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> with tensorplay.no_grad():
...     with tensorplay.enable_grad():
...         y = x * 2
>>> y.requires_grad
True
>>> y.backward()
>>> x.grad
tensor([2.])
>>> @tensorplay.enable_grad()
... def doubler(x):
...     return x * 2
>>> with tensorplay.no_grad():
...     z = doubler(x)
>>> z.requires_grad
True
>>> @tensorplay.enable_grad()
... def tripler(x):
...     return x * 3
>>> with tensorplay.no_grad():
...     z = tripler(x)
>>> z.requires_grad
True
Methods

clone(self) [source]


class no_grad [source]

python
no_grad() -> None

Bases: _NoParamDecoratorContextManager

Context-manager that disables gradient calculation.

Disabling gradient calculation is useful for inference, when you are sure that you will not call Tensor.backward(). It reduces memory consumption for computations that would otherwise have requires_grad=True.

In this mode, the result of every computation will have requires_grad=False, even when the inputs have requires_grad=True. There is an exception! All factory functions, or functions that create a new Tensor and take a requires_grad keyword argument, are NOT affected by this mode.

This context manager is thread local; it will not affect computation in other threads.

Also functions as a decorator.

INFO

No-grad is one of several mechanisms that can enable or disable gradients locally; see locally-disable-grad-doc for more information on how they compare.

INFO

This API does not apply to forward-mode AD. If you want to disable forward AD computation, you can unpack your dual tensors.

Example

python
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> with tensorplay.no_grad():
...     y = x * 2
>>> y.requires_grad
False
>>> @tensorplay.no_grad()
... def doubler(x):
...     return x * 2
>>> z = doubler(x)
>>> z.requires_grad
False
>>> @tensorplay.no_grad()
... def tripler(x):
...     return x * 3
>>> z = tripler(x)
>>> z.requires_grad
False
>>> # factory function exception
>>> with tensorplay.no_grad():
...     a = tensorplay.nn.Parameter(tensorplay.rand(10))
>>> a.requires_grad
True
Methods

__init__(self) -> None [source]

Initialize self. See help(type(self)) for accurate signature.


clone(self) [source]


class set_grad_enabled [source]

python
set_grad_enabled(mode: bool) -> None

Bases: _DecoratorContextManager

Context-manager that sets gradient calculation on or off.

set_grad_enabled will enable or disable grads based on its argument mode. It can be used as a context-manager or as a function.

This context manager is thread local; it will not affect computation in other threads.

Args

  • mode (bool): Flag whether to enable grad (True), or disable (False). This can be used to conditionally enable gradients.

INFO

set_grad_enabled is one of several mechanisms that can enable or disable gradients locally; see locally-disable-grad-doc for more information on how they compare.

INFO

This API does not apply to forward-mode AD.

Example

python
# xdoctest: +SKIP
>>> x = tensorplay.tensor([1.], requires_grad=True)
>>> is_train = False
>>> with tensorplay.set_grad_enabled(is_train):
...     y = x * 2
>>> y.requires_grad
False
>>> _ = tensorplay.set_grad_enabled(True)
>>> y = x * 2
>>> y.requires_grad
True
>>> _ = tensorplay.set_grad_enabled(False)
>>> y = x * 2
>>> y.requires_grad
False
Methods

__init__(self, mode: bool) -> None [source]

Initialize self. See help(type(self)) for accurate signature.


clone(self) -> 'set_grad_enabled' [source]

Create a copy of this class


Functions

abs() [source]

python
abs(input)

acos() [source]

python
acos(input)

acosh() [source]

python
acosh(input)

adaptive_avg_pool2d() [source]

python
adaptive_avg_pool2d(input, output_size)

adaptive_avg_pool2d_backward() [source]

python
adaptive_avg_pool2d_backward(grad_output, input)

adaptive_max_pool2d() [source]

python
adaptive_max_pool2d(input, output_size)

adaptive_max_pool2d_backward() [source]

python
adaptive_max_pool2d_backward(grad_output, input)

all() [source]

python
all(input)

allclose() [source]

python
allclose(input, other, rtol=1e-05, atol=1e-08, equal_nan=False)

Checks whether all elements of input and other are close to each other.

Args

  • input (Tensor)
  • other (Tensor)
  • rtol (float)
  • atol (float)
  • equal_nan (bool): not supported yet
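Elementwise closeness is conventionally defined as |input - other| <= atol + rtol * |other| (the NumPy/PyTorch rule; assumed here to match tensorplay's tolerances). A minimal pure-Python sketch of that criterion, with the hypothetical name `allclose_sketch`:

```python
def allclose_sketch(xs, ys, rtol=1e-05, atol=1e-08):
    """Elementwise |a - b| <= atol + rtol * |b|, reduced with all()."""
    return all(abs(a - b) <= atol + rtol * abs(b) for a, b in zip(xs, ys))

print(allclose_sketch([1.0, 2.0], [1.0, 2.0000001]))  # True
print(allclose_sketch([1.0], [1.1]))                  # False
```

Note the asymmetry: the relative tolerance is scaled by |other|, so swapping the arguments can change the result near the tolerance boundary.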

angle() [source]

python
angle(input)

any() [source]

python
any(input)

arange() [source]

python
arange(start, end, step=1, dtype=tensorplay.undefined, device=Ellipsis, requires_grad=False)

argmax() [source]

python
argmax(input, dim=None, keepdim=False)

argmin() [source]

python
argmin(input, dim=None, keepdim=False)

as_tensor()

python
as_tensor(*args, **kwargs)

as_tensor(data: object, dtype: tensorplay.DType | None = None, device: tensorplay.Device | None = None) -> object

Converts data into a tensor, sharing data and preserving autograd history if possible.
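The "share data if possible" behavior means writes through the converted object can be visible in the source, while a dtype change forces a copy. The stdlib `array`/`memoryview` pair shows the same share-vs-copy distinction; this is an analogy, not tensorplay's implementation.

```python
import array

src = array.array("d", [1.0, 2.0, 3.0])

view = memoryview(src)   # no dtype change: memory is shared, no copy
view[0] = 99.0
print(src[0])            # 99.0 — the source sees the write

copied = array.array("q", (int(v) for v in src))  # dtype change: a copy
copied[0] = 0
print(src[0])            # still 99.0 — the source is unaffected
```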

asin() [source]

python
asin(input)

asinh() [source]

python
asinh(input)

atan() [source]

python
atan(input)

atan2() [source]

python
atan2(input, other)

atanh() [source]

python
atanh(input)

avg_pool2d() [source]

python
avg_pool2d(input, kernel_size, stride={}, padding={0}, ceil_mode=False, count_include_pad=True, divisor_override=None)

avg_pool2d_backward() [source]

python
avg_pool2d_backward(grad_output, input, kernel_size, stride={}, padding={0}, ceil_mode=False, count_include_pad=True, divisor_override=None)

batch_norm() [source]

python
batch_norm(input, weight, bias, running_mean, running_var, training, momentum, eps)

batch_norm_backward() [source]

python
batch_norm_backward(grad_output, input, weight=None, running_mean=None, running_var=None, training=True, eps=1e-05)

bernoulli() [source]

python
bernoulli(input)

cat() [source]

python
cat(tensors, dim=0)

ceil() [source]

python
ceil(input)

chunk() [source]

python
chunk(input, chunks, dim=0)

clamp() [source]

python
clamp(input, min=None, max=None)

clamp_backward() [source]

python
clamp_backward(grad_output, input, min=None, max=None)

constant_pad_nd() [source]

python
constant_pad_nd(input, pad, value)

constant_pad_nd_backward() [source]

python
constant_pad_nd_backward(grad_output, pad)

conv1d() [source]

python
conv1d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv1d_grad_bias() [source]

python
conv1d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv1d_grad_input() [source]

python
conv1d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv1d_grad_weight() [source]

python
conv1d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv2d() [source]

python
conv2d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv2d_grad_bias() [source]

python
conv2d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv2d_grad_input() [source]

python
conv2d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv2d_grad_weight() [source]

python
conv2d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv3d() [source]

python
conv3d(input, weight, bias={}, stride={1}, padding={0}, dilation={1}, groups=1)

conv3d_grad_bias() [source]

python
conv3d_grad_bias(grad_output, input, weight, stride, padding, dilation, groups)

conv3d_grad_input() [source]

python
conv3d_grad_input(grad_output, input, weight, stride, padding, dilation, groups)

conv3d_grad_weight() [source]

python
conv3d_grad_weight(grad_output, input, weight, stride, padding, dilation, groups)

conv_transpose2d() [source]

python
conv_transpose2d(input, weight, bias={}, stride={1}, padding={0}, output_padding={0}, groups=1, dilation={1})

conv_transpose2d_grad_bias() [source]

python
conv_transpose2d_grad_bias(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose2d_grad_input() [source]

python
conv_transpose2d_grad_input(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose2d_grad_weight() [source]

python
conv_transpose2d_grad_weight(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d() [source]

python
conv_transpose3d(input, weight, bias={}, stride={1}, padding={0}, output_padding={0}, groups=1, dilation={1})

conv_transpose3d_grad_bias() [source]

python
conv_transpose3d_grad_bias(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d_grad_input() [source]

python
conv_transpose3d_grad_input(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

conv_transpose3d_grad_weight() [source]

python
conv_transpose3d_grad_weight(grad_output, input, weight, stride, padding, output_padding, groups, dilation)

cos() [source]

python
cos(input)

cosh() [source]

python
cosh(input)

embedding() [source]

python
embedding(weight, indices, padding_idx=-1, scale_grad_by_freq=False, sparse=False)

embedding_dense_backward() [source]

python
embedding_dense_backward(grad_output, indices, num_weights, padding_idx, scale_grad_by_freq)

empty() [source]

python
empty(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

empty_like() [source]

python
empty_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

eq() [source]

python
eq(input, other)

exp() [source]

python
exp(input)

eye() [source]

python
eye(n, m=-1, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

floor() [source]

python
floor(input)

from_dlpack()

python
from_dlpack(*args, **kwargs)

from_dlpack(obj: object) -> tensorplay._C.TensorBase

full() [source]

python
full(size, fill_value, dtype=tensorplay.undefined, device=Ellipsis, requires_grad=False)

full_like() [source]

python
full_like(input, fill_value, dtype=tensorplay.undefined, device=None, requires_grad=False)

ge() [source]

python
ge(input, other)

gelu() [source]

python
gelu(input)

group_norm() [source]

python
group_norm(input, num_groups, weight=None, bias=None, eps=1e-05)

group_norm_backward() [source]

python
group_norm_backward(grad_output, input, num_groups, weight=None, bias=None, eps=1e-05)

gt() [source]

python
gt(input, other)

instance_norm() [source]

python
instance_norm(input, weight=None, bias=None, running_mean=None, running_var=None, use_input_stats=True, momentum=0.1, eps=1e-05)

instance_norm_backward() [source]

python
instance_norm_backward(grad_output, input, weight=None, bias=None, running_mean=None, running_var=None, use_input_stats=True, eps=1e-05)

is_grad_enabled() [source]

python
is_grad_enabled()

layer_norm() [source]

python
layer_norm(input, normalized_shape, weight=None, bias=None, eps=1e-05)

layer_norm_backward() [source]

python
layer_norm_backward(grad_output, input, normalized_shape, weight=None, bias=None, eps=1e-05)

le() [source]

python
le(input, other)

lerp() [source]

python
lerp(input, end, weight)

linspace() [source]

python
linspace(start, end, steps, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

load() [source]

python
load(f, map_location=None, pickle_module=pickle, **pickle_load_args)

Loads an object saved with tensorplay.save() from a file. Supports .tpm, .safetensors, and .pth (via torch).

Args

  • f: a file-like object (has to implement read, readline, tell, and seek), or a string containing a file name.
  • map_location: a function, torch.device, string or a dict specifying how to remap storage locations
  • pickle_module: module used for unpickling metadata and objects
  • pickle_load_args: (optional) keyword arguments passed to pickle_module.load
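The f/pickle_module parameters follow the stdlib pickle conventions; a minimal sketch of the save/load roundtrip using the stdlib pickle and an in-memory file (the .tpm container format and map_location remapping are tensorplay-specific and not modeled here):

```python
import io
import pickle

state = {"weight": [1.0, 2.0], "step": 3}

buf = io.BytesIO()
pickle.dump(state, buf, protocol=2)   # analogous to save(obj, f, pickle_protocol=2)
buf.seek(0)
restored = pickle.load(buf)           # analogous to load(f, pickle_module=pickle)
print(restored == state)              # True
```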

log() [source]

python
log(input)

log_softmax() [source]

python
log_softmax(input, dim, dtype=tensorplay.undefined)

logspace() [source]

python
logspace(start, end, steps, base=10.0, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

lt() [source]

python
lt(input, other)

masked_select() [source]

python
masked_select(input, mask)

matmul() [source]

python
matmul(input, other)

max() [source]

python
max(input)

max_pool2d() [source]

python
max_pool2d(input, kernel_size, stride={}, padding={0}, dilation={1}, ceil_mode=False)

max_pool2d_backward() [source]

python
max_pool2d_backward(grad_output, input, kernel_size, stride={}, padding={0}, dilation={1}, ceil_mode=False)

mean() [source]

python
mean(input, dtype=tensorplay.undefined)

median() [source]

python
median(input)

min() [source]

python
min(input)

mm() [source]

python
mm(input, other)

mse_loss() [source]

python
mse_loss(input, target, reduction=1)

mse_loss_backward() [source]

python
mse_loss_backward(grad_output, input, target, reduction=1)

ne() [source]

python
ne(input, other)

neg() [source]

python
neg(input)

nll_loss() [source]

python
nll_loss(input, target, weight=None, reduction=1, ignore_index=-100)

nll_loss_backward() [source]

python
nll_loss_backward(grad_output, input, target, weight=None, reduction=1, ignore_index=-100, total_weight={})

norm() [source]

python
norm(input, p=2.0)

normal() [source]

python
normal(mean, std)

ones() [source]

python
ones(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

ones_like() [source]

python
ones_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

permute() [source]

python
permute(input, dims)

permute_backward() [source]

python
permute_backward(grad_output, input, dims)

poisson() [source]

python
poisson(input)

pow() [source]

python
pow(input, exponent)

prod() [source]

python
prod(input, dtype=tensorplay.undefined)

rand() [source]

python
rand(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

rand_like() [source]

python
rand_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

randint() [source]

python
randint(low, high, size, dtype=tensorplay.int64, device=Ellipsis, requires_grad=False)

randint_like() [source]

python
randint_like(input, low, high, dtype=tensorplay.undefined, device=None, requires_grad=False)

randn() [source]

python
randn(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

randn_like() [source]

python
randn_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

randperm() [source]

python
randperm(n, dtype=tensorplay.int64, device=Ellipsis, requires_grad=False)

relu() [source]

python
relu(input)

reshape() [source]

python
reshape(input, shape)

round() [source]

python
round(input)

rsqrt() [source]

python
rsqrt(input)

save() [source]

python
save(obj, f, pickle_module=pickle, pickle_protocol=2, _use_new_zipfile_serialization=True)

Saves an object to a disk file. Supports .tpm (TensorPlay Model) and .safetensors (Safetensors).

set_printoptions()

python
set_printoptions(*args, **kwargs)

set_printoptions(edge_items: int = -1, threshold: int = -1, precision: int = -1, linewidth: int = -1) -> None

Set print options

sigmoid() [source]

python
sigmoid(input)

sign() [source]

python
sign(input)

silu() [source]

python
silu(input)

sin() [source]

python
sin(input)

sinh() [source]

python
sinh(input)

softmax() [source]

python
softmax(input, dim, dtype=tensorplay.undefined)

split() [source]

python
split(input, split_size, dim=0)

sqrt() [source]

python
sqrt(input)

square() [source]

python
square(input)

squeeze() [source]

python
squeeze(input)

squeeze_backward() [source]

python
squeeze_backward(grad_output, input)

stack() [source]

python
stack(tensors, dim=0)

std() [source]

python
std(input, correction=1)

sum() [source]

python
sum(input, dtype=tensorplay.undefined)

t() [source]

python
t(input)

tan() [source]

python
tan(input)

tanh() [source]

python
tanh(input)

tensor()

python
tensor(*args, **kwargs)

tensor(data: object, *, dtype: tensorplay.DType | None = None, device: tensorplay.Device | None = None, requires_grad: bool = False) -> tensorplay._C.TensorBase

tensor(data, *, dtype: Optional[DType] = None, device: Optional[Device] = None, requires_grad: bool = False) -> Tensor

threshold_backward() [source]

python
threshold_backward(grad_output, output, threshold)

to_dlpack()

python
to_dlpack(*args, **kwargs)

to_dlpack(obj: object, stream: int | None = None) -> types.CapsuleType

transpose() [source]

python
transpose(input, dim0, dim1)

unbind() [source]

python
unbind(input, dim=0)

unsqueeze() [source]

python
unsqueeze(input, dim)

var() [source]

python
var(input, correction=1)

zeros() [source]

python
zeros(size, dtype=tensorplay.float32, device=Ellipsis, requires_grad=False)

zeros_like() [source]

python
zeros_like(input, dtype=tensorplay.undefined, device=None, requires_grad=False)

Released under the Apache 2.0 license.
