
tensorplay.cuda

Classes

class Event [source]

```python
Event(enable_timing=False, blocking=False, interprocess=False)
```

Wrapper around a CUDA event.

Methods

__init__(self, enable_timing=False, blocking=False, interprocess=False) [source]

Initializes the event. When enable_timing is true, the event records timing data and can be used with elapsed_time(); blocking makes synchronize() block the calling thread; interprocess allows the event to be shared between processes.


elapsed_time(self, end_event) [source]

Returns the time elapsed, in milliseconds, between when this event and end_event were recorded. Both events must have been created with enable_timing=True.


query(self) [source]

Checks whether all work currently captured by the event has completed. Returns immediately without blocking.


record(self, stream=None) [source]

Records the event in the given stream, or in the current stream if stream is None.


synchronize(self) [source]

Blocks the calling thread until all work currently captured by the event has completed.


wait(self, stream=None) [source]

Makes all future work submitted to the given stream (the current stream if stream is None) wait for this event.
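
The methods above combine into a common GPU timing pattern. A minimal sketch, assuming a torch-like tensor API (tensorplay.randn, Tensor.cuda()) that this page does not document:

```python
import tensorplay

# Events created with enable_timing=True can measure GPU time.
start = tensorplay.cuda.Event(enable_timing=True)
end = tensorplay.cuda.Event(enable_timing=True)

start.record()                           # mark the start on the current stream
x = tensorplay.randn(1024, 1024).cuda()  # assumed torch-like tensor API
y = x @ x                                # some GPU work to time
end.record()                             # mark the end

end.synchronize()                        # block until the GPU reaches `end`
print(f"matmul took {start.elapsed_time(end):.3f} ms")
```

Because kernel launches are asynchronous, the synchronize() call is required before reading elapsed_time().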


class Stream [source]

```python
Stream(device=None, priority=0, **kwargs)
```

Wrapper around a CUDA stream.

Methods

__init__(self, device=None, priority=0, **kwargs) [source]

Initializes the stream on the given device (the current device if device is None). priority selects the stream's scheduling priority; lower values indicate higher priority.


record_event(self, event=None) [source]

Records an event on this stream and returns it. A new event is allocated if event is None.


synchronize(self) [source]

Blocks the calling thread until all kernels submitted to this stream have completed.


wait_event(self, event) [source]

Makes all future work submitted to this stream wait for the given event.


wait_stream(self, stream) [source]

Synchronizes with another stream: all future work submitted to this stream waits until everything already submitted to the given stream has completed.
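
A hedged sketch of how these synchronization methods fit together. heavy_kernel and light_kernel are placeholders for arbitrary GPU work, and the mechanism for enqueuing work on a specific stream is not documented on this page:

```python
import tensorplay

side = tensorplay.cuda.Stream()   # independent stream, default priority
main = tensorplay.cuda.Stream()

# Queue work on `side`, then mark a point in it with an event.
heavy_kernel(side)                # placeholder for real GPU work
done = side.record_event()        # allocates and records a new event

# Work queued on `main` after this call will not run until the GPU
# has passed `done` on `side`. The host thread is not blocked.
main.wait_event(done)
light_kernel(main)                # placeholder for dependent GPU work

# wait_stream() expresses the same dependency without an explicit event:
main.wait_stream(side)

side.synchronize()                # block the host until `side` drains
```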


Functions

cudart() [source]

```python
cudart()
```

Returns the ctypes wrapper around the CUDA runtime DLL.

current_device() [source]

```python
current_device() -> int
```

Returns the index of the currently selected device.

get_device_capability() [source]

```python
get_device_capability(device: Union[int, Any, NoneType] = None) -> tuple
```

Gets the CUDA compute capability of a device as a `(major, minor)` tuple.

get_device_name() [source]

```python
get_device_name(device: Union[int, Any, NoneType] = None) -> str
```

Gets the name of a device.

get_device_properties() [source]

```python
get_device_properties(device: Union[int, Any])
```

Gets the properties of a device.
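
The device-query functions above can be combined into a short report on the selected GPU. A sketch, assuming at least one CUDA device is present:

```python
import tensorplay

idx = tensorplay.cuda.current_device()            # index of the selected device
name = tensorplay.cuda.get_device_name(idx)
major, minor = tensorplay.cuda.get_device_capability(idx)
props = tensorplay.cuda.get_device_properties(idx)

print(f"device {idx}: {name} (compute capability {major}.{minor})")
```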

init() [source]

```python
init()
```

Initializes the CUDA state. CUDA is initialized lazily on first use, so calling this explicitly is normally not needed.

is_initialized() [source]

```python
is_initialized()
```

Checks whether CUDA has been initialized.

manual_seed() [source]

```python
manual_seed(seed: int)
```

Sets the seed for generating random numbers for the current GPU.

manual_seed_all() [source]

```python
manual_seed_all(seed: int)
```

Sets the seed for generating random numbers on all GPUs.

max_memory_allocated() [source]

```python
max_memory_allocated(device: Union[int, Any, NoneType] = None) -> int
```

Returns the maximum GPU memory usage by tensors in bytes for a given device.

max_memory_reserved() [source]

```python
max_memory_reserved(device: Union[int, Any, NoneType] = None) -> int
```

Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.

memory_allocated() [source]

```python
memory_allocated(device: Union[int, Any, NoneType] = None) -> int
```

Returns the current GPU memory usage by tensors in bytes for a given device.

memory_reserved() [source]

```python
memory_reserved(device: Union[int, Any, NoneType] = None) -> int
```

Returns the current GPU memory managed by the caching allocator in bytes for a given device.

reset_max_memory_allocated() [source]

```python
reset_max_memory_allocated(device: Union[int, Any, NoneType] = None)
```

Resets the starting point for tracking maximum GPU memory usage.

reset_max_memory_reserved() [source]

```python
reset_max_memory_reserved(device: Union[int, Any, NoneType] = None)
```

Resets the starting point for tracking maximum GPU memory managed by the caching allocator.
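
The allocator statistics above support a simple peak-memory measurement for a region of code. A sketch, assuming a torch-like tensor API for the allocation itself:

```python
import tensorplay

# Reset the peak counter so max_memory_allocated() reflects only the
# region below, not earlier allocations.
tensorplay.cuda.reset_max_memory_allocated()

x = tensorplay.randn(4096, 4096).cuda()        # assumed torch-like tensor API

live = tensorplay.cuda.memory_allocated()      # bytes held by live tensors
peak = tensorplay.cuda.max_memory_allocated()  # peak since the reset
cached = tensorplay.cuda.memory_reserved()     # bytes held by the allocator
print(f"live={live} peak={peak} reserved={cached}")
```

Note that memory_reserved() is usually larger than memory_allocated(), since the caching allocator keeps freed blocks around for reuse.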

set_device() [source]

```python
set_device(device: Union[int, Any])
```

Sets the current device.

synchronize() [source]

```python
synchronize(device: Union[int, Any, NoneType] = None)
```

Waits for all kernels in all streams on a CUDA device to complete.
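
Because kernel launches return before the GPU finishes, host-side timers must be paired with synchronize(). A sketch, with run_kernels standing in for arbitrary GPU work:

```python
import time
import tensorplay

t0 = time.perf_counter()
run_kernels()                  # placeholder: enqueues asynchronous GPU work
tensorplay.cuda.synchronize()  # block until every stream on the device is idle
print(f"elapsed: {time.perf_counter() - t0:.3f} s")
```

For finer-grained measurements of GPU time, prefer Event.elapsed_time(), which measures on the device itself.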

Released under the Apache 2.0 License.
