tensorplay.cuda
Classes
class Event [source]
Event(enable_timing=False, blocking=False, interprocess=False)
Wrapper around a CUDA event.
Methods
__init__(self, enable_timing=False, blocking=False, interprocess=False) [source]
Initialize self. See help(type(self)) for accurate signature.
elapsed_time(self, end_event) [source]
query(self) [source]
record(self, stream=None) [source]
synchronize(self) [source]
wait(self, stream=None) [source]
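A minimal sketch of timing GPU work with a pair of events, following the usual CUDA-event pattern; the tensorplay.randn call and the .cuda() transfer used as a workload are illustrative assumptions, not part of this reference:

    import tensorplay

    start = tensorplay.cuda.Event(enable_timing=True)
    end = tensorplay.cuda.Event(enable_timing=True)

    start.record()                            # mark the start on the current stream
    x = tensorplay.randn(1024, 1024).cuda()   # assumed workload helpers
    y = x @ x
    end.record()                              # mark the end on the current stream

    end.synchronize()                         # block the host until the end event has completed
    print(start.elapsed_time(end))            # elapsed time, in milliseconds per the CUDA event convention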
class Stream [source]
Stream(device=None, priority=0, **kwargs)
Wrapper around a CUDA stream.
Methods
__init__(self, device=None, priority=0, **kwargs) [source]
Initialize self. See help(type(self)) for accurate signature.
record_event(self, event=None) [source]
synchronize(self) [source]
wait_event(self, event) [source]
wait_stream(self, stream) [source]
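A sketch of ordering work across streams with the methods above; it assumes record_event creates and returns the recorded Event when called without an argument, which is an assumption rather than something stated in this reference:

    import tensorplay

    s1 = tensorplay.cuda.Stream()        # side stream on the current device
    s2 = tensorplay.cuda.Stream()

    # ... enqueue kernels on s1 ...
    ev = s1.record_event()               # assumed to create and return an Event marking this point on s1
    s2.wait_event(ev)                    # work queued on s2 afterwards waits for ev
    s2.wait_stream(s1)                   # alternatively: wait for everything queued on s1 so far
    s1.synchronize()                     # block the host until s1's queued work has finished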
Functions
cudart() [source]
cudart()
Returns the ctypes wrapper around the CUDA runtime DLL.
current_device() [source]
current_device() -> int
Returns the index of the currently selected device.
get_device_capability() [source]
get_device_capability(device: Union[int, Any, NoneType] = None) -> tuple
Gets the CUDA capability of a device.
get_device_name() [source]
get_device_name(device: Union[int, Any, NoneType] = None) -> str
Gets the name of a device.
get_device_properties() [source]
get_device_properties(device: Union[int, Any])
Gets the properties of a device.
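A short device-inspection sketch using the query functions above; the exact fields of the properties object are not listed in this reference, so the example only prints the object itself:

    import tensorplay

    idx = tensorplay.cuda.current_device()              # index of the currently selected device
    print(tensorplay.cuda.get_device_name(idx))         # human-readable device name
    print(tensorplay.cuda.get_device_capability(idx))   # compute capability tuple (typically (major, minor))
    print(tensorplay.cuda.get_device_properties(idx))   # full properties object for the device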
init() [source]
init()
Initializes CUDA state. CUDA state is initialized lazily, so calling this explicitly is normally not needed.
is_initialized() [source]
is_initialized()
Checks whether CUDA has been initialized.
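Since CUDA state is initialized lazily on first use, an explicit init() is only useful when other code (for example a C extension, an assumption here) needs CUDA to be initialized before any tensorplay CUDA call is made. A minimal sketch:

    import tensorplay

    if not tensorplay.cuda.is_initialized():
        tensorplay.cuda.init()                # force eager initialization of CUDA state
    assert tensorplay.cuda.is_initialized()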
manual_seed() [source]
manual_seed(seed: int)
Sets the seed for generating random numbers for the current GPU.
manual_seed_all() [source]
manual_seed_all(seed: int)
Sets the seed for generating random numbers on all GPUs.
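A sketch of seeding for reproducibility; manual_seed affects only the current GPU, while manual_seed_all covers every GPU, which matters in multi-GPU runs:

    import tensorplay

    tensorplay.cuda.manual_seed(1234)      # seed the RNG of the current GPU only
    tensorplay.cuda.manual_seed_all(1234)  # seed the RNGs of all GPUs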
max_memory_allocated() [source]
max_memory_allocated(device: Union[int, Any, NoneType] = None) -> int
Returns the maximum GPU memory usage by tensors in bytes for a given device.
max_memory_reserved() [source]
max_memory_reserved(device: Union[int, Any, NoneType] = None) -> int
Returns the maximum GPU memory managed by the caching allocator in bytes for a given device.
memory_allocated() [source]
memory_allocated(device: Union[int, Any, NoneType] = None) -> int
Returns the current GPU memory usage by tensors in bytes for a given device.
memory_reserved() [source]
memory_reserved(device: Union[int, Any, NoneType] = None) -> int
Returns the current GPU memory managed by the caching allocator in bytes for a given device.
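A sketch of querying allocator statistics; memory_allocated counts bytes actually held by tensors, while memory_reserved counts bytes held by the caching allocator, so the reserved figure is usually the larger of the two:

    import tensorplay

    dev = tensorplay.cuda.current_device()
    print(tensorplay.cuda.memory_allocated(dev))      # bytes currently used by tensors
    print(tensorplay.cuda.memory_reserved(dev))       # bytes currently held by the caching allocator
    print(tensorplay.cuda.max_memory_allocated(dev))  # peak tensor usage since tracking started
    print(tensorplay.cuda.max_memory_reserved(dev))   # peak allocator usage since tracking started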
reset_max_memory_allocated() [source]
reset_max_memory_allocated(device: Union[int, Any, NoneType] = None)
Resets the starting point for tracking maximum GPU memory usage.
reset_max_memory_reserved() [source]
reset_max_memory_reserved(device: Union[int, Any, NoneType] = None)
Resets the starting point for tracking maximum GPU memory managed by the caching allocator.
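Resetting the peak counters lets you measure the high-water mark of a specific region in isolation, as in this sketch (run_model stands in for arbitrary GPU work and is a hypothetical name):

    import tensorplay

    tensorplay.cuda.reset_max_memory_allocated()   # restart peak-tensor-usage tracking
    tensorplay.cuda.reset_max_memory_reserved()    # restart peak-allocator-usage tracking

    run_model()                                    # hypothetical GPU workload

    peak = tensorplay.cuda.max_memory_allocated()  # peak tensor usage of run_model() only
    print(f"peak allocation: {peak / 2**20:.1f} MiB")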
set_device() [source]
set_device(device: Union[int, Any])
Sets the current device.
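set_device switches the current device for subsequent CUDA calls; a minimal sketch, assuming a machine with at least two GPUs:

    import tensorplay

    tensorplay.cuda.set_device(1)                  # make GPU 1 the current device
    assert tensorplay.cuda.current_device() == 1   # later CUDA work targets GPU 1 by default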
synchronize() [source]
synchronize(device: Union[int, Any, NoneType] = None)
Waits for all kernels in all streams on a CUDA device to complete.
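Because kernel launches are asynchronous, host-side wall-clock timing is only meaningful after a synchronize(); a sketch in which the tensor-creation helpers are assumptions, not part of this reference:

    import time
    import tensorplay

    x = tensorplay.randn(4096, 4096).cuda()  # assumed tensor-creation helpers

    t0 = time.perf_counter()
    y = x @ x                            # launches asynchronously on the GPU
    tensorplay.cuda.synchronize()        # block until all queued kernels on the device finish
    print(time.perf_counter() - t0)      # measured time now includes the GPU work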
