From e7635c06ce15b1e5952b34d4e50018c1c8d545db Mon Sep 17 00:00:00 2001
From: apeltop
Date: Mon, 29 Aug 2022 23:32:44 +0000
Subject: [PATCH] Fix typos in docs (#80602)

I hope it helps.

Pull Request resolved: https://github.com/pytorch/pytorch/pull/80602
Approved by: https://github.com/kit1980
---
 docs/cpp/source/notes/tensor_cuda_stream.rst | 4 ++--
 docs/source/jit_language_reference_v2.rst    | 4 ++--
 docs/source/storage.rst                      | 2 +-
 3 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/docs/cpp/source/notes/tensor_cuda_stream.rst b/docs/cpp/source/notes/tensor_cuda_stream.rst
index 9ecf86f51fe..9de4bcc4e27 100644
--- a/docs/cpp/source/notes/tensor_cuda_stream.rst
+++ b/docs/cpp/source/notes/tensor_cuda_stream.rst
@@ -92,7 +92,7 @@ CUDA Stream Usage Examples

   // get the default CUDA stream on device 0
   at::cuda::CUDAStream defaultStream = at::cuda::getDefaultCUDAStream();
-  // set current CUDA stream back to default CUDA stream on devide 0
+  // set current CUDA stream back to default CUDA stream on device 0
   at::cuda::setCurrentCUDAStream(defaultStream);
   // sum() on tensor0 uses `defaultStream` as current CUDA stream
   tensor0.sum();
@@ -120,7 +120,7 @@ CUDA Stream Usage Examples
 .. attention::

    Above code is running on the same CUDA device. `setCurrentCUDAStream` will always set current CUDA stream on current device,
-   but note that `setCurrentCUDASteram` actually set current stream on the device of passed in CUDA stream.
+   but note that `setCurrentCUDAStream` actually set current stream on the device of passed in CUDA stream.

 2. Acquiring and setting CUDA streams on multiple devices.

diff --git a/docs/source/jit_language_reference_v2.rst b/docs/source/jit_language_reference_v2.rst
index 4995bb47fdb..7b99c5462af 100644
--- a/docs/source/jit_language_reference_v2.rst
+++ b/docs/source/jit_language_reference_v2.rst
@@ -906,7 +906,7 @@ Atoms are the most basic elements of expressions.

 Identifiers
 """""""""""

-The rules that dictate what is a legal identifer in TorchScript are the same as
+The rules that dictate what is a legal identifier in TorchScript are the same as
 their `Python counterparts `_.

 Literals
@@ -1830,7 +1830,7 @@ Specifically, following APIs are fully supported:

 - ``torch.distributed.rpc.rpc_async()``
   - ``rpc_async()`` makes a non-blocking RPC call to run a function on a remote worker.
     RPC messages are sent and received in parallel to execution of Python code.
-  - More deatils about its usage and examples can be found in :meth:`~torch.distributed.rpc.rpc_async`.
+  - More details about its usage and examples can be found in :meth:`~torch.distributed.rpc.rpc_async`.
 - ``torch.distributed.rpc.remote()``
   - ``remote.()`` executes a remote call on a worker and gets a Remote Reference ``RRef`` as the return value.
   - More details about its usage and examples can be found in :meth:`~torch.distributed.rpc.remote`.
diff --git a/docs/source/storage.rst b/docs/source/storage.rst
index 1b2b5d7185a..28cf4444fbc 100644
--- a/docs/source/storage.rst
+++ b/docs/source/storage.rst
@@ -15,7 +15,7 @@ same class methods that :class:`torch.TypedStorage` has.

 A :class:`torch.TypedStorage` is a contiguous, one-dimensional array of
 elements of a particular :class:`torch.dtype`. It can be given any
-:class:`torch.dtype`, and the internal data will be interpretted appropriately.
+:class:`torch.dtype`, and the internal data will be interpreted appropriately.
 :class:`torch.TypedStorage` contains a :class:`torch.UntypedStorage` which
 holds the data as an untyped array of bytes.