From 7d2f1cd2115ec333767aef8087c8ea3ba6e90ea5 Mon Sep 17 00:00:00 2001
From: Kazuaki Ishizaki
Date: Mon, 31 Oct 2022 19:31:56 +0000
Subject: [PATCH] Fix typos under docs directory (#88033)

This PR fixes typos in `.rst` and `.Doxyfile` files under docs directory

Pull Request resolved: https://github.com/pytorch/pytorch/pull/88033
Approved by: https://github.com/soulitzer
---
 docs/caffe2/.Doxyfile-c                      | 2 +-
 docs/caffe2/.Doxyfile-python                 | 2 +-
 docs/cpp/source/notes/tensor_cuda_stream.rst | 2 +-
 docs/source/cuda._sanitizer.rst              | 2 +-
 docs/source/data.rst                         | 2 +-
 docs/source/fx.rst                           | 2 +-
 docs/source/quantization-support.rst         | 2 +-
 docs/source/quantization.rst                 | 2 +-
 8 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/docs/caffe2/.Doxyfile-c b/docs/caffe2/.Doxyfile-c
index c4873d63841..b30ab661d24 100644
--- a/docs/caffe2/.Doxyfile-c
+++ b/docs/caffe2/.Doxyfile-c
@@ -1490,7 +1490,7 @@ EXT_LINKS_IN_WINDOW = NO
 
 FORMULA_FONTSIZE       = 10
 
-# Use the FORMULA_TRANPARENT tag to determine whether or not the images
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
 # generated for formulas are transparent PNGs. Transparent PNGs are not
 # supported properly for IE 6.0, but are supported on all modern browsers.
 #
diff --git a/docs/caffe2/.Doxyfile-python b/docs/caffe2/.Doxyfile-python
index 9d16671ffe3..514e5803639 100644
--- a/docs/caffe2/.Doxyfile-python
+++ b/docs/caffe2/.Doxyfile-python
@@ -1488,7 +1488,7 @@ EXT_LINKS_IN_WINDOW = NO
 
 FORMULA_FONTSIZE       = 10
 
-# Use the FORMULA_TRANPARENT tag to determine whether or not the images
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
 # generated for formulas are transparent PNGs. Transparent PNGs are not
 # supported properly for IE 6.0, but are supported on all modern browsers.
 #
diff --git a/docs/cpp/source/notes/tensor_cuda_stream.rst b/docs/cpp/source/notes/tensor_cuda_stream.rst
index bdb66361d9a..49403177136 100644
--- a/docs/cpp/source/notes/tensor_cuda_stream.rst
+++ b/docs/cpp/source/notes/tensor_cuda_stream.rst
@@ -144,7 +144,7 @@ CUDA Stream Usage Examples
   // sum() on tensor0 use `myStream0` as current CUDA stream on device 0
   tensor0.sum();
 
-  // change the current device index to 1 by using CUDA device guard within a braket scope
+  // change the current device index to 1 by using CUDA device guard within a bracket scope
   {
     at::cuda::CUDAGuard device_guard{1};
     // create a tensor on device 1
diff --git a/docs/source/cuda._sanitizer.rst b/docs/source/cuda._sanitizer.rst
index 097d26a324f..658b9756931 100644
--- a/docs/source/cuda._sanitizer.rst
+++ b/docs/source/cuda._sanitizer.rst
@@ -29,7 +29,7 @@ Here is an example of a simple synchronization error in PyTorch:
 
 The ``a`` tensor is initialized on the default stream and, without any synchronization
 methods, modified on a new stream. The two kernels will run concurrently on the same tensor,
-which might cause the second kernel to read unitialized data before the first one was able
+which might cause the second kernel to read uninitialized data before the first one was able
 to write it, or the first kernel might overwrite part of the result of the second.
 When this script is run on the commandline with: ::
diff --git a/docs/source/data.rst b/docs/source/data.rst
index db6957c8da7..de2d44920f5 100644
--- a/docs/source/data.rst
+++ b/docs/source/data.rst
@@ -65,7 +65,7 @@ in real time.
 
 See :class:`~torch.utils.data.IterableDataset` for more details.
 
-.. note:: When using an :class:`~torch.utils.data.IterableDataset` with
+.. note:: When using a :class:`~torch.utils.data.IterableDataset` with
           `multi-process data loading `_. The same
           dataset object is replicated on each worker process, and thus the
           replicas must be configured differently to avoid duplicated data. See
diff --git a/docs/source/fx.rst b/docs/source/fx.rst
index 988ae081125..664fee10c67 100644
--- a/docs/source/fx.rst
+++ b/docs/source/fx.rst
@@ -36,7 +36,7 @@ What is an FX transform? Essentially, it's a function that looks like this.
         # Step 3: Construct a Module to return
         return torch.fx.GraphModule(m, graph)
 
-Your transform will take in an :class:`torch.nn.Module`, acquire a :class:`Graph`
+Your transform will take in a :class:`torch.nn.Module`, acquire a :class:`Graph`
 from it, do some modifications, and return a new :class:`torch.nn.Module`.
 You should think of the :class:`torch.nn.Module` that your FX transform
 returns as identical to a regular :class:`torch.nn.Module` -- you can pass it to another
diff --git a/docs/source/quantization-support.rst b/docs/source/quantization-support.rst
index 681e25b1172..d57a4b822f5 100644
--- a/docs/source/quantization-support.rst
+++ b/docs/source/quantization-support.rst
@@ -529,7 +529,7 @@ Quantized dtypes and quantization schemes
 
 Note that operator implementations currently only support per channel
 quantization for weights of the **conv** and **linear** operators.
 Furthermore, the input data is
-mapped linearly to the the quantized data and vice versa
+mapped linearly to the quantized data and vice versa
 as follows:
 
 .. math::
diff --git a/docs/source/quantization.rst b/docs/source/quantization.rst
index 55fa6b0c604..4b87e8b1815 100644
--- a/docs/source/quantization.rst
+++ b/docs/source/quantization.rst
@@ -354,7 +354,7 @@ QAT API Example::
 
     # attach a global qconfig, which contains information about what kind
     # of observers to attach. Use 'fbgemm' for server inference and
     # 'qnnpack' for mobile inference. Other quantization configurations such
-    # as selecting symmetric or assymetric quantization and MinMax or L2Norm
+    # as selecting symmetric or asymmetric quantization and MinMax or L2Norm
     # calibration techniques can be specified here.
     model_fp32.qconfig = torch.quantization.get_default_qat_qconfig('fbgemm')