Files
pytorch/c10/cuda/driver_api.cpp

#if !defined(USE_ROCM) && defined(PYTORCH_C10_DRIVER_API_SUPPORTED)
#include <c10/cuda/driver_api.h>
#include <c10/util/CallOnce.h>
#include <c10/util/Exception.h>
#include <dlfcn.h>
namespace c10::cuda {
namespace {
DriverAPI create_driver_api() {
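  // RTLD_NOLOAD: only succeed if libcuda.so.1 is already mapped into this
  // process (the CUDA runtime loads it); never load it from here.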
  void* handle_0 = dlopen("libcuda.so.1", RTLD_LAZY | RTLD_NOLOAD);
  TORCH_CHECK(handle_0, "Can't open libcuda.so.1: ", dlerror());
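
  // NVML is optional: get_nvml_handle() returns nullptr when libnvidia-ml.so.1
  // is unavailable (e.g. on Jetson), and the NVML entries are then left unset.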
  void* handle_1 = DriverAPI::get_nvml_handle();
  DriverAPI r{};
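
  // Bind each driver symbol listed in C10_LIBCUDA_DRIVER_API to the
  // corresponding r.<name>_ member, asserting that every symbol resolves.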
#define LOOKUP_LIBCUDA_ENTRY(name)                       \
  r.name##_ = ((decltype(&name))dlsym(handle_0, #name)); \
  TORCH_INTERNAL_ASSERT(r.name##_, "Can't find ", #name, ": ", dlerror())
  C10_LIBCUDA_DRIVER_API(LOOKUP_LIBCUDA_ENTRY)
#undef LOOKUP_LIBCUDA_ENTRY
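
  // Resolve the NVML entry points only if libnvidia-ml.so.1 was found.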
  if (handle_1) {
#define LOOKUP_NVML_ENTRY(name)                          \
  r.name##_ = ((decltype(&name))dlsym(handle_1, #name)); \
  TORCH_INTERNAL_ASSERT(r.name##_, "Can't find ", #name, ": ", dlerror())
    C10_NVML_DRIVER_API(LOOKUP_NVML_ENTRY)
#undef LOOKUP_NVML_ENTRY
  }
  return r;
}
} // namespace
void* DriverAPI::get_nvml_handle() {
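  // Cached in a function-local static; stays nullptr if libnvidia-ml.so.1
  // cannot be loaded.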
  static void* nvml_handle = dlopen("libnvidia-ml.so.1", RTLD_LAZY);
  return nvml_handle;
}
C10_EXPORT DriverAPI* DriverAPI::get() {
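  // Lazily constructed, thread-safe C++11 static: the symbol table is built
  // exactly once per process.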
  static DriverAPI singleton = create_driver_api();
  return &singleton;
}
} // namespace c10::cuda
#endif
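
Below is a minimal usage sketch, not part of driver_api.cpp, showing how a caller might consume this API. It assumes only the two functions defined above (DriverAPI::get and DriverAPI::get_nvml_handle) and a build where PYTORCH_C10_DRIVER_API_SUPPORTED is defined; the file name is hypothetical.

// example_usage.cpp (hypothetical, for illustration only)
#include <c10/cuda/driver_api.h>
#include <cstdio>

int main() {
  // DriverAPI::get() builds the symbol table on first use and fails via
  // TORCH_CHECK if libcuda.so.1 is not already loaded in the process.
  c10::cuda::DriverAPI* api = c10::cuda::DriverAPI::get();
  (void)api; // driver entry points are now available as api-><name>_ members

  // NVML is optional: a null handle means NVML-backed features (such as OOM
  // reporting) should fall back to libcuda-only code paths.
  if (c10::cuda::DriverAPI::get_nvml_handle() == nullptr) {
    std::printf("NVML not available; using libcuda-only fallback\n");
  }
  return 0;
}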