Hari Krishna Sai Kodali 539ba711b0 add device generalization support for distributed tests (#165067)
## MOTIVATION
Generalize distributed test cases so they can run on non-CUDA accelerator devices.

## CHANGES
- Replaced hard-coded device/backend names with `torch.accelerator.current_accelerator()` and `dist.get_default_backend_for_device()`
- Used `DistributedTestBase` instead of `MultiProcessTestCase` to take advantage of its common utilities
- Removed `instantiate_device_tests` and used `torch.accelerator.current_accelerator()` in `test/distributed/test_c10d_object_collectives.py`
- Fixed the deterministic-context issue for non-CUDA devices in `test/distributed/optim/test_zero_redundancy_optimizer.py`
- Used `torch.accelerator.device_count()` for the multi-GPU check in `torch/testing/_internal/distributed/_tensor/common_dtensor.py`

Pull Request resolved: https://github.com/pytorch/pytorch/pull/165067
Approved by: https://github.com/albanD
2025-12-01 06:45:09 +00:00