Hello,

I'm attempting to run and reproduce the results of the code provided in this repository, specifically the PowerSGD implementation. To ensure a smooth and accurate reproduction, could you please provide a detailed example or guide that covers the following?
1. **Hardware Platform Details:**
   - Number and type of GPUs used.
   - Any specific hardware requirements or configurations.
2. **Software Environment:**
   - The versions of PyTorch and CUDA.
   - Any other dependencies or libraries required to run the code.
3. **Execution Instructions:**
   - Detailed commands to launch the training process, especially if it involves distributed training via `python -m torch.distributed.launch` or `torchrun` (see the example commands sketched after this list).
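For concreteness, launch commands of roughly this shape are what I have been trying; the script name and GPU count are placeholders of mine, not taken from this repository:

```bash
# newer launcher (assuming a single node with 4 GPUs; train.py is a placeholder)
torchrun --nproc_per_node=4 train.py

# older utility referenced above
python -m torch.distributed.launch --nproc_per_node=4 train.py
```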
Additionally, it appears that certain parts of the code may need adjustments to work correctly with the distributed launch utility. Specifically, several lines directly access a `rank` variable that may not be properly initialized in a distributed training context started by `python -m torch.distributed.launch`.
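For illustration, here is a minimal sketch of how I would expect `rank` to be initialized so that both launchers work; this is my assumption about a possible fix, not code from this repository:

```python
import os
import torch
import torch.distributed as dist

# Both torchrun and torch.distributed.launch populate the env://
# rendezvous variables (RANK, WORLD_SIZE, MASTER_ADDR, MASTER_PORT).
dist.init_process_group(backend="nccl", init_method="env://")

# Query the global rank from the process group instead of relying
# on a free-standing `rank` variable.
rank = dist.get_rank()

# torchrun exports LOCAL_RANK as an environment variable, while
# torch.distributed.launch passes --local_rank on the command line
# unless --use_env is given; the env-var fallback covers the former.
local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)
```

If the code instead expects ranks to be set up through `torch.multiprocessing.spawn`, a note on the intended launch path would clear this up.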
Could you please clarify these aspects or suggest any necessary modifications to successfully run the distributed training as intended?
Thank you very much for your assistance and for sharing your work. I'm looking forward to successfully reproducing the results and exploring the capabilities of PowerSGD.
Best regards,
Lichen