
[GPU] Targeting a Specific GPU Among Multiple GPUs

unnjena 2020. 5. 13. 09:57
  • Something you should definitely know when collaborating on a server with multiple GPUs
  • There are several ways to do this; the one below is written to suit a Jupyter notebook environment (an environment-variable alternative is sketched right after this list)
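For comparison, one of those other ways is to hide every GPU except the one you want by setting the CUDA_VISIBLE_DEVICES environment variable. This is a minimal sketch, not part of the original post; the index '2' is only an example, and the variable must be set before the process makes its first CUDA call or it has no effect.

import os

# Expose only physical GPU 2 to this process; must happen before any CUDA initialization
os.environ['CUDA_VISIBLE_DEVICES'] = '2'

import torch
# Inside this process the exposed GPU is now seen as cuda:0
print(torch.cuda.device_count())  # -> 1

In an already-running notebook this usually comes too late, which is why the rest of this post switches devices from inside PyTorch instead.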
import torch

# Check which device is currently set up
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
print('Available devices ', torch.cuda.device_count())
print('Current cuda device ', torch.cuda.current_device())
print(torch.cuda.get_device_name(device))

Available devices 3

Current cuda device 2

Tesla V100-SXM2-32GB
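get_device_name above only reports the current device, so with three GPUs available it can help to list them all. A minimal sketch using the same calls as above:

# Print the index and name of every GPU PyTorch can see
for idx in range(torch.cuda.device_count()):
    print(idx, torch.cuda.get_device_name(idx))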

print('------------ After change ------------')
# Change which GPU is allocated
GPU_NUM = 2  # index of the GPU you want to use
device = torch.device(f'cuda:{GPU_NUM}' if torch.cuda.is_available() else 'cpu')
torch.cuda.set_device(device)  # change allocation of the current GPU
print('Current cuda device ', torch.cuda.current_device())  # check

# Additional info about the selected GPU
if device.type == 'cuda':
    print(torch.cuda.get_device_name(GPU_NUM))
    print('Memory Usage:')
    print('Allocated:', round(torch.cuda.memory_allocated(GPU_NUM)/1024**3, 1), 'GB')
    print('Cached:   ', round(torch.cuda.memory_reserved(GPU_NUM)/1024**3, 1), 'GB')  # memory_reserved replaces the deprecated memory_cached

------------ After change ------------

Current cuda device 2

Tesla V100-SXM2-32GB

Memory Usage:

Allocated: 0.0 GB

Cached: 0.0 GB
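With the current device switched, the chosen GPU is what actually receives your tensors and models. A minimal sketch of using it, where x and model are hypothetical names not taken from the original code:

# New CUDA tensors now default to the GPU selected with set_device
x = torch.randn(4, 4).cuda()
print(x.device)  # cuda:2

# A (hypothetical) model moved explicitly to the same device object
model = torch.nn.Linear(4, 4).to(device)
print(next(model.parameters()).device)  # cuda:2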