• semi [he/him]@lemmy.ml

    For inference (running previously-trained models, which mainly needs lots of RAM), the desktop could be useful, but I would be surprised if training anything bigger than toy examples made sense on this hardware, since I expect compute performance to be the bottleneck.
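
    As a rough illustration of why inference is RAM-bound, here is a back-of-the-envelope weight-memory estimate (a sketch with illustrative model sizes, not measurements from this machine):

    ```python
    # Rough memory needed just to hold model weights for inference.
    # Real usage adds KV cache, activations, and framework overhead.
    def weight_memory_gb(num_params: float, bytes_per_param: float) -> float:
        return num_params * bytes_per_param / 1e9

    for num_params, label in [(7e9, "7B"), (70e9, "70B")]:
        for bpp, dtype in [(2, "fp16"), (0.5, "4-bit quantized")]:
            print(f"{label} {dtype}: ~{weight_memory_gb(num_params, bpp):.0f} GB")
    # 7B fp16 ~ 14 GB, 70B fp16 ~ 140 GB, 70B 4-bit ~ 35 GB, which is why
    # a box with lots of unified RAM is attractive for inference.
    ```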

    Does anyone here have recent, practical experience with ROCm, and how does it compare with the far-more-dominant CUDA? I would imagine compatibility is much better now that most models use PyTorch, which ROCm supports, but how does performance compare to a dedicated Nvidia GPU?
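
    In case it helps anyone checking their own setup: ROCm builds of PyTorch expose the HIP backend through the torch.cuda namespace, so the usual CUDA-style device check works unchanged on AMD. A minimal sketch (the loop is only a crude smoke test, not a real benchmark):

    ```python
    import time
    import torch

    # On ROCm wheels, torch.version.hip is set and torch.cuda.* maps to HIP.
    print("PyTorch:", torch.__version__)
    print("HIP:", torch.version.hip)  # None on CUDA builds
    print("GPU available:", torch.cuda.is_available())

    if torch.cuda.is_available():
        print("Device:", torch.cuda.get_device_name(0))
        x = torch.randn(4096, 4096, device="cuda", dtype=torch.float16)
        torch.cuda.synchronize()
        t0 = time.perf_counter()
        for _ in range(10):
            y = x @ x  # fp16 matmul, the op that dominates training/inference
        torch.cuda.synchronize()
        print(f"10x 4096^2 fp16 matmuls: {time.perf_counter() - t0:.3f} s")
    ```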

    • geneva_convenience@lemmy.ml

      ROCm is complete garbage. AMD holds an event every year announcing that “PyTorch works now!”, and it never does.

      ZLUDA is supposedly a good alternative to ROCm, but I have not tried it.