I’m looking to buy a new GPU. My main use case will be training and running neural nets (TensorFlow + PyTorch); gaming isn’t really a priority.

Thing is, I use Wayland (via Sway), so I’d really prefer to get an AMD GPU. Nvidia doesn’t seem very Linux-friendly at the moment, especially when it comes to Wayland, unfortunately.

On the other hand, Nvidia seems to be the clear frontrunner right now when it comes to NN acceleration. I’m worried that if I got an AMD GPU to accelerate my NN work, I’d just be wasting my money.

What do you all think?

Edit: I’ve used GPUs to accelerate NN models in the past, but they weren’t my own; they were provided by my uni’s research infra and/or Google Colab. So this would be the first time I’d be using my own GPU hardware for this purpose.

  • ShittyKopper [they/them]@lemmy.w.on-t.work · 1 year ago

    Get something new enough, and keep getting something new enough as AMD pushes older cards out of support. The drivers suck for anything older than an RX 580, and things like Blender require even newer GPUs despite the hardware being more than capable.

    Run Arch and use the ROCm’d PyTorch from the repos. Those packagers know what they’re doing.
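
    If you want to sanity-check that install, a minimal smoke test like the sketch below helps (this assumes the python-pytorch-rocm package from the repos; note that ROCm builds of PyTorch reuse the CUDA device API, so “cuda” here means the AMD card):

    ```python
    # Smoke test for a ROCm build of PyTorch. ROCm builds reuse the CUDA
    # device API, so device="cuda" below targets the AMD GPU.
    import torch

    print(torch.version.hip)          # HIP version string on ROCm builds, None on CUDA builds
    print(torch.cuda.is_available())  # True if PyTorch can see the GPU
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))   # which card PyTorch actually picked up
        x = torch.randn(1024, 1024, device="cuda")
        print((x @ x).sum().item())   # run a real matmul on the GPU
    ```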

    Other than that, expect everything premade to be built for CUDA (and therefore unusable). There are some tools like https://github.com/ROCm-Developer-Tools/HIPIFY, but they aren’t quite “there” yet.

    Source: Been running Stable Diffusion on an RX 580.

    • leakybits@lemmy.world (OP) · 1 year ago

      Thanks! Sounds doable but definitely frustrating… I’m surprised this is the state of things at the moment. When you buy a CPU, you don’t really have to think about whether your choice will limit you down the road, but with a GPU it’s a big consideration.

      • meteokr@community.adiquaints.moe · 1 year ago

        Yeah, GPUs never got standardized the way x86 did back in the old IBM PC days. GPUs still operate on a “specific hardware” mindset rather than something generic. If GPUs could be programmed as easily as CPUs, we could target something like Vulkan for ML.

        Even ARM faces similar (if different) problems with its lack of a standard boot method.

  • auth@lemmy.ml · 1 year ago

    Check out geohot’s latest video about AMD GPUs… not very favorable.

  • empireOfLove@lemmy.one · 1 year ago

    The unfortunate truth is that Nvidia has a complete stranglehold on the compute market. They recognized the potential of massively parallel compute early on and pushed CUDA hard at every organization doing compute, and it worked: CUDA is much easier to implement than OpenCL, and it was released two years earlier too, so everyone ended up standardizing on it. They are now reaping the benefits of that monopoly through their huge enterprise GPGPU market and can basically piss down the backs of consumers and competitors alike without repercussions. OpenCL and AMD’s implementation were a day late and harder to implement…

    Do not buy AMD if you need to do any kind of compute, whether it be rendering à la Blender/AE, accelerated engineering CAD workflows, or big-data handling. No tools are designed around anything but CUDA. It sucks, because Jensen is a greedy asshole, but you gotta pay your dues.

  • MigratingtoLemmy@lemmy.world · 1 year ago

    Hi, I do not know much about GPUs and ML. My apologies for not being able to answer your question, but I’d like to know what you’re trying to achieve running said models. Is ML a hobby of yours?

    • leakybits@lemmy.world (OP) · 1 year ago

      Cheers for the reply. I’m doing a master’s in machine intelligence, so I work with various kinds of ML models. And yeah, it’s a hobby too; I like playing around with LLMs and seeing what I can do with them.

      • mack123@kbin.social · 1 year ago

        Edit: Wrote this on mobile. The mobile UI doesn’t always make it clear which magazine a post came from, so I missed the Linux part. Things are not as dire for AMD on Linux as on Windows, so my assessment may be a bit pessimistic. With AMD’s focus on the data centre for machine learning, the Linux driver stack seems fairly well supported.

        I spent the last few days getting Stable Diffusion and PyTorch working on my Radeon 6800 XT on Windows. The machineml distribution of Stable Diffusion runs at about 1/4 of the speed of raw ROCm when I compare it to the SHARK tooling, which supports ROCm via Docker on Windows.
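
        If anyone wants to reproduce that kind of comparison, a rough timing harness along these lines works (a sketch only; the matrix size and iteration count are arbitrary, and it assumes a working PyTorch build for each backend):

        ```python
        # Rough throughput comparison between PyTorch devices.
        # Not a rigorous benchmark; only good for order-of-magnitude comparisons.
        import time
        import torch

        def bench_matmul(device: str, n: int = 2048, iters: int = 50) -> float:
            x = torch.randn(n, n, device=device)
            for _ in range(3):            # warm-up: first calls pay kernel/lazy-init costs
                _ = x @ x
            if device != "cpu":
                torch.cuda.synchronize()  # on ROCm builds this syncs the AMD GPU
            start = time.perf_counter()
            for _ in range(iters):
                _ = x @ x
            if device != "cpu":
                torch.cuda.synchronize()
            return (time.perf_counter() - start) / iters

        print(f"cpu: {bench_matmul('cpu'):.4f} s/iter")
        if torch.cuda.is_available():
            print(f"gpu: {bench_matmul('cuda'):.4f} s/iter")
        ```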

        Expect tooling to be clunky, and expect to compile everything yourself on Linux. Prebuilt stuff will all be for Nvidia.

        AMD is pushing hard into the AI space, but aiming at datacenter users. They are rumoured to be bringing ROCm to their Windows drivers, but when that will ship is anyone’s guess.

        So right now, if you need to hit the ground running for your academic work, I would recommend Nvidia, as much as it pains me as a long-time AMD user.

  • LinusWorks4Mo@kbin.social · 1 year ago

    Stable Diffusion runs great on a 7900 XTX via PyTorch and ROCm 5.5, but you may have to compile PyTorch 2.0.1 manually. With the rocm/pytorch:latest Docker image this is fairly easy; look for instructions for installing AUTOMATIC1111, as they can generally be applied to other stacks.
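
    If you’d rather skip AUTOMATIC1111 and just sanity-check the stack, something like this diffusers snippet works (a sketch; it assumes the Hugging Face diffusers package is installed, the checkpoint name is just an example, and “cuda” maps to the AMD GPU on ROCm builds):

    ```python
    # Minimal Stable Diffusion run via Hugging Face diffusers on a ROCm
    # build of PyTorch. The "cuda" device string targets the AMD GPU here.
    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # example checkpoint; any SD 1.x model works
        torch_dtype=torch.float16,         # fp16 keeps VRAM usage reasonable
    )
    pipe = pipe.to("cuda")

    image = pipe("a lighthouse at dusk, oil painting").images[0]
    image.save("out.png")
    ```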