• Scrubbles@poptalk.scrubbles.tech · 7 months ago

    I know it’s a small set, but for gaming AMD is honestly king. Unless you want the absolute “I’m willing to pay double the cost for 5% more performance” top of the line, AMD is just great.

    For AI and compute… they’re far behind. CUDA just wins. I hope a joint standard comes along soon, but until then Nvidia wins.

        • gramathy@lemmy.ml · 7 months ago

          The only thing it’s missing is dedicated video decode hardware (which is mostly a convenience) and an equivalent to ShadowPlay. Otherwise it’s a great alternative to a 4080/S.

            • PenguinTD@lemmy.ca · 7 months ago

              You can even skip the whole suite if you don’t need AMD’s per-game driver tweaks. OBS now comes with direct AMD AV1 support and can also record HDR content (which ReLive can’t do).

    • aard · 7 months ago

      > For AI and compute… they’re far behind. CUDA just wins. I hope a joint standard comes along soon, but until then Nvidia wins.

      I got a W6800 recently. I know an Nvidia card of the same generation would be faster for AI, but it’s fast enough to run Stable Diffusion variants locally at high resolutions without getting too annoyed.
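
      A minimal sketch of what that kind of local run looks like, assuming a ROCm build of PyTorch and the Hugging Face diffusers package; the model id here is just an example, and ROCm presents the GPU under the usual "cuda" device name:

      ```python
      # Minimal Stable Diffusion run on an AMD card via ROCm.
      # Assumes the ROCm build of PyTorch, which exposes the HIP
      # device under the familiar "cuda" name.
      import torch
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained(
          "stabilityai/stable-diffusion-2-1",  # example model id, swap in any variant
          torch_dtype=torch.float16,
      )
      pipe = pipe.to("cuda")  # maps to the HIP device on ROCm builds

      image = pipe("a lighthouse at dusk, oil painting").images[0]
      image.save("out.png")
      ```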

      • crispyflagstones@sh.itjust.works · 7 months ago (edited)

        The completely different software stack is a killer. It’s not that you can’t find versions of a model to run, but almost everything that hits the GPU for compute is going to be targeting CUDA, not ROCm. From a compatibility standpoint alone this killed AMD for me. I just do not want to spend my time fighting the stack to get these models running.
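
        To make the compatibility point concrete: ROCm’s PyTorch builds answer the standard torch.cuda calls (HIP is presented as "cuda"), so plain PyTorch models often run unchanged, while anything shipping hand-written CUDA kernels won’t. A rough sketch of the device gate most projects use:

        ```python
        # The usual "targeting CUDA" pattern: projects gate GPU support
        # on torch.cuda. A ROCm build answers these same calls
        # (torch.version.hip is set instead of torch.version.cuda), which
        # is why pure-PyTorch models often run on AMD while projects with
        # custom CUDA extensions do not.
        import torch

        if torch.cuda.is_available():
            device = torch.device("cuda")
            print("GPU:", torch.cuda.get_device_name(0))
            print("CUDA:", torch.version.cuda, "| HIP:", getattr(torch.version, "hip", None))
        else:
            device = torch.device("cpu")

        x = torch.randn(1024, 1024, device=device)  # lands on the GPU either way
        ```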

        • WormFood@lemmy.world · 7 months ago

          On the one hand, CUDA is vendor lock-in, and if we’d all just agreed on an open standard decades ago we wouldn’t be in this mess.

          On the other hand, ROCm is crap and AdaptiveCpp is very half-baked right now, at least in my limited experience.

          • crispyflagstones@sh.itjust.works · 7 months ago (edited)

            Yeah, it’s not that I like this state of affairs, but right now the vendor lock-in is so one-sided that it’s hard to say there’s a viable alternative to CUDA. I hope that changes one day.

        • aard · 7 months ago

          Admittedly I’m just toying around for entertainment purposes, but I didn’t really have any problems getting anything I wanted to try running with ROCm support. The bigger annoyance was projects targeting specific distributions or specific software versions (mostly ancient Python), but since I’m doing everything in containers anyway, that was manageable too.

    • MonkderDritte@feddit.de · 7 months ago

      > I know it’s a small set, but for gaming AMD is honestly king.

      I feel like there are more industrial use cases for GPUs than just AI.

    • DarkThoughts@fedia.io · 7 months ago

      Or the rise of dedicated NPUs, but that will likely take even more time (speaking of regular consumers here).