
I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon rocm for my card (6900xt) - Is AMD capable of anything these days?



> I've had an AMD card for the last 5 years, so I kinda just tuned out of local LLM releases because AMD seemed to abandon rocm for my card (6900xt) - Is AMD capable of anything these days?

Sure. Llama.cpp will happily run these kinds of LLMs using either HIP or Vulkan.

Vulkan is easier to get going with the Mesa open-source drivers under Linux; HIP might give you slightly better performance.
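For anyone who wants to try both: llama.cpp selects the backend at build time via CMake flags. A rough sketch (the GGML_VULKAN/GGML_HIP flags are from llama.cpp's build docs; the GPU target gfx1030 is what I believe the 6900 XT maps to, so double-check for your card):

```shell
# Vulkan backend - only needs the Vulkan SDK and Mesa drivers
cmake -B build-vulkan -DGGML_VULKAN=ON
cmake --build build-vulkan --config Release

# HIP backend - needs a working ROCm install; set the arch for your GPU
cmake -B build-hip -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1030
cmake --build build-hip --config Release
```

Then point either build's llama-cli at the same GGUF model and compare.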


The Vulkan backend for llama.cpp isn't that far behind ROCm in prompt processing (pp) and token generation (tg) speeds.
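You can measure both numbers yourself with the llama-bench tool that ships with llama.cpp; a sketch, assuming a GGUF model at ./model.gguf (the path is a placeholder):

```shell
# Reports pp (prompt processing) and tg (token generation) throughput
# in tokens/s for a 512-token prompt and 128 generated tokens.
./llama-bench -m ./model.gguf -p 512 -n 128
```

Run it once against the Vulkan build and once against the HIP build to see the actual gap on your hardware.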

I think AMD just added ROCm support for RDNA2 recently? I can run PyTorch and AI Studio with it just fine.

They also finally fixed building all the AI-related stuff on Windows, so you're no longer limited to Linux for this.




