• MalReynolds · 4 months ago

    Yes, llama.cpp and its derivatives run on ROCm, and so does Stable Diffusion. LLM fine-tuning, on the other hand, is still mostly CUDA; the ROCm implementations aren't there yet, but they're coming along.
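
    For anyone curious, here's a rough sketch of building llama.cpp with its ROCm/HIP backend. Assumes you already have ROCm installed and a supported AMD GPU; the CMake flag and target-architecture variable have been renamed across releases, so check the repo's build docs for your version (the `gfx1100` target below is just an example for RDNA3 cards).

    ```shell
    # Build llama.cpp against ROCm (sketch, flag names vary by release)
    git clone https://github.com/ggerganov/llama.cpp
    cd llama.cpp

    # Configure with the HIP backend; set the GPU arch to match your card
    cmake -S . -B build -DGGML_HIP=ON -DAMDGPU_TARGETS=gfx1100

    # Compile in parallel
    cmake --build build --config Release -j
    ```

    After that, the resulting binaries use the GPU via ROCm the same way the CUDA build does on NVIDIA hardware.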