As of August 2023, AMD's ROCm GPU compute software stack is available for Linux and Windows. It's best to check the latest docs for current information:
See also (2024 April):
Hardware
These are the latest officially supported cards:
- https://rocm.docs.amd.com/projects/install-on-linux/en/latest/reference/system-requirements.html
- https://rocm.docs.amd.com/projects/install-on-windows/en/latest/reference/system-requirements.html
If you have a supported family, you can usually set `HSA_OVERRIDE_GFX_VERSION` to the closest supported version (e.g., `HSA_OVERRIDE_GFX_VERSION=10.3.0`) and get things working.
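For example, a minimal sketch for an unsupported RDNA2 card (the gfx1031 target of an RX 6700 XT is a hypothetical example; check `rocminfo` for your actual target):

```shell
# gfx1030 is the supported RDNA2 target; "10.3.0" is its version string.
# Find your card's real target with: rocminfo | grep gfx
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Any ROCm app launched from this shell now sees the card as gfx1030, e.g.:
#   ./main -m llama-2-7b.Q4_0.gguf -ngl 99
echo "$HSA_OVERRIDE_GFX_VERSION"
```

Note the override applies per-process, so it only affects programs launched from that shell (or you can prefix a single command with it).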
RDNA3 (eg 7900 XT, XTX)
As of ROCm 5.7, Radeon RX 7900 XTX, XT, and PRO W7900 are officially supported and many old hacks are no longer necessary:
- https://rocm.docs.amd.com/projects/radeon/en/latest/docs/compatibility.html
- https://are-we-gfx1100-yet.github.io/
- https://news.ycombinator.com/item?id=36574179
- I posted my 7900XT/XTX results on Reddit, some conversation here: https://www.reddit.com/r/LocalLLaMA/comments/191srof/amd_radeon_7900_xtxtx_inference_performance/
AMD APU
Performance 65W 7940HS w/ 64GB of DDR5-5600 (83GB/s theoretical memory bandwidth): https://docs.google.com/spreadsheets/d/1kT4or6b0Fedd-W_jMwYpb63e1ZR3aePczz3zlbJW-Y4/edit#gid=1041125589
- On small (7B) models that fit within the UMA VRAM, ROCm performance is very similar to my M2 MBA’s Metal performance. Inference is barely faster than CLBlast/CPU though (~10% faster).
- On a big (70B) model that doesn't fit into allocated VRAM, ROCm inference is slower than CPU w/ `-ngl 0` (CLBlast crashes), and CPU perf is about as expected: about 1.3 t/s inferencing a Q4_K_M. Besides being slower, the ROCm version also caused amdgpu exceptions that killed Wayland 2/3 times (I'm running Linux 6.5.4, ROCm 5.6.1, mesa 23.1.8).
Note: my BIOS allows setting up to 8GB for VRAM (UMA_SPECIFIED GART); ROCm does not support GTT (which would be about 35GB/64GB if it did - still not enough for a 70B Q4_0, not that you'd want to run one at those speeds).
Vulkan drivers can use GTT memory dynamically, but w/ MLC LLM, the Vulkan version is 35% slower than CPU-only llama.cpp. Also, the max GART+GTT is still too small for 70B models.
- It may be possible to unlock more UMA/GART memory: https://winstonhyypia.medium.com/amd-apu-how-to-modify-the-dedicated-gpu-memory-e27b75905056
- There is a custom allocator that may allow PyTorch to use GTT memory (only useful for PyTorch inferencing, obviously): https://github.com/pomoke/torch-apu-helper
- A writeup of someone playing around w/ ROCm and SD on an older APU: https://www.gabriel.urdhr.fr/2022/08/28/trying-to-run-stable-diffusion-on-amd-ryzen-5-5600g/
Radeon VII
We have some previously known-good memory timings for an old Radeon VII card:
RDNA3 (navi3) on Linux
Arch Linux Setup
Arch Linux setup is fairly straightforward (it can be easier than the official install!) but is community supported via rocm-arch. If you're already running an Arch system, this should be fine, but if you're building a system dedicated to ML, you should prefer Ubuntu.
Install ROCm:
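A sketch of the install (package names assume the current Arch/rocm-arch packaging; double-check against the rocm-arch repo):

```shell
# HIP SDK and OpenCL runtime (rocm-arch packaging)
sudo pacman -S rocm-hip-sdk rocm-opencl-sdk

# Your user needs to be in these groups to access the GPU
sudo usermod -aG render,video "$USER"

# After re-logging in, verify the card is visible
rocminfo | grep gfx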
Install conda (mamba)
Create Environment
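A sketch of the environment, assuming mamba is installed and a ROCm 5.7 PyTorch wheel (adjust the index URL to whatever pytorch.org currently lists):

```shell
# Fresh environment for ROCm PyTorch work
mamba create -n rocm python=3.11 -y
mamba activate rocm

# PyTorch ROCm wheels (index URL is an example; see pytorch.org "Get Started")
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7

# Sanity check: ROCm builds of PyTorch report the GPU via the "cuda" API
python -c "import torch; print(torch.cuda.is_available())"
```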
Ubuntu LTS Setup
Ubuntu is the most well documented of the officially supported distros:
- https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/native-install/index.html
- I recommend using the latest LTS (22.04.3) with the HWE kernel
- The install documents are pretty much complete
- You can now use `apt install rocm` to install "everything" (except the drivers; you'll still need `amdgpu-dkms` first)
- Be sure also to look at the "post-install instructions"
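The flow looks roughly like this (a sketch following AMD's native-install docs; the amdgpu-install version/URL is an example, check the docs for the current release):

```shell
# Kernel headers are required for the DKMS driver build
sudo apt update && sudo apt install "linux-headers-$(uname -r)"

# Add AMD's repo via the amdgpu-install package (example version)
wget https://repo.radeon.com/amdgpu-install/6.0/ubuntu/jammy/amdgpu-install_6.0.60000-1_all.deb
sudo apt install ./amdgpu-install_6.0.60000-1_all.deb
sudo apt update

sudo apt install amdgpu-dkms   # driver first
sudo apt install rocm          # then "everything"
sudo reboot
```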
HWE Kernel
Prereqs
Install
cmath
You may run into some compile errors. You will need `libstdc++-12-dev` in Ubuntu:
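The fix is a one-liner:

```shell
# ROCm's clang needs the GCC 12 C++ standard library headers (e.g. <cmath>)
sudo apt install libstdc++-12-dev
```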
llama.cpp
llama.cpp has ROCm support built-in now (2023-08):
- https://github.com/ggerganov/llama.cpp/#hipblas
- You can use `LLAMA_HIP_UMA=1` for unified memory on APUs
- Use `uname -a`, `dkms status`, and `apt list | grep rocm | grep '\[installed\]'` to get version numbers of the kernel and libs
- OpenCL via CLBlast is a universal/easy option; it's slower than ROCm but should still give decent gains over CPU inference
- As of 2024-01, Vulkan support is merged. See below for testing/comparison
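A minimal ROCm build sketch (the `LLAMA_HIPBLAS` Makefile flag is current as of early 2024; check the llama.cpp README for the latest options):

```shell
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp

# hipBLAS build; add LLAMA_HIP_UMA=1 on APUs for unified memory
make LLAMA_HIPBLAS=1 -j

# If your card needs it, run with an override, e.g.:
#   HSA_OVERRIDE_GFX_VERSION=11.0.0 ./main -m llama-2-7b.Q4_0.gguf -ngl 99
```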
Let’s run some testing with TheBloke/Llama-2-7B-GGUF (Q4_0).
7900 XT + 7900 XTX used together segfaulted on `b7e7982 (1787)` (tested 2024-01-08) but ran with `6db2b41a (1988)` (tested 2024-01-28).

7900 XT (last tested 2024-01-28):

7900 XTX (last tested 2024-01-28):
While the Radeon 7900 XTX has theoretically competitive memory bandwidth and compute, in practice, with ROCm 6.0, hipBLAS still falls behind cuBLAS in llama.cpp:
| | 7900 XT | 7900 XTX | RTX 3090 | RTX 4090 |
| --- | --- | --- | --- | --- |
| Memory GB | 20 | 24 | 24 | 24 |
| Memory BW GB/s | 800 | 960 | 936.2 | 1008 |
| Memory BW % | -16.7% | 0% | -2.5% | +5.0% |
| FP32 TFLOPS | 51.48 | 61.42 | 35.58 | 82.58 |
| FP16 TFLOPS | 103.0 | 122.8 | 71/142* | 165.2/330.3* |
| FP16 TFLOPS % | -16.1% | 0% | +15.6%* | +169.0%* |
| Prompt tok/s | 2366 | 2576 | 3251 | 5415 |
| Prompt % | -8.2% | 0% | +26.2% | +110.2% |
| Inference tok/s | 97.2 | 119.1 | 134.5 | 158.4 |
| Inference % | -18.4% | 0% | +12.9% | +33.0% |
- Tested 2024-01-28 with llama.cpp `6db2b41a (1988)` and latest ROCm (`dkms amdgpu/6.3.6-1697589.22.04`, `rocm 6.0.0.60000-91~22.04`) and CUDA (`dkms nvidia/545.29.06, 6.7.0-arch3-1`, `nvcc cuda_12.3.r12.3/compiler.33492891_0`) on similar platforms (5800X3D for Radeons, 5950X for RTXs)
- RTX cards have much better FP16/BF16 Tensor FLOPS performance that the inferencing engines are taking advantage of. FP16 FLOPS (32-bit/16-bit accumulation numbers) sourced from Nvidia docs (3090, 4090)
Vulkan and CLBlast
| | 5800X3D CPU | 7900 XTX CLBlast | 7900 XTX Vulkan | 7900 XTX ROCm |
| --- | --- | --- | --- | --- |
| Prompt tok/s | 24.5 | 219 | 758 | 2550 |
| Inference tok/s | 10.7 | 35.4 | 52.3 | 119.0 |

- Tested 2024-01-29 with llama.cpp `d2f650cb (1999)` and latest on a 5800X3D w/ DDR4-3600 system with CLBlast `libclblast-dev 1.5.2-2`, Vulkan `mesa-vulkan-drivers 23.0.4-0ubuntu1~22.04.1`, and ROCm (`dkms amdgpu/6.3.6-1697589.22.04`, `rocm 6.0.0.60000-91~22.04`)
Radeon VII
The Radeon VII was a Vega 20 XT (GCN 5.1) card that was released in February 2019 at $700. It has 16GB of HBM2 memory with 1024GB/s of memory bandwidth and 26.88 TFLOPS of FP16. Honestly, while the prefill probably doesn't have much more that could be squeezed out of it, I would expect that with optimization you could double inference performance (if you could use all of its memory bandwidth).
Radeon Vega VII
- Tested 2024-02-02 on a Ryzen 5 2400G system with `rocm-core 5.7.1-1`
System Info
ExLlamaV2
We'll use `main` on TheBloke/Llama-2-7B-GPTQ for testing (GS128, No Act Order).
Install is straightforward:
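A sketch, assuming a working ROCm PyTorch environment (script name from the ExLlamaV2 repo; model path is a placeholder):

```shell
git clone https://github.com/turboderp/exllamav2
cd exllamav2
pip install -r requirements.txt
pip install .

# Quick smoke test against the downloaded GPTQ model
python test_inference.py -m /path/to/Llama-2-7B-GPTQ -p "Once upon a time"
```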
7900 XT
7900 XTX
Running with both GPUs works, although it defaults to loading everything onto one. If you force the VRAM split, interestingly, you can get batch=1 inference to perform slightly better:
The ROCm kernel is very un-optimized vs the CUDA version, but you can see that while inference performance is much lower than llama.cpp's, prompt processing remains ExLlama's strength (this is especially important for long-context scenarios like long multi-turn conversations or RAG).
| | 7900 XT | 7900 XTX | RTX 3090 | RTX 4090 |
| --- | --- | --- | --- | --- |
| Memory GB | 20 | 24 | 24 | 24 |
| Memory BW GB/s | 800 | 960 | 936.2 | 1008 |
| FP32 TFLOPS | 51.48 | 61.42 | 35.58 | 82.58 |
| FP16 TFLOPS | 103.0 | 122.8 | 35.58 | 82.58 |
| Prompt tok/s | 3457 | 3928 | 5863 | 13955 |
| Prompt % | -12.0% | 0% | +49.3% | +255.3% |
| Inference tok/s | 57.9 | 61.2 | 116.5 | 137.6 |
| Inference % | -5.4% | 0% | +90.4% | +124.8% |
- Tested 2024-01-08 with ExLlamaV2 `3b0f523` and latest ROCm (`dkms amdgpu/6.3.6-1697589.22.04`, `rocm 6.0.0.60000-91~22.04`) and CUDA (`dkms nvidia/545.29.06, 6.6.7-arch1-1`, `nvcc cuda_12.3.r12.3/compiler.33492891_0`) on similar platforms (5800X3D for Radeons, 5950X for RTXs)
MLC (NOT WORKING)
Setup
- https://llm.mlc.ai/docs/install/mlc_llm.html#install-mlc-packages
- https://github.com/mlc-ai/mlc-llm/issues/1216
- https://llm.mlc.ai/docs/install/tvm.html#option-1-prebuilt-package
Make a model: https://llm.mlc.ai/docs/compilation/compile_models.html
bitsandbytes
For current status, see:
- https://github.com/TimDettmers/bitsandbytes/issues/107
- https://github.com/TimDettmers/bitsandbytes/pull/756
- https://github.com/TimDettmers/bitsandbytes/discussions/990

The most current working fork (related to that PR):
- https://github.com/arlo-phoenix/bitsandbytes-rocm-5.6/tree/rocm
I was able to successfully build and install this on 2024-02-15:
```shell
mamba create -n bnb python=3.11 -y
mamba activate bnb

# https://pytorch.org/get-started/locally/
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm5.7
python -c "import torch; print('PyTorch version:', torch.__version__); print('CUDA available:', torch.cuda.is_available()); print('CUDA device count:', torch.cuda.device_count()); print('Current CUDA device:', torch.cuda.current_device() if torch.cuda.is_available() else 'None')"

git clone https://github.com/arlo-phoenix/bitsandbytes-rocm-5.6
cd bitsandbytes-rocm-5.6
git fetch
git branch -a
git checkout rocm

# you can use rocminfo to get your ROCM_TARGET
# you might need to modify the Makefile to set ROCM_HOME:=/opt/rocm
make hip ROCM_TARGET=gfx1100
pip install .

python -m bitsandbytes
python -c "import bitsandbytes; print(bitsandbytes.__version__)"

# You probably want these if you're testing inference
pip install transformers
pip install accelerate
```
xformers (NOT WORKING)
2024-02-17: The ROCm/xformers fork defaults to a `main` branch, which compiles but is basically upstream. All the work is done on branches (`develop` seems to be the main one), which sadly… doesn't compile due to mismatching header files from Composable Kernels.

Note: vLLM has its own 0.0.23 with a patch to install, but it still dies w/ RDNA3
```shell
# xformers
git clone https://github.com/ROCm/xformers
cd xformers
git fetch
git branch -a
git checkout develop
git submodule update --init --recursive
python setup.py install
python -c 'import xformers; print(xformers.__version__)'
```
triton
This seems to work (2.1.0):

```shell
git clone https://github.com/ROCm/triton
cd triton/python
pip install ninja cmake
pip install -e .
python -c "import triton; print(triton.__version__)"
```
Flash Attention 2 (SORT OF WORKING)
This seems to work for inference (it only supports a batched forward pass, not the backward pass); see the GH issue for more info. You won't be able to train with this.
Also, this is a fork of 2.0.4, so it does not support Mistral's Sliding Window Attention.
See:
- https://github.com/ROCm/flash-attention
- `howiejay/navi_support` branch
- https://github.com/ROCm/flash-attention/issues/27
Install:
```shell
git clone https://github.com/ROCm/flash-attention
cd flash-attention
git fetch
git branch -a
git checkout howiejay/navi_support
python setup.py install
```
unsloth (NOT WORKING)
Unsloth https://github.com/unslothai/unsloth depends on:
- PyTorch
- Triton
- xformers or flash attention
- bitsandbytes
In theory we have everything we need, and it will start up; however, even after you comment out the `libcuda_dirs()` calls, it will die:
```shell
pip install "unsloth[conda] @ git+https://github.com/unslothai/unsloth.git"
# You'll need to manually edit site-packages/unsloth/__init__.py
# comment out
# libcuda_dirs()
```
TensorFlow (SHOULD WORK?)
Untested, but recent reports are that it should work:
- https://www.reddit.com/r/ROCm/comments/1ahkay9/tensorflow_on_gfx1101_navi32_7800_xt/
- https://rocm.docs.amd.com/projects/install-on-linux/en/latest/how-to/3rd-party/tensorflow-install.html
- Try out: https://cprimozic.net/notes/posts/machine-learning-benchmarks-on-the-7900-xtx/
- Can run the script; it says it's using ROCm Fusion, but it runs on the CPU?

Apparently you need to build your own TF for `gfx1100` support…
- https://gist.github.com/briansp2020/1e8c3e5735087398ebfd9514f26a0007
- https://cprimozic.net/notes/posts/setting-up-tensorflow-with-rocm-on-7900-xtx/
- https://gist.github.com/BloodBlight/0d36b33d215056395f34db26fb419a63

Life is short, putting this off for later.
vLLM (NOT WORKING)
vLLM supports ROCm starting w/ v0.2.4, but only on MI200 cards… https://docs.vllm.ai/en/latest/getting_started/amd-installation.html#build-from-source-rocm
2024-02-17: failed to get it working on RDNA3, dumps out matrix errors
RDNA3 support should be merged in: https://github.com/vllm-project/vllm/pull/2768
Windows
llama.cpp
For an easy time, go to llama.cpp's release page and download a `bin-win-clblast` version. In the Windows terminal, run it with `-ngl 99` to load all the layers into memory.
On a Radeon 7900XT, you should get about double the performance of CPU-only execution.
Compile for ROCm
This was last updated 2023-09-03, so things might change, but here's how I was able to get things working in Windows.
Requirements
- You’ll need Microsoft Visual Studio installed. Install it with the basic C++ environment.
- Follow AMD’s directions and install the ROCm software for Windows.
- You'll need `git` if you want to pull the latest from the repo (you can either get the official Windows installer or use a package manager like Chocolatey: `choco install git`)
- As an alternative, you could just download the Source code.zip from https://github.com/ggerganov/llama.cpp/releases/
Instructions
First, launch “x64 Native Tools Command Prompt” from the Windows Menu (you can hit the Windows key and just start typing x64 and it should pop up).
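From there, the build was roughly as follows (a sketch: the SDK path matches the default ROCm 5.5 Windows install, the `LLAMA_HIPBLAS`/`AMDGPU_TARGETS` flags match the llama.cpp build docs of that era, and gfx1100 assumes a 7900-series card):

```shell
:: Run inside the "x64 Native Tools Command Prompt"
set PATH=C:\Program Files\AMD\ROCm\5.5\bin;%PATH%

cmake -S . -B build -G Ninja -DCMAKE_BUILD_TYPE=Release ^
  -DCMAKE_C_COMPILER=clang -DCMAKE_CXX_COMPILER=clang++ ^
  -DLLAMA_HIPBLAS=ON -DAMDGPU_TARGETS=gfx1100
cmake --build build --config Release
```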
That's it, now you have compiled executables in `build/bin`.
Start a new terminal to run llama.cpp

If you set just the global PATH, you may need to start a new shell before running this in the `llama.cpp` checkout. You can double-check that it's working by outputting the path (`echo %PATH%`) or just running `hipInfo` or another exe in the ROCm bin folder.
NOTE: If your PATH is wonky for some reason, you may get missing .dll errors. You can either fix that or, if all else fails, copy the missing files from `C:\Program Files\AMD\ROCm\5.5\bin` into the `build/bin` folder, since life is too short.
Results
Here's my `llama-bench` results running a llama2-7b q4_0 and q4_K_M:
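The invocation was along these lines (model filenames are placeholders):

```shell
llama-bench -m llama-2-7b.Q4_0.gguf -m llama-2-7b.Q4_K_M.gguf -ngl 99
```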
Unsupported Architectures
On Windows, it may not be possible to apply an `HSA_OVERRIDE_GFX_VERSION` override. In that case, these instructions for compiling custom kernels may help: https://www.reddit.com/r/LocalLLaMA/comments/16d1hi0/guide_build_llamacpp_on_windows_with_amd_gpus_and/
Resources
Here’s a ROCm fork of DeepSpeed (2023-09):
2023-07 Casey Primozic did some testing/benchmarking of the 7900 XTX (TensorFlow, TinyGrad):