Set Up Fooocus AI on Ubuntu 24.04 LTS (AMD GPU)

Fooocus is a powerful text-to-image generation tool that can be found at https://github.com/lllyasviel/Fooocus.
If you have a PC with Ubuntu 24.04 LTS and an AMD GPU, for example a Radeon RX 7700 XT, you can easily run AI image generation tools like Fooocus AI.
In this article, I will show you how to run Fooocus AI locally on your PC. Before you run the commands below, make sure the right Python version and Git are installed. Check out this article on how to do that.
You also need a Python virtual environment with the ROCm build of PyTorch for AMD GPUs; I explained how to set that up here.
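If you still need to create that environment, the following is a minimal sketch of what it could look like. The directory ~/ai-venv matches the copy command used further below, but the apt packages and the ROCm wheel index (rocm6.0 here) are assumptions you should adapt to the linked article and to the current instructions on pytorch.org:
# Hypothetical prerequisite setup; adjust the Python version and ROCm index to your system.
sudo apt install git python3-venv
mkdir -p ~/ai-venv
cd ~/ai-venv
python3 -m venv venv
source venv/bin/activate
# ROCm build of PyTorch; the rocm6.0 index URL is an assumption, check pytorch.org for the current one.
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.0
deactivate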
Installation
Open a terminal by pressing the Super (Windows) key, typing 'terminal' and pressing Enter, or use the Ctrl+Alt+T shortcut. Then go to the home directory of the logged-in user:
cd ~
Download the Git repository of Fooocus AI by running:
git clone https://github.com/lllyasviel/Fooocus
After cloning the repository, copy the virtual environment you prepared earlier into it by running the following commands:
cd Fooocus
cp -a ../ai-venv/venv .
After that, activate the venv:
source venv/bin/activate
Now install the other Python packages:
pip install -r requirements_versions.txt
Check the GFX version of your AMD GPU:
rocminfo | grep "\-gfx"
The output will be something like this:
Name: amdgcn-amd-amdhsa--gfx1101
Now take the digits after 'gfx' and turn them into a version string for HSA_OVERRIDE_GFX_VERSION: the last digit is the patch level, the digit before it the minor version, and the rest the major version, so gfx1101 becomes 11.0.1. Then start Fooocus with that value set, for example in my case:
HSA_OVERRIDE_GFX_VERSION=11.0.1 python entry_with_update.py --listen
Now open your browser and go to http://localhost:7865/ to start generating images.
If you do not want the Fooocus WebUI to be reachable from other machines on your network, but only on localhost, you can remove the --listen flag.
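If you prefer not to read the digits off by hand, the following is a small sketch that derives the value from the rocminfo output. It assumes a purely numeric gfx name such as gfx1101; names containing letters, like gfx90a, would need a different mapping:
# Derive HSA_OVERRIDE_GFX_VERSION from rocminfo (sketch, numeric gfx names only).
GFX_NAME=$(rocminfo | grep -o 'gfx[0-9]*' | head -n 1)   # e.g. gfx1101
GFX_DIGITS=${GFX_NAME#gfx}                               # e.g. 1101
export HSA_OVERRIDE_GFX_VERSION="${GFX_DIGITS%??}.${GFX_DIGITS: -2:1}.${GFX_DIGITS: -1}"
echo "$HSA_OVERRIDE_GFX_VERSION"                         # prints 11.0.1 for gfx1101
python entry_with_update.py --listen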
Running the application
The installation steps above are only necessary the first time. Once the virtual environment is configured and the packages are installed, open a terminal and run the following commands to get everything up and running:
cd Fooocus
source venv/bin/activate
HSA_OVERRIDE_GFX_VERSION=11.0.1 python entry_with_update.py --listen
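To avoid typing these lines every time, you could wrap them in a small launch script. This is only a convenience sketch; the file name run-fooocus.sh and the paths are my own choices, and the GFX version must match your GPU:
#!/usr/bin/env bash
# Hypothetical ~/run-fooocus.sh wrapper around the commands above.
set -e
cd ~/Fooocus
source venv/bin/activate
HSA_OVERRIDE_GFX_VERSION=11.0.1 python entry_with_update.py --listen
Make it executable with chmod +x run-fooocus.sh, after which ./run-fooocus.sh starts Fooocus in one step.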
Common issues
If you get the following warning:
Warning: Ran out of memory when regular VAE decoding, retrying with tiled VAE decoding.
Add the --vae-in-fp16 parameter to the previous command, keeping the HSA_OVERRIDE_GFX_VERSION variable:
HSA_OVERRIDE_GFX_VERSION=11.0.1 python entry_with_update.py --listen --vae-in-fp16
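If the warning keeps coming back, it can also help to watch the VRAM usage while an image is being generated. A simple way to do that, assuming the rocm-smi tool from your ROCm installation is available, is:
# Refresh the VRAM usage report every 2 seconds while Fooocus is generating.
watch -n 2 rocm-smi --showmeminfo vram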