The results of ZeroGPU are not correct with the SAME code
The results on my local machine are shown below.
When the demo is deployed on Hugging Face with ZeroGPU, the results are shown as follows.
The only difference between the two environments is a dummy decorator I create so that the code can also run on my local machine:
```python
import os

# Create a dummy decorator for non-ZeroGPU environments
if os.environ.get("SPACES_ZERO_GPU") is not None:
    import spaces
else:
    class spaces:
        @staticmethod
        def GPU(func):
            # Dummy wrapper that simply calls the function unchanged.
            def wrapper(*args, **kwargs):
                return func(*args, **kwargs)
            return wrapper
```
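For context, this is roughly how the decorator is applied in the app code; the function name `run_inference` and the use of PyTorch below are illustrative assumptions, not taken from the actual Space:

```python
import torch

@spaces.GPU  # real ZeroGPU decorator on Hugging Face, no-op wrapper locally
def run_inference(prompt):
    # Hypothetical GPU-bound entry point, shown only to illustrate where the decorator sits.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    return f"'{prompt}' executed on {device}"
```

Note that the real `spaces.GPU` decorator can also be used with arguments (e.g. `@spaces.GPU(duration=120)`); if the app uses that form, the dummy class above would need to handle it as well.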
It may be due to different Gradio versions.
Related spaces: https://huggingface.co/spaces/hzxie/city-dreamer
Related discussions: https://huggingface.co/spaces/zero-gpu-explorers/README/discussions/110
Which Gradio version should I use? Both the local and Hugging Face environments are running Gradio version 4.4.4, and all other packages are the same across both environments.
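To rule out version drift, here is a small sanity-check sketch that could be run in both environments (the package list is just an illustrative subset):

```python
import importlib.metadata
import torch

# Print the versions most likely to affect GPU results in both environments.
for pkg in ("gradio", "torch", "numpy"):
    print(pkg, importlib.metadata.version(pkg))
print("torch CUDA:", torch.version.cuda, "| CUDA available:", torch.cuda.is_available())
```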
I'm currently using CUDA 12.2 on both Hugging Face and my local machine. I opted for CUDA 12.2 because neither CUDA 11.8 nor CUDA 12.1 works properly on Hugging Face. Here's why:
- CUDA 11.8 requires GCC 11.2, but Hugging Face uses a Docker image based on debian:latest, which ships GCC 11.3 or higher. Downgrading GCC on Debian is much more difficult than on Ubuntu, making it impossible to compile CUDA extensions with CUDA 11.8.
- CUDA 12.1 has a bug that causes compile errors, which was resolved in CUDA 12.2.
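For reference, a minimal sketch (under the assumption that the build simply uses whatever gcc and nvcc are on PATH) to confirm which toolchain versions a CUDA-extension build would pick up:

```python
import shutil
import subprocess

# Report the compiler and CUDA toolkit that an extension build would find on PATH.
for tool in ("gcc", "nvcc"):
    path = shutil.which(tool)
    if path is None:
        print(f"{tool}: not found on PATH")
        continue
    out = subprocess.run([tool, "--version"], capture_output=True, text=True).stdout
    print(f"--- {tool} at {path} ---")
    print(out.strip())
```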
I strongly suspect there are bugs in the ZeroGPU environment, so I have decided to switch back to a legacy runtime without ZeroGPU. However, you can still test the code in the ZeroGPU environment from here.
The code works both locally and on Hugging Face, and it can be used to reproduce the error.