Has anyone run the inference server on an M1?
#13 opened by prompterminal
Running `python -m riffusion.server --port 3013 --host 127.0.0.1` gives me a bunch of errors. I'm on Python 3.9.1, and I don't think I can go lower with an M1. Help?
The answer is that there's no usable GPU support for my M1 MacBook, so use RunPod or some other cloud service instead.
MPS is now supported, see the README: https://github.com/riffusion/riffusion
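For anyone hitting the same issue, a quick way to confirm whether your PyTorch install can use the Apple MPS backend (or falls back to CUDA/CPU) is a small check like this. This is a hedged sketch, not part of riffusion itself; it assumes PyTorch 1.12+, which introduced `torch.backends.mps`:

```python
def pick_device() -> str:
    """Return the best available torch device string, falling back to CPU.

    Illustrative helper (not from riffusion): checks Apple MPS first,
    then CUDA, then falls back to plain CPU.
    """
    try:
        import torch
    except ImportError:
        return "cpu"  # PyTorch not installed at all
    # torch.backends.mps exists in PyTorch >= 1.12
    if getattr(torch.backends, "mps", None) and torch.backends.mps.is_available():
        return "mps"  # Apple Silicon GPU
    if torch.cuda.is_available():
        return "cuda"  # NVIDIA GPU
    return "cpu"

print(pick_device())
```

If this prints `mps`, the server should be able to run on the M1's GPU per the updated README.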
hmartiros changed discussion status to closed