---
title: Chat with LLMs
emoji: 🤖💬
colorFrom: purple
colorTo: blue
sdk: gradio
sdk_version: 4.26.0
app_file: app.py
pinned: true
short_description: 'Chat with LLMs'
---
## Running Locally
1. Check pre-conditions:
   - [Git Large File Storage (LFS)](https://git-lfs.com/) must be installed.
   - Run `python --version` to make sure you're running Python 3.10 or above.
   - A recent PyTorch build must be installed. Here is a sample `conda` command for Linux/WSL2:
   ```
   conda install -y pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
   ```
2. Clone the repo:
   ```
   git lfs install
   git clone https://huggingface.co/spaces/inflaton-ai/llm-qa-bench
   ```
3. Install packages:
   ```
   pip install -r requirements.txt
   ```
4. Set up your environment variables:
   - By default, environment variables are loaded from the `.env.example` file.
   - If you don't want to use the default settings, copy `.env.example` to `.env`. You can then update `.env` for your local runs.
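The copy step can be sketched as follows (run from the repo root; it skips silently if the template is absent or a `.env` already exists):

```shell
# Create a local .env from the template so local settings
# take effect instead of the defaults in .env.example
if [ -f .env.example ] && [ ! -f .env ]; then
  cp .env.example .env
fi
```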
5. Run the automated test:
   ```
   python qa_chain_test.py
   ```
6. Start the local server at `http://localhost:7860`:
   ```
   python app.py
   ```