---
title: Chat with LLMs
emoji: 🤖💬
colorFrom: purple
colorTo: blue
sdk: gradio
sdk_version: 4.26.0
app_file: app.py
pinned: true
short_description: 'Chat with LLMs'
---
## Running Locally
1. Check pre-conditions:
- [Git Large File Storage (LFS)](https://git-lfs.com/) must be installed.
- Run `python --version` to confirm you're running Python 3.10 or above.
- The latest PyTorch must be installed. Here is a sample `conda` command for Linux/WSL2:
```
conda install -y pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
```
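As a quick sketch, the pre-conditions above can be checked with a short Python snippet (the `git-lfs` binary name is an assumption; on some platforms the executable may be named differently):

```python
import sys
import shutil
import importlib.util

# Each flag is True when the corresponding prerequisite is satisfied.
python_ok = sys.version_info >= (3, 10)          # Python 3.10 or above
git_lfs_ok = shutil.which("git-lfs") is not None  # Git LFS on PATH
torch_ok = importlib.util.find_spec("torch") is not None  # PyTorch importable

print(f"Python >= 3.10:   {python_ok}")
print(f"Git LFS on PATH:  {git_lfs_ok}")
print(f"PyTorch installed: {torch_ok}")
```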
2. Clone the repo
```
git lfs install
git clone https://huggingface.co/spaces/inflaton-ai/llm-qa-bench
```
3. Install packages
```
pip install -r requirements.txt
```
4. Set up your environment variables
- By default, environment variables are loaded from the `.env.example` file.
- If you don't want to use the default settings, copy `.env.example` to `.env`. You can then edit `.env` for your local runs.
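Copying the template can be done with `cp .env.example .env`, or with a small guarded Python sketch (the guards are an assumption, so re-running it never overwrites an existing `.env`):

```python
import os
import shutil

# Create a local .env from the template, as step 4 describes.
# Guarded so re-running (or running outside the repo root) is harmless.
created = False
if os.path.exists(".env.example") and not os.path.exists(".env"):
    shutil.copy(".env.example", ".env")
    created = True

print(f".env created from template: {created}")
```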
5. Run the automated test:
```
python qa_chain_test.py
```
6. Start the local server at `http://localhost:7860`:
```
python app.py
```
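Once `python app.py` is running, you can confirm the Gradio UI is reachable with a hypothetical liveness check like the one below (the port follows the URL given above; the check simply reports `False` if nothing is listening):

```python
import urllib.error
import urllib.request

# Fetch the index page of the local Gradio server started by `python app.py`.
url = "http://localhost:7860"
try:
    with urllib.request.urlopen(url, timeout=5) as resp:
        server_up = resp.status == 200
except (urllib.error.URLError, OSError):
    server_up = False  # server not started (or not reachable)

print(f"Server responding at {url}: {server_up}")
```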