English | ็ฎไฝไธญๆ | ๆฅๆฌ่ช | ํ๊ตญ์ด
Document | Roadmap | Twitter | Discord | Demo
๐ Table of Contents
- ๐ก What is RAGFlow?
- ๐ฎ Demo
- ๐ Latest Updates
- ๐ Key Features
- ๐ System Architecture
- ๐ฌ Get Started
- ๐ง Configurations
- ๐ ๏ธ Build from source
- ๐ ๏ธ Launch service from source
- ๐ Documentation
- ๐ Roadmap
- ๐ Community
- ๐ Contributing
๐ก What is RAGFlow?
RAGFlow is an open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from data in a variety of complex formats.
๐ฎ Demo
Try our demo at https://demo.ragflow.io.
๐ฅ Latest Updates
- 2024-09-13 Adds search mode for knowledge base Q&A.
- 2024-09-09 Adds a medical consultant agent template.
- 2024-08-22 Supports text-to-SQL statements through RAG.
- 2024-08-02 Supports GraphRAG inspired by graphrag and mind map.
- 2024-07-23 Supports audio file parsing.
- 2024-07-08 Supports workflow based on Graph.
- 2024-06-27 Supports Markdown and Docx in the Q&A parsing method, including extracting images from Docx files and tables from Markdown files.
- 2024-05-23 Supports RAPTOR for better text retrieval.
๐ Key Features
๐ญ "Quality in, quality out"
- Deep document understanding-based knowledge extraction from unstructured data with complicated formats.
- Finds "needle in a data haystack" of literally unlimited tokens.
๐ฑ Template-based chunking
- Intelligent and explainable.
- Plenty of template options to choose from.
๐ฑ Grounded citations with reduced hallucinations
- Visualization of text chunking to allow human intervention.
- Quick view of the key references and traceable citations to support grounded answers.
๐ Compatibility with heterogeneous data sources
- Supports Word, slides, Excel, TXT, images, scanned copies, structured data, web pages, and more.
๐ Automated and effortless RAG workflow
- Streamlined RAG orchestration that caters to both personal use and large businesses.
- Configurable LLMs as well as embedding models.
- Multiple recall paired with fused re-ranking.
- Intuitive APIs for seamless integration with business.
๐ System Architecture
๐ฌ Get Started
๐ Prerequisites
- CPU >= 4 cores
- RAM >= 16 GB
- Disk >= 50 GB
- Docker >= 24.0.0 & Docker Compose >= v2.26.1
If you have not installed Docker on your local machine (Windows, Mac, or Linux), see Install Docker Engine.
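A quick way to confirm these prerequisites are met (a minimal sketch using standard Linux and Docker commands; the thresholds are the ones listed above):
# Check CPU cores, memory, and free disk space:
$ nproc
$ free -h
$ df -h .
# Check Docker and Docker Compose versions:
$ docker --version
$ docker compose version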
๐ Start up the server
Ensure vm.max_map_count >= 262144:
To check the value of vm.max_map_count:
$ sysctl vm.max_map_count
Reset vm.max_map_count to a value of at least 262144 if it is not.
# In this case, we set it to 262144:
$ sudo sysctl -w vm.max_map_count=262144
This change will be reset after a system reboot. To ensure your change remains permanent, add or update the vm.max_map_count value in /etc/sysctl.conf accordingly:
vm.max_map_count=262144
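One way to make the setting persistent and apply it immediately without a reboot (a sketch; it assumes sudo privileges and that the key is not already present in /etc/sysctl.conf):
# Append the setting and reload the sysctl configuration:
$ echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -p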
Clone the repo:
$ git clone https://github.com/infiniflow/ragflow.git
Start up the server using the pre-built Docker images:
Running the following commands automatically downloads the dev version of the RAGFlow Docker image. To download and run a specific released version instead, update RAGFLOW_VERSION in docker/.env to the intended version, for example RAGFLOW_VERSION=v0.11.0, before running the following commands (a one-line sketch of this edit follows this step).
$ cd ragflow/docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
The core image is about 9 GB in size and may take a while to load.
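For example, to pin the image to the v0.11.0 release mentioned above before starting the containers (a sketch; it assumes GNU sed and that a RAGFLOW_VERSION line already exists in docker/.env; otherwise edit the file by hand):
# Pin the Docker image version in docker/.env:
$ sed -i 's/^RAGFLOW_VERSION=.*/RAGFLOW_VERSION=v0.11.0/' docker/.env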
Check the server status after having the server up and running:
$ docker logs -f ragflow-server
The following output confirms a successful launch of the system:
    ____                 ______ __
   / __ \ ____ _ ____ _ / ____// /____  _      __
  / /_/ // __ `// __ `// /_   / // __ \| | /| / /
 / _, _// /_/ // /_/ // __/  / // /_/ /| |/ |/ /
/_/ |_| \__,_/ \__, //_/    /_/ \____/ |__/|__/
              /____/

 * Running on all addresses (0.0.0.0)
 * Running on http://127.0.0.1:9380
 * Running on http://x.x.x.x:9380
 INFO:werkzeug:Press CTRL+C to quit
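As an optional extra check once the output above appears, you can probe the web server from the command line (a sketch; it assumes the default HTTP serving port 80, with IP_OF_YOUR_MACHINE standing in for your server's IP address):
# Expect an HTTP status code such as 200 once the UI is being served:
$ curl -s -o /dev/null -w '%{http_code}\n' http://IP_OF_YOUR_MACHINE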
If you skip this confirmation step and directly log in to RAGFlow, your browser may prompt a network abnormal error because, at that moment, your RAGFlow may not be fully initialized.
In your web browser, enter the IP address of your server and log in to RAGFlow.
With the default settings, you only need to enter http://IP_OF_YOUR_MACHINE (sans port number), as the default HTTP serving port 80 can be omitted when using the default configurations.
In service_conf.yaml, select the desired LLM factory in user_default_llm and update the API_KEY field with the corresponding API key.
See llm_api_key_setup for more information.
The show is now on!
๐ง Configurations
When it comes to system configurations, you will need to manage the following files:
- .env: Keeps the fundamental setups for the system, such as SVR_HTTP_PORT, MYSQL_PASSWORD, and MINIO_PASSWORD.
- service_conf.yaml: Configures the back-end services.
- docker-compose.yml: The system relies on docker-compose.yml to start up.
You must ensure that changes to the .env file are in line with what is in the service_conf.yaml file.
The ./docker/README file provides a detailed description of the environment settings and service configurations. You are required to keep all environment settings listed in the ./docker/README file aligned with the corresponding configurations in the service_conf.yaml file.
To update the default HTTP serving port (80), go to docker-compose.yml and change 80:80 to <YOUR_SERVING_PORT>:80.
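For instance, to serve on host port 8080 instead of 80 (a sketch; 8080 is just an example value, and the sed command assumes the literal mapping 80:80 appears only in that ports entry of docker/docker-compose.yml):
# Swap the host-side port of the 80:80 mapping:
$ sed -i 's/80:80/8080:80/' docker/docker-compose.yml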
Updates to the above system configurations require a restart of all containers to take effect:
$ docker-compose up -d
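If a changed setting does not seem to take effect, recreating the containers from scratch usually helps (a sketch; it assumes the commands are run from the ragflow/docker directory):
# Stop and remove the containers, then bring them back up with the new settings:
$ docker compose down
$ docker compose up -d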
๐ ๏ธ Build from source
To build the Docker images from source:
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
$ docker build -t infiniflow/ragflow:dev .
$ cd docker
$ chmod +x ./entrypoint.sh
$ docker compose up -d
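Assuming RAGFLOW_VERSION in docker/.env is left at its dev default (see the Docker start-up section above), the next docker compose up -d should pick up the locally built infiniflow/ragflow:dev image. A quick way to confirm the build succeeded (standard Docker commands):
# List locally available RAGFlow images and their tags:
$ docker images | grep ragflow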
๐ ๏ธ Launch service from source
To launch the service from source:
Clone the repository:
$ git clone https://github.com/infiniflow/ragflow.git
$ cd ragflow/
Create a virtual environment, ensuring that Anaconda or Miniconda is installed:
$ conda create -n ragflow python=3.11.0
$ conda activate ragflow
$ pip install -r requirements.txt
# If your CUDA version is higher than 12.0, run the following additional commands:
$ pip uninstall -y onnxruntime-gpu
$ pip install onnxruntime-gpu --extra-index-url https://aiinfra.pkgs.visualstudio.com/PublicPackages/_packaging/onnxruntime-cuda-12/pypi/simple/
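To verify that the GPU build of onnxruntime was picked up after the reinstall (a minimal check; get_available_providers() is part of the onnxruntime Python API, and CUDAExecutionProvider should appear in its output when CUDA is usable):
# Should list CUDAExecutionProvider when the GPU build is installed correctly:
$ python -c "import onnxruntime; print(onnxruntime.get_available_providers())"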
Copy the entry script and configure environment variables:
# Get the Python path:
$ which python
# Get the ragflow project path:
$ pwd
$ cp docker/entrypoint.sh .
$ vi entrypoint.sh
# Adjust configurations according to your actual situation (the following two export commands are newly added):
# - Assign the result of `which python` to `PY`.
# - Assign the result of `pwd` to `PYTHONPATH`.
# - Comment out `LD_LIBRARY_PATH`, if it is configured.
# - Optional: Add Hugging Face mirror.
PY=${PY}
export PYTHONPATH=${PYTHONPATH}
export HF_ENDPOINT=https://hf-mirror.com
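A hypothetical filled-in example of those lines, assuming a Miniconda environment at /home/user/miniconda3 and the repository cloned to /home/user/ragflow (your own paths come from `which python` and `pwd` above):
# Example values only; substitute your own paths:
PY=/home/user/miniconda3/envs/ragflow/bin/python
export PYTHONPATH=/home/user/ragflow
export HF_ENDPOINT=https://hf-mirror.com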
Launch the third-party services (MinIO, Elasticsearch, Redis, and MySQL):
$ cd docker
$ docker compose -f docker-compose-base.yml up -d
Check the configuration files, ensuring that:
- The settings in docker/.env match those in conf/service_conf.yaml.
- The IP addresses and ports for related services in service_conf.yaml match the local machine IP and ports exposed by the container.
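A quick, illustrative way to eyeball both sides of that comparison (a sketch only; run it from the repository root, and adjust the patterns since exact key names differ between versions):
# Ports and credentials defined for the Docker services:
$ grep -E 'PORT|PASSWORD' docker/.env
# Host and port settings the backend expects:
$ grep -nE 'host|port' conf/service_conf.yaml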
Launch the RAGFlow backend service:
$ chmod +x ./entrypoint.sh
$ bash ./entrypoint.sh
Launch the frontend service:
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ vim .umirc.ts
# Update proxy.target to http://127.0.0.1:9380
$ npm run dev
Deploy the frontend service:
$ cd web
$ npm install --registry=https://registry.npmmirror.com --force
$ umi build
$ mkdir -p /ragflow/web
$ cp -r dist /ragflow/web
$ apt install nginx -y
$ cp ../docker/nginx/proxy.conf /etc/nginx
$ cp ../docker/nginx/nginx.conf /etc/nginx
$ cp ../docker/nginx/ragflow.conf /etc/nginx/conf.d
$ systemctl start nginx
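To confirm the copied nginx configuration is valid and the service came up (standard nginx and systemd commands):
# Validate the nginx configuration files:
$ nginx -t
# Confirm the nginx service is running:
$ systemctl status nginx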
๐ Documentation
๐ Roadmap
See the RAGFlow Roadmap 2024.
๐ Community
๐ Contributing
RAGFlow flourishes via open-source collaboration. In this spirit, we embrace diverse contributions from the community. If you would like to be a part, review our Contribution Guidelines first.