---
license: llama2
---

## Installation from source

```bash
git clone https://github.com/foundation-model-stack/fms-extras
cd fms-extras
pip install -e .
```

## Description

This model is intended to be used as an accelerator for [granite 7B (instruct lab)](https://huggingface.co/instructlab/granite-7b-lab) and takes inspiration from the Medusa speculative decoding architecture. This accelerator modifies the MLP into a multi-stage MLP, where each stage predicts a single token in the draft based on both a state vector and the sampled token from the prior stage (the base model can be considered stage 0). The state vector from the base model provides contextual information to the accelerator, while conditioning on prior sampled tokens allows it to produce higher-quality draft n-grams.

Note: The underlying MLP speculator is a generic architecture that can be trained with any generative model to accelerate inference. Training is lightweight and can be completed in only a few days depending on base model size and speed.
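For intuition, here is a minimal PyTorch sketch of such a multi-stage MLP speculator. It is illustrative only: the class, parameter, and argument names are hypothetical, sampling is greedy for brevity, and the actual `MLPSpeculator` in fms-extras adds details such as weighted state/embedding mixing and per-head top-k candidate trees.

```python
import torch
import torch.nn as nn


class MultiStageMLPSpeculator(nn.Module):
    """Illustrative multi-stage MLP speculator (hypothetical names, not the fms-extras code)."""

    def __init__(self, emb_dim: int, inner_dim: int, vocab_size: int, n_predict: int):
        super().__init__()
        # one embedding, projection, norm, and prediction head per draft position
        self.emb = nn.ModuleList(nn.Embedding(vocab_size, inner_dim) for _ in range(n_predict))
        self.proj = nn.ModuleList(
            nn.Linear(emb_dim if i == 0 else inner_dim, inner_dim, bias=False)
            for i in range(n_predict)
        )
        self.ln = nn.ModuleList(nn.LayerNorm(inner_dim) for _ in range(n_predict))
        self.head = nn.ModuleList(nn.Linear(inner_dim, vocab_size, bias=False) for _ in range(n_predict))
        self.act = nn.GELU()

    def forward(self, state: torch.Tensor, last_token: torch.Tensor) -> torch.Tensor:
        """state: (batch, emb_dim) last hidden state from the base model (stage 0).
        last_token: (batch,) token sampled by the base model.
        Returns: (batch, n_predict) greedily drafted tokens."""
        draft = []
        for i in range(len(self.head)):
            # fold the prior stage's sampled token into the running state vector
            state = self.act(self.ln[i](self.proj[i](state) + self.emb[i](last_token)))
            logits = self.head[i](state)
            last_token = logits.argmax(dim=-1)  # greedy sampling for illustration
            draft.append(last_token)
        return torch.stack(draft, dim=-1)
```

Each stage thus reuses a single base-model forward pass (via `state`) while re-grounding on the token actually drafted at the previous position, which is what keeps the draft n-grams coherent.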
## Repository Links

1. [Paged Attention KV-Cache / Speculator](https://github.com/foundation-model-stack/fms-extras)
2. [Production Server with speculative decoding](https://github.com/IBM/text-generation-inference.git)
3. [Speculator training](https://github.com/foundation-model-stack/fms-fsdp/pull/35)

## Samples

_Note: For all samples, your environment must have access to CUDA_

### Production Server Sample

*To try this out running in a production-like environment, please use the pre-built docker image:*

#### Setup

```bash
HF_HUB_CACHE=/hf_hub_cache
chmod a+w $HF_HUB_CACHE
HF_HUB_TOKEN="your huggingface hub token"
TGIS_IMAGE=quay.io/wxpe/text-gen-server:main.ee927a4

docker pull $TGIS_IMAGE

# optionally download granite-7b-lab if the weights do not already exist
docker run --rm \
    -v $HF_HUB_CACHE:/models \
    -e HF_HUB_CACHE=/models \
    -e TRANSFORMERS_CACHE=/models \
    $TGIS_IMAGE \
    text-generation-server download-weights \
    instructlab/granite-7b-lab \
    --token $HF_HUB_TOKEN

# optionally download the speculator model if the weights do not already exist
docker run --rm \
    -v $HF_HUB_CACHE:/models \
    -e HF_HUB_CACHE=/models \
    -e TRANSFORMERS_CACHE=/models \
    $TGIS_IMAGE \
    text-generation-server download-weights \
    ibm/granite-7b-lab-accelerator \
    --token $HF_HUB_TOKEN

# note: if the weights were downloaded separately (not with the above commands), please place them
# in the HF_HUB_CACHE directory and refer to them with a /models/ prefix
docker run -d --rm --gpus all \
    --name my-tgis-server \
    -p 8033:8033 \
    -v $HF_HUB_CACHE:/models \
    -e HF_HUB_CACHE=/models \
    -e TRANSFORMERS_CACHE=/models \
    -e MODEL_NAME=instructlab/granite-7b-lab \
    -e SPECULATOR_NAME=ibm/granite-7b-lab-accelerator \
    -e FLASH_ATTENTION=true \
    -e PAGED_ATTENTION=true \
    -e DTYPE=float16 \
    $TGIS_IMAGE

# check logs and wait for "gRPC server started on port 8033" and "HTTP server started on port 3000"
docker logs my-tgis-server -f

# get the client sample
conda create -n tgis-client-env python=3.11
conda activate tgis-client-env
git clone --branch main --single-branch https://github.com/IBM/text-generation-inference.git
cd text-generation-inference/integration_tests
make gen-client
pip install . --no-cache-dir
```

#### Run Sample

```bash
python sample_client.py
```

_Note: the first prompt may be slower as there is a slight warmup time_

### Minimal Sample

*To try this out with the fms-native compiled model, please execute the following:*

#### Install

```bash
git clone https://github.com/foundation-model-stack/fms-extras
(cd fms-extras && pip install -e .)
pip install transformers==4.35.0 sentencepiece numpy
```

#### Run Sample

##### batch_size=1 (compile + cudagraphs)

```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=ibm.7b_instruct_lab \
    --model_path=$MODEL_PATH \
    --model_source=hf \
    --tokenizer=$MODEL_PATH \
    --speculator_path=ibm/granite-7b-lab-accelerator \
    --speculator_source=hf \
    --top_k_tokens_per_head=4,3,2,2,2 \
    --compile \
    --compile_mode=reduce-overhead
```

##### batch_size=1 (compile)

```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=ibm.7b_instruct_lab \
    --model_path=$MODEL_PATH \
    --model_source=hf \
    --tokenizer=$MODEL_PATH \
    --speculator_path=ibm/granite-7b-lab-accelerator \
    --speculator_source=hf \
    --top_k_tokens_per_head=4,3,2,2,2 \
    --compile
```

##### batch_size=4 (compile)

```bash
MODEL_PATH=/path/to/instructlab/granite-7b-lab
python fms-extras/scripts/paged_speculative_inference.py \
    --variant=ibm.7b_instruct_lab \
    --model_path=$MODEL_PATH \
    --model_source=hf \
    --tokenizer=$MODEL_PATH \
    --speculator_path=ibm/granite-7b-lab-accelerator \
    --speculator_source=hf \
    --top_k_tokens_per_head=4,3,2,2,2 \
    --batch_input \
    --compile
```

Sample code can be found [here](https://github.com/foundation-model-stack/fms-extras/blob/main/scripts/paged_speculative_inference.py).
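To make the accept/verify flow behind these samples concrete, here is a minimal sketch of one greedy speculative decoding step. It is a simplification under stated assumptions: `base_model` is a Hugging Face-style causal LM exposing `.logits` and `output_hidden_states`, `speculator` follows the illustrative interface sketched in the Description above, and the real script uses paged KV-caching and multi-candidate (top-k per head) drafting rather than a single greedy draft.

```python
import torch


@torch.no_grad()
def speculative_step(base_model, speculator, input_ids: torch.Tensor) -> torch.Tensor:
    """One greedy speculative decoding step (illustrative, not the fms-extras implementation)."""
    # 1. one base-model pass: last hidden state (stage 0 input) and the next token
    out = base_model(input_ids, output_hidden_states=True)
    state = out.hidden_states[-1][:, -1]        # (batch, emb_dim)
    next_token = out.logits[:, -1].argmax(-1)   # (batch,)

    # 2. draft n_predict follow-on tokens cheaply with the speculator
    draft = speculator(state, next_token)       # (batch, n_predict)
    candidate = torch.cat([next_token[:, None], draft], dim=-1)

    # 3. verify the whole candidate with a single extra base-model pass
    verify = base_model(torch.cat([input_ids, candidate], dim=-1))
    # greedy choices the base model would make at each candidate position
    checked = verify.logits[:, input_ids.size(1) - 1 : -1].argmax(-1)

    # 4. accept the longest draft prefix the base model agrees with
    #    (min over the batch keeps shapes rectangular in this toy version)
    agree = (candidate[:, 1:] == checked[:, 1:]).long().cumprod(dim=-1).sum(dim=-1)
    n_accept = int(agree.min()) + 1             # the stage-0 token is always kept
    return torch.cat([input_ids, candidate[:, :n_accept]], dim=-1)
```

When the draft is accepted, several tokens are committed per base-model pass, which is where the speedup comes from; when it is rejected, the step degrades gracefully to ordinary one-token-at-a-time decoding.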