---
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
tags:
  - llm
  - ggml
---

# GGML converted versions of [Mosaic's](https://huggingface.co/mosaicml) MPT Models

MPT-7B is a decoder-style transformer pretrained from scratch on 1T tokens of English text and code. This model was trained by [MosaicML](https://www.mosaicml.com). MPT-7B is part of the family of MosaicPretrainedTransformer (MPT) models, which use a modified transformer architecture optimized for efficient training and inference.

## Converted Models:

| Name | Based on | Type | Container |
|------|----------|------|-----------|
| [mpt-7b-f16.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-f16.bin) | [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | fp16 | GGML |
| [mpt-7b-q4_0.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-q4_0.bin) | [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | int4 | GGML |
| [mpt-7b-q4_0-ggjt.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-q4_0-ggjt.bin) | [mpt-7b](https://huggingface.co/mosaicml/mpt-7b) | int4 | GGJT |
| [mpt-7b-chat-f16.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-chat-f16.bin) | [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | fp16 | GGML |
| [mpt-7b-chat-q4_0.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-chat-q4_0.bin) | [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | int4 | GGML |
| [mpt-7b-chat-q4_0-ggjt.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-chat-q4_0-ggjt.bin) | [mpt-7b-chat](https://huggingface.co/mosaicml/mpt-7b-chat) | int4 | GGJT |
| [mpt-7b-instruct-f16.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-instruct-f16.bin) | [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) | fp16 | GGML |
| [mpt-7b-instruct-q4_0.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-instruct-q4_0.bin) | [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) | int4 | GGML |
| [mpt-7b-instruct-q4_0-ggjt.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-instruct-q4_0-ggjt.bin) | [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) | int4 | GGJT |
| [mpt-7b-storywriter-f16.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-storywriter-f16.bin) | [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | fp16 | GGML |
| [mpt-7b-storywriter-q4_0.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-storywriter-q4_0.bin) | [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | int4 | GGML |
| [mpt-7b-storywriter-q4_0-ggjt.bin](https://huggingface.co/Rustformers/mpt-7b-ggml/blob/main/mpt-7b-storywriter-q4_0-ggjt.bin) | [mpt-7b-storywriter](https://huggingface.co/mosaicml/mpt-7b-storywriter) | int4 | GGJT |

⚠️ Caution ⚠️: `mpt-7b-storywriter` is still under development!
## Usage

### Python via [llm-rs](https://github.com/LLukas22/llm-rs-python):

#### Installation

Via pip: `pip install llm-rs`

#### Run inference

```python
from llm_rs import AutoModel

# Load the model; any file from the table above can be passed as `model_file`
model = AutoModel.from_pretrained("Rustformers/mpt-7b-ggml", model_file="mpt-7b-q4_0-ggjt.bin")

# Generate
print(model.generate("The meaning of life is"))
```

### Rust via [Rustformers/llm](https://github.com/rustformers/llm):

#### Installation

```sh
git clone --recurse-submodules git@github.com:rustformers/llm.git
cargo build --release
```

#### Run inference

```sh
cargo run --release -- mpt infer -m path/to/model.bin -p "Tell me how cool the Rust programming language is:"
```

### C via [GGML](https://github.com/ggerganov/ggml):

The GGML example program only supports the `GGML` container type, so use the non-GGJT files from the table above.

#### Installation

```sh
git clone https://github.com/ggerganov/ggml
cd ggml
mkdir build && cd build
cmake ..
make -j4 mpt
```

#### Run inference

```sh
./bin/mpt -m path/to/model.bin -p "The meaning of life is"
```
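The example binary exposes additional generation options through the shared ggml example argument parser. The flag names below are assumptions based on that parser and may differ between ggml revisions; `./bin/mpt -h` prints the authoritative list:

```sh
# Assumed flags from the common ggml example parser (verify with ./bin/mpt -h):
#   -n  number of tokens to predict
#   -t  number of CPU threads
#   --top_k / --top_p / --temp  sampling parameters
./bin/mpt -m path/to/model.bin -n 128 -t 8 --top_k 40 --top_p 0.9 --temp 0.8 \
  -p "The meaning of life is"
```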
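### Rust library usage via [Rustformers/llm](https://github.com/rustformers/llm):

Besides the CLI, `llm` can also be used as a crate. Its API is still evolving, so the sketch below is an assumption based on the 0.1-era interface (`llm::load`, `start_session`, `infer`); check the crate documentation for the version you depend on:

```rust
// Assumed Cargo.toml dependencies: llm = "0.1", rand = "0.8"
use std::io::Write;
use llm::Model; // trait import brings the session methods into scope

fn main() {
    // Assumption: 0.1-era `llm` API; `llm::models::Mpt` selects the MPT architecture.
    let model = llm::load::<llm::models::Mpt>(
        std::path::Path::new("path/to/model.bin"),
        Default::default(),                 // default ModelParameters
        llm::load_progress_callback_stdout, // prints loading progress
    )
    .unwrap_or_else(|err| panic!("Failed to load model: {err}"));

    // Start a fresh session and stream generated tokens to stdout.
    let mut session = model.start_session(Default::default());
    session
        .infer::<std::convert::Infallible>(
            &model,
            &mut rand::thread_rng(),
            &llm::InferenceRequest {
                prompt: "The meaning of life is",
                ..Default::default()
            },
            &mut Default::default(), // output request
            |t| {
                print!("{t}");
                std::io::stdout().flush().unwrap();
                Ok(())
            },
        )
        .unwrap();
}
```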
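### Prompt format for the chat and instruct variants:

The fine-tuned variants work best with the prompt formats they were trained on. According to the upstream [mpt-7b-instruct](https://huggingface.co/mosaicml/mpt-7b-instruct) card, the instruct model uses an Alpaca/Dolly-style template; below is a minimal sketch with `llm-rs`, assuming the template carries over unchanged to these conversions:

```python
from llm_rs import AutoModel

# Alpaca/Dolly-style template from the upstream mpt-7b-instruct card
# (assumed to apply unchanged to these GGML conversions).
TEMPLATE = (
    "Below is an instruction that describes a task. "
    "Write a response that appropriately completes the request.\n"
    "### Instruction:\n{instruction}\n### Response:\n"
)

model = AutoModel.from_pretrained(
    "Rustformers/mpt-7b-ggml", model_file="mpt-7b-instruct-q4_0-ggjt.bin"
)
prompt = TEMPLATE.format(instruction="Explain GGML quantization in one paragraph.")
print(model.generate(prompt))
```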