
CTranslate2 int8 version of turbcat 8b

This is an int8_float16 quantization of turbcat 8b.
See the CTranslate2 documentation and GitHub repository for more details.

This model and its dataset were created by Kaltcit, an admin of the Exllama Discord server.

This model was converted to ct2 format using the following command:

```
ct2-transformers-converter --model kat_turbcat --output_dir turbcat-ct2 --quantization int8_float16 --low_cpu_mem_usage
```

No conversion is needed when using the model from this repository, as it is already in ct2 format.
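Once downloaded, the model can be loaded for generation with the CTranslate2 Python API. The sketch below is a minimal example, not an official usage guide: the `model_dir` path and the prompt are assumptions, and the tokenizer is loaded from the same directory on the assumption that the original tokenizer files are shipped alongside the converted model.

```python
import ctranslate2
import transformers

# Assumption: "turbcat-ct2" is the local directory containing the converted
# model and the original tokenizer files.
model_dir = "turbcat-ct2"
tokenizer = transformers.AutoTokenizer.from_pretrained(model_dir)

# Load the quantized model; compute_type matches the int8_float16 quantization
# used during conversion. Use device="cpu" if no GPU is available.
generator = ctranslate2.Generator(
    model_dir, device="cuda", compute_type="int8_float16"
)

prompt = "Hello, how are you?"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))

# generate_batch takes lists of token strings and returns GenerationResult
# objects; sequences_ids holds the generated token ids.
results = generator.generate_batch(
    [tokens], max_length=128, sampling_topk=10
)
print(tokenizer.decode(results[0].sequences_ids[0]))
```

Note that this requires both the `ctranslate2` and `transformers` packages; the example will only run after the model files have been downloaded locally.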

