42dot_LLM-SFT-1.3B_GGUF

Description

This repository contains the GGUF conversion and the most relevant quantizations of 42dot's 42dot_LLM-SFT-1.3B model, ready to be used with llama.cpp and similar applications.

Files

Since the model has the required LLaMA architecture, the original (unquantized) GGUF conversion has also been made available in order to allow for fine-tuning.

From this file, the following quantizations were derived:

(tell me if you need more)

Usage Details

Any technical details can be found on the original model card. The most important ones for using this model are:

  • the context length is 4096 tokens
  • there does not seem to be a specific prompt structure - just provide the text you want to be completed

Text Completion with LLaMA.cpp

For simple inference, use a command similar to:

./main -m 42dot_LLM-SFT-1.3B-Q8_K.gguf --temp 0 --top-k 4 --prompt "who was Joseph Weizenbaum?"
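The flags `--temp 0 --top-k 4` make decoding near-greedy: at each step only the 4 most probable tokens are considered, and a temperature of 0 collapses the choice to the single best one. The top-k filtering step can be sketched in plain Python (an illustration only, not llama.cpp's actual sampler, which works on raw logits):

```python
# Toy sketch of top-k filtering (illustrative only; llama.cpp's real
# sampler operates on logits and uses its own data structures).

def top_k_filter(probs, k):
    """Keep the k most probable tokens and renormalize their probabilities."""
    # sort token strings by descending probability and keep the first k
    kept = sorted(probs, key=probs.get, reverse=True)[:k]
    total = sum(probs[t] for t in kept)
    return {t: probs[t] / total for t in kept}

# made-up next-token distribution
probs = {"cat": 0.5, "dog": 0.3, "fish": 0.15, "bird": 0.04, "ant": 0.01}
filtered = top_k_filter(probs, 4)        # "ant" is dropped, the rest renormalized
best = max(filtered, key=filtered.get)   # temperature 0 ~ pick the argmax
print(best)
```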

Text Tokenization with LLaMA.cpp

To get a list of tokens, use a command similar to:

./tokenize -m 42dot_LLM-SFT-1.3B-Q8_K.gguf --prompt "who was Joseph Weizenbaum?"
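Conceptually, tokenization maps the prompt to IDs from the model's vocabulary, matching the longest known pieces it can. The real model uses the BPE vocabulary stored inside the GGUF file; a toy greedy longest-match tokenizer over a made-up vocabulary illustrates the idea:

```python
# Toy greedy longest-match tokenizer (illustration only; the actual model
# uses the BPE vocabulary embedded in the GGUF file).

VOCAB = {"who": 0, " was": 1, " Joseph": 2, " We": 3, "izen": 4, "baum": 5, "?": 6}

def tokenize(text, vocab):
    """Greedily match the longest vocabulary entry at each position."""
    ids, i = [], 0
    while i < len(text):
        match = None
        # try the longest possible substring first
        for j in range(len(text), i, -1):
            piece = text[i:j]
            if piece in vocab:
                match = piece
                break
        if match is None:
            raise ValueError(f"no vocabulary entry covers {text[i]!r}")
        ids.append(vocab[match])
        i += len(match)
    return ids

print(tokenize("who was Joseph Weizenbaum?", VOCAB))
```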

Embeddings Calculation with LLaMA.cpp

Text embeddings are calculated with a command similar to:

./embedding -m 42dot_LLM-SFT-1.3B-Q8_K.gguf --prompt "who was Joseph Weizenbaum?"
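The command prints one embedding vector for the given prompt. A common use of such vectors is to compare two texts by the cosine similarity of their embeddings; a minimal sketch in plain Python (the vectors below are made up for illustration - in practice they would come from the `./embedding` output):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# made-up 4-dimensional vectors; real embeddings from this model are much longer
v1 = [0.1, 0.3, -0.2, 0.7]
v2 = [0.1, 0.25, -0.1, 0.6]
print(cosine_similarity(v1, v2))  # close to 1.0 for similar texts
```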

License

The original model "is licensed under the Creative Commons Attribution-NonCommercial 4.0 (CC BY-NC 4.0)" - for that reason, the same license was also chosen for the conversions found in this repository.

So, in order to be fair and give credit where it is due:

  • the original model was created and published by 42dot
  • besides quantization, no changes were applied to the model itself