
The goal

The goal of this model is to provide a fine-tuned Phi-2 (https://huggingface.co/microsoft/phi-2) model that has knowledge of the vintage NEXTSTEP operating system and is able to answer questions on the topic.

Details

The model was trained on 35,439 question-answer pairs automatically generated from the NEXTSTEP 3.3 System Administrator documentation. A locally running Q8-quantized Orca2 13B model (https://huggingface.co/TheBloke/Orca-2-13B-GGUF) was used for training data generation. The generation was completely unsupervised, with only some sanity checks (e.g., ignoring data chunks containing fewer than 100 tokens). The maximum context size for Orca2 is 4096 tokens, so a simple rule of splitting chunks longer than 3500 tokens (leaving room for the prompt instructions) was applied. Chunking did not consider context, so the text could be split mid-context. The evaluation set was generated with a similar method on 1% of the raw data using Llama 2 13B Chat (https://huggingface.co/TheBloke/Llama-2-13B-chat-GGUF).
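
The chunking rule above can be illustrated with a short sketch. This is not the actual data-generation pipeline: the chunk_text helper, the choice of the Phi-2 tokenizer for counting tokens, and the input file name are illustrative assumptions.

```python
# Minimal sketch of the naive token-count chunking described above (hypothetical
# helper, not the published pipeline). Chunks are cut purely by token count, so
# a chunk may end mid-context, exactly as noted in the card.
from transformers import AutoTokenizer

MAX_CHUNK_TOKENS = 3500  # leaves headroom for prompt instructions within Orca2's 4096-token context
MIN_CHUNK_TOKENS = 100   # sanity check: drop chunks too short to be useful

def chunk_text(text: str, tokenizer) -> list[str]:
    token_ids = tokenizer.encode(text)
    chunks = []
    for start in range(0, len(token_ids), MAX_CHUNK_TOKENS):
        piece = token_ids[start:start + MAX_CHUNK_TOKENS]
        if len(piece) >= MIN_CHUNK_TOKENS:
            chunks.append(tokenizer.decode(piece))
    return chunks

tokenizer = AutoTokenizer.from_pretrained("microsoft/phi-2")  # tokenizer choice is an assumption
# chunks = chunk_text(open("nextstep_sysadmin_docs.txt").read(), tokenizer)  # hypothetical file name
```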

Trained locally on 2x RTX 3090 GPUs with vanilla DDP via HuggingFace Accelerate for 50 epochs. As I wanted to add new knowledge to the base model, r=128 and lora_alpha=128 were used; the resulting LoRA weights were 3.5% of the base model's parameters.
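
A minimal sketch of such a LoRA setup (r=128, lora_alpha=128), assuming the Hugging Face PEFT library, is shown below. The target modules, dropout value, and training loop are assumptions and are not specified in this card.

```python
# Illustrative LoRA configuration matching the hyperparameters stated above.
# Target modules and dropout are assumed, not taken from the card.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2", torch_dtype="auto")

lora_config = LoraConfig(
    r=128,
    lora_alpha=128,
    lora_dropout=0.05,  # assumed value
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],  # assumed Phi-2 attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # prints the trainable fraction for these settings
```

A training script built around this would typically be launched with `accelerate launch` to get DDP across the two GPUs.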

Sample code

Sample code for chatting with the model: https://github.com/csabakecskemeti/ai_utils/blob/main/generate.py

For the best results, instruct the model not to refer to other chapters but to collect the whole answer itself, e.g.: "Give me a complete answer do not refer to other chapters but collect the information from them. How to setup a local network in Openstep OS?"
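
If you prefer not to use the linked script, a minimal inference sketch with plain transformers is shown below. The generation parameters and the assumption that this repository can be loaded directly as a merged FP16 causal LM (rather than a standalone LoRA adapter) are mine.

```python
# Minimal inference sketch (alternative to the linked generate.py script);
# generation settings are assumptions, not the author's recommended values.
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "DevQuasar/vintage-nextstep_os_systemadmin-ft-phi2"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo, torch_dtype="auto", device_map="auto")  # needs accelerate installed

prompt = (
    "Give me a complete answer do not refer to other chapters but collect the "
    "information from them. How to setup a local network in Openstep OS?"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Print only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(output[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```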
