
TokenBender/evolvedSeeker_1_3-GGUF

Quantized GGUF model files for evolvedSeeker_1_3 from TokenBender

Name                           Quant method   Size
evolvedseeker_1_3.fp16.gguf    fp16           2.69 GB
evolvedseeker_1_3.q2_k.gguf    q2_k           631.71 MB
evolvedseeker_1_3.q3_k_m.gguf  q3_k_m         704.97 MB
evolvedseeker_1_3.q4_k_m.gguf  q4_k_m         873.58 MB
evolvedseeker_1_3.q5_k_m.gguf  q5_k_m         1.00 GB
evolvedseeker_1_3.q6_k.gguf    q6_k           1.17 GB
evolvedseeker_1_3.q8_0.gguf    q8_0           1.43 GB
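
These files run on any GGUF-compatible runtime. A minimal sketch using the llama-cpp-python bindings, assuming the q4_k_m file has been downloaded locally (the path and generation settings are illustrative):

from llama_cpp import Llama

# Load the 4-bit quantized model; point model_path at the downloaded file.
llm = Llama(
    model_path="./evolvedseeker_1_3.q4_k_m.gguf",
    n_ctx=2048,           # context window
    chat_format="chatml", # the original card recommends ChatML
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a function that checks whether a string is a palindrome."}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])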

Original Model Card:

evolvedSeeker-1_3

EvolvedSeeker v0.0.1 (First phase)

This model is a fine-tuned version of deepseek-ai/deepseek-coder-1.3b-base on 50k instructions for 3 epochs.

I have curated the instructions mostly from evolInstruct datasets, along with some portions of glaive coder.

Around 3k answers were modified via self-instruct.

Collaborate with or consult me: Twitter, Discord

The recommended prompt format is ChatML; Alpaca will also work, but take care with the EOT token. A generic ChatML layout is sketched below.
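
For reference, this is what the generic ChatML turn structure looks like; the model's own chat template (applied automatically by apply_chat_template in the inference example below) is authoritative, so treat the exact special tokens here as an assumption:

# ChatML marks each turn with <|im_start|> and <|im_end|> tokens.
prompt = (
    "<|im_start|>user\n"
    "Write a hello-world program in Python.<|im_end|>\n"
    "<|im_start|>assistant\n"  # the model's reply is generated from here
)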

Chat Model Inference

from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model (trust_remote_code as in the original card).
tokenizer = AutoTokenizer.from_pretrained("TokenBender/evolvedSeeker_1_3", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("TokenBender/evolvedSeeker_1_3", trust_remote_code=True).cuda()

messages = [
    {'role': 'user', 'content': "write a program to reverse letters in each word in a sentence without reversing order of words in the sentence."}
]

# Format the conversation with the model's chat template and tokenize it.
inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)

# 32021 is the id of the <|EOT|> token; tokenizer.convert_tokens_to_ids("<|EOT|>") avoids hardcoding it.
# top_k/top_p are ignored under greedy decoding (do_sample=False), so they are omitted here.
outputs = model.generate(inputs, max_new_tokens=512, do_sample=False, num_return_sequences=1, eos_token_id=32021)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][len(inputs[0]):], skip_special_tokens=True))
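
For reference, a correct answer to the example prompt (a hand-written solution for checking the model's output, not the model's own generation) can be as short as:

def reverse_letters(sentence: str) -> str:
    # Reverse each word in place while keeping the word order intact.
    return " ".join(word[::-1] for word in sentence.split(" "))

print(reverse_letters("hello world"))  # -> "olleh dlrow"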

Model description

First model of Project PIC (Partner-in-Crime) in the 1.3B range. Almost all of the work on this model is still pending, hence v0.0.1.

Intended uses & limitations

  • Superfast Copilot.
  • Runs near-lossless quantized in about 1 GB of RAM.
  • Useful for code dataset curation and evaluation.

Limitations: this is a smol model, so smol brain; it may have crammed a few things, and reasoning tests may fail beyond a certain point.

Training procedure

SFT

Training results

HumanEval score: 68.29%

Framework versions

  • Transformers 4.35.2
  • PyTorch 2.0.1
  • Datasets 2.15.0
  • Tokenizers 0.15.0