---
language:
- en
license: mit
library_name: transformers
tags:
- axolotl
- finetune
- dpo
- microsoft
- phi
- pytorch
- phi-3
- nlp
- code
- chatml
base_model: microsoft/Phi-3-mini-4k-instruct
datasets:
- MaziyarPanahi/truthy-dpo-v0.1-axolotl
model_name: Phi-3-mini-4k-instruct-v0.1
pipeline_tag: text-generation
inference: false
model_creator: MaziyarPanahi
quantized_by: MaziyarPanahi
---

Phi-3 Logo

# MaziyarPanahi/Phi-3-mini-4k-instruct-v0.1

This model is a fine-tune (DPO) of the `microsoft/Phi-3-mini-4k-instruct` model.

# ⚡ Quantized GGUF

Coming soon.

# 🏆 [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)

Coming soon.

# Prompt Template

This model uses the `ChatML` prompt template:

```
<|im_start|>system
{System}
<|im_end|>
<|im_start|>user
{User}
<|im_end|>
<|im_start|>assistant
{Assistant}
```

# How to use

You can use this model by passing `MaziyarPanahi/Phi-3-mini-4k-instruct-v0.1` as the model name to Hugging Face's `transformers` library.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, TextStreamer
from transformers import pipeline
import torch

model_id = "MaziyarPanahi/Phi-3-mini-4k-instruct-v0.1"

# Load the model in bfloat16 and let accelerate place it on the available device(s)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
    # attn_implementation="flash_attention_2"
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id,
    trust_remote_code=True
)

# Stream generated tokens to stdout as they are produced
streamer = TextStreamer(tokenizer)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Stop generation on any of the end-of-turn tokens
terminators = [
    tokenizer.eos_token_id,
    tokenizer.convert_tokens_to_ids("<|im_end|>"),
    tokenizer.convert_tokens_to_ids("<|assistant|>"),
    tokenizer.convert_tokens_to_ids("<|end|>")
]

pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
)

generation_args = {
    "max_new_tokens": 500,
    "return_full_text": False,
    "temperature": 0.0,  # ignored because do_sample is False (greedy decoding)
    "do_sample": False,
    "streamer": streamer,
    "eos_token_id": terminators,
}

output = pipe(messages, **generation_args)
print(output[0]['generated_text'])
```
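
If you prefer to render the ChatML prompt yourself rather than handing the `messages` list to the pipeline, `tokenizer.apply_chat_template` can build it and you can call `model.generate` directly. This is a minimal sketch, assuming the tokenizer ships a ChatML chat template (as the prompt-template section above implies) and that `<|im_end|>` is the end-of-turn token:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "MaziyarPanahi/Phi-3-mini-4k-instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "system", "content": "You are a pirate chatbot who always responds in pirate speak!"},
    {"role": "user", "content": "Who are you?"},
]

# Render the ChatML prompt (<|im_start|>...<|im_end|>) and append the assistant
# header so the model continues from the assistant turn.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        input_ids,
        max_new_tokens=500,
        do_sample=False,  # greedy decoding, matching the pipeline example above
        eos_token_id=tokenizer.convert_tokens_to_ids("<|im_end|>"),  # assumed end-of-turn token
    )

# Decode only the newly generated tokens, skipping the prompt
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```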