---
base_model: mzbac/Phi-3-mini-4k-grammar-correction
inference: true
license: mit
model_creator: mzbac
model_name: Phi-3-mini-4k-grammar-correction
pipeline_tag: text-generation
quantized_by: afrideva
tags:
- gguf
- ggml
- quantized
---

# Phi-3-mini-4k-grammar-correction-GGUF

Quantized GGUF model files for [Phi-3-mini-4k-grammar-correction](https://huggingface.co/mzbac/Phi-3-mini-4k-grammar-correction) from [mzbac](https://huggingface.co/mzbac).

## Original Model Card:

# Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "mzbac/Phi-3-mini-4k-grammar-correction"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": "Please correct, polish, or translate the text delimited by triple backticks to standard English.",
    },
    {
        "role": "user",
        "content": "Text=```neither 经理或员工 has been informed about the meeting```",
    },
]

input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Stop generation at either the EOS token or the <|end|> turn terminator.
terminators = [tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<|end|>")]

outputs = model.generate(
    input_ids,
    max_new_tokens=256,
    eos_token_id=terminators,
    do_sample=True,
    temperature=0.1,
)
response = outputs[0]
print(tokenizer.decode(response))
# <|user|> Please correct, polish, or translate the text delimited by triple backticks to standard English.<|end|><|assistant|>
# <|user|> Text=```neither 经理或员工 has been informed about the meeting```<|end|>
# <|assistant|> Output=Neither the manager nor the employee has been informed about the meeting.<|end|>
```
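
## Running the GGUF files

The snippet above loads the full-precision model with `transformers`; the quantized GGUF files in this repo are intended for llama.cpp-compatible runtimes instead. Below is a minimal sketch using `llama-cpp-python`, with the same two-user-message prompt format as the original card. The repo id and the `*q4_k_m.gguf` filename pattern are assumptions for illustration, so substitute the actual `.gguf` file listed in this repo.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface-hub are installed
# (pip install llama-cpp-python huggingface-hub).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="afrideva/Phi-3-mini-4k-grammar-correction-GGUF",  # assumed repo id
    filename="*q4_k_m.gguf",  # hypothetical quant; pick a file from the repo's file list
    n_ctx=4096,               # the base model has a 4k context window
)

output = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": "Please correct, polish, or translate the text delimited by triple backticks to standard English.",
        },
        {
            "role": "user",
            "content": "Text=```neither 经理或员工 has been informed about the meeting```",
        },
    ],
    max_tokens=256,
    temperature=0.1,
)
print(output["choices"][0]["message"]["content"])
```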