
MeowGPT Readme

Overview

MeowGPT, developed by CutyCat2000, is a language model based on Llama (checkpoint version 2). Trained on the OpenOrca dataset, it is designed to generate text in a conversational manner and can be used for various natural language processing tasks.

Usage

Loading the Model

To use MeowGPT, you can load it via the transformers library in Python using the following code:

from transformers import LlamaTokenizer, AutoModelForCausalLM

# Load the tokenizer and model weights from the Hugging Face Hub
tokenizer = LlamaTokenizer.from_pretrained("cutycat2000/MeowGPT-2")
model = AutoModelForCausalLM.from_pretrained("cutycat2000/MeowGPT-2")
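The card lists the published weights as FP16, so you can also load them directly in half precision on a GPU. This is a minimal sketch rather than part of the official instructions; device_map="auto" assumes the accelerate package is installed.

import torch
from transformers import LlamaTokenizer, AutoModelForCausalLM

tokenizer = LlamaTokenizer.from_pretrained("cutycat2000/MeowGPT-2")
model = AutoModelForCausalLM.from_pretrained(
    "cutycat2000/MeowGPT-2",
    torch_dtype=torch.float16,  # weights are stored in FP16
    device_map="auto",          # requires accelerate; remove for CPU-only loading
)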

Example Prompt

An example of how to prompt the model to generate text:

prompt = "<s> [|User|] Hello World </s>[|Assistant|]"

Here <s> and </s> are the beginning-of-sequence and end-of-sequence tokens, and the [|User|] and [|Assistant|] tags mark the conversation turns.
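
A minimal generation sketch that combines the loading code and the prompt format above; the generation settings (max_new_tokens, sampling) are illustrative choices, not recommendations from the model author.

prompt = "<s> [|User|] Hello World </s>[|Assistant|]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))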

About the Model

  • Base Model: Llama
  • Checkpoint Version: 2
  • Dataset Used: OpenOrca
  • Model Size: 3.02B parameters (safetensors, FP16)

Citation

If you use MeowGPT in your research or projects, please consider citing CutyCat2000 and the relevant resources associated with the OpenOrca dataset.

Disclaimer

Please note that while MeowGPT is trained to generate text from given prompts, it may not always produce accurate or contextually appropriate responses. It is recommended to review and validate generated content before using it in critical applications.

For more information or support, refer to the transformers library documentation or CutyCat2000's resources.
