
Synatra-Mixtral-8x7B

Synatra-Mixtral-8x7B is a fine-tuned version of the mistralai/Mixtral-8x7B-Instruct-v0.1 model, trained on Korean datasets.

It offers strong comprehension and inference capabilities in Korean and is licensed under Apache-2.0.

Join Our Discord

Server Link

License

OPEN, Apache-2.0.

Model Details

Base Model
mistralai/Mixtral-8x7B-Instruct-v0.1

Trained On
A100 80GB * 6

Instruction format

It follows the Alpaca format.

Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{input}

### Response:
{output}
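
If you want to build the prompt string by hand rather than through the tokenizer's chat template, here is a minimal sketch following the template above (the helper name build_alpaca_prompt is illustrative, not part of the model's API):

def build_alpaca_prompt(instruction: str) -> str:
    # Fill the Alpaca template shown above; the response section is left
    # empty so the model completes it.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_alpaca_prompt("Explain Einstein's theory of relativity in detail.")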

Model Benchmark

TBD

Implementation Code

from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" # the device to load the model onto

model = AutoModelForCausalLM.from_pretrained("maywell/Synatra-Mixtral-8x7B")
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-Mixtral-8x7B")

messages = [
    # Korean prompt: "Explain Einstein's theory of relativity in detail."
    {"role": "user", "content": "μ•„μΈμŠˆνƒ€μΈμ˜ μƒλŒ€μ„±μ΄λ‘ μ— λŒ€ν•΄μ„œ μžμ„Ένžˆ μ„€λͺ…ν•΄μ€˜."},
]

# Render the conversation with the tokenizer's chat template and return input ids.
encodeds = tokenizer.apply_chat_template(messages, return_tensors="pt")

model_inputs = encodeds.to(device)
model.to(device)

# Sample up to 1,000 new tokens and decode the full sequence (prompt + completion).
generated_ids = model.generate(model_inputs, max_new_tokens=1000, do_sample=True)
decoded = tokenizer.batch_decode(generated_ids)
print(decoded[0])
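
Mixtral-8x7B has roughly 47B total parameters, so loading it in full precision requires far more memory than a single consumer GPU. Below is a minimal sketch of 4-bit quantized loading, assuming the optional bitsandbytes and accelerate packages are installed (this variant is not part of the original card):

from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumption: bitsandbytes is installed; load the weights in 4-bit to reduce VRAM use.
quant_config = BitsAndBytesConfig(load_in_4bit=True)

model = AutoModelForCausalLM.from_pretrained(
    "maywell/Synatra-Mixtral-8x7B",
    quantization_config=quant_config,
    device_map="auto",  # let accelerate spread layers across available devices
)
tokenizer = AutoTokenizer.from_pretrained("maywell/Synatra-Mixtral-8x7B")

Note that with device_map="auto" the model is placed on devices automatically, so the explicit model.to(device) call from the example above is unnecessary.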

Author's Message

This model's training was not sponsored by any organization; it was made possible by support from people around the world.

Support Me

Contact Me on Discord - is.maywell

Follow me on Twitter: https://twitter.com/stablefluffy
