Model Card for PLLaMa-7b-instruct
This model is specialized for plant science: it was built by continued pretraining of LLaMa-2-7b-base on over 1.5 million plant science academic articles, followed by instruction tuning so that it can follow user instructions.
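As a rough illustration only (not the authors' actual training code), continued pretraining on a domain corpus with the Hugging Face Trainer might look like the sketch below; the data file name, sequence length, and hyperparameters are placeholder assumptions, not values from the paper.

from datasets import load_dataset
from transformers import (LlamaForCausalLM, LlamaTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)

tokenizer = LlamaTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")
tokenizer.pad_token = tokenizer.eos_token
model = LlamaForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")

# Placeholder text file standing in for the ~1.5M plant science articles
corpus = load_dataset("text", data_files={"train": "plant_science_articles.txt"})["train"]
tokenized = corpus.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=2048),
    batched=True, remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="pllama-7b-continued",
                           per_device_train_batch_size=1,
                           gradient_accumulation_steps=16,
                           num_train_epochs=1,
                           bf16=True),
    train_dataset=tokenized,
    # Causal LM objective (mlm=False), i.e. plain next-token prediction
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()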
- Developed by: UCSB
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
- Finetuned from model: LLaMa-2-7b
- Paper: https://arxiv.org/pdf/2401.01600.pdf
- Demo: [More Information Needed]
How to Get Started with the Model
from transformers import LlamaTokenizer, LlamaForCausalLM
import torch

# Load the tokenizer and model; run in half precision on GPU
tokenizer = LlamaTokenizer.from_pretrained("Xianjun/PLLaMa-7b-instruct")
model = LlamaForCausalLM.from_pretrained("Xianjun/PLLaMa-7b-instruct").half().to("cuda")

instruction = "How to ..."
batch = tokenizer(instruction, return_tensors="pt", add_special_tokens=False).to("cuda")

# Sample up to 512 new tokens for the response
with torch.no_grad():
    output = model.generate(**batch, max_new_tokens=512, temperature=0.7, do_sample=True)
response = tokenizer.decode(output[0], skip_special_tokens=True)
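Note that the decoded response still contains the prompt, since generate returns the prompt tokens followed by the continuation. An optional follow-up (not part of the original card) is to strip the prompt before printing:

# Drop the prompt tokens so only the model's answer remains
prompt_length = batch["input_ids"].shape[1]
answer = tokenizer.decode(output[0][prompt_length:], skip_special_tokens=True)
print(answer)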
Citation
If you find PLLaMa useful in your research, please cite the following paper:
@inproceedings{Yang2024PLLaMaAO,
title={PLLaMa: An Open-source Large Language Model for Plant Science},
author={Xianjun Yang and Junfeng Gao and Wenxin Xue and Erik Alexandersson},
year={2024},
url={https://api.semanticscholar.org/CorpusID:266741610}
}