---
library_name: transformers
tags: []
---
# ELM Mistral-7B-v0.1 Model Card

*Erasing Conceptual Knowledge from Language Models*
Rohit Gandikota, Sheridan Feucht, Samuel Marks, David Bau
## How to use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "baulab/elm-Mistral-7B-v0.1"
device = 'cuda:0'
dtype = torch.float32

# load the ELM-edited model and freeze its weights for inference
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=dtype)
model = model.to(device)
model.requires_grad_(False)

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # a pad token is required when padding inputs

# generate text
prompt = "Your prompt here"  # placeholder; replace with your own prompt
inputs = tokenizer(prompt, return_tensors='pt', padding=True)
inputs = inputs.to(device)  # move token ids to the device (keep them as integers)
outputs = model.generate(**inputs,
                         max_new_tokens=300,
                         do_sample=True,
                         top_p=.95,
                         temperature=1.2)
outputs = tokenizer.batch_decode(outputs, skip_special_tokens=True)
print(outputs[0])
```
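To inspect the effect of the erasure, it can be useful to compare the edited model's output with the original checkpoint on the same prompt. Below is a minimal sketch, assuming `mistralai/Mistral-7B-v0.1` is the base model this checkpoint was derived from; the prompt is only a placeholder and should be replaced with one that touches the erased concept.

```python
# Sketch: generate from both the ELM-edited model and the assumed base checkpoint
# to compare their responses on the same prompt.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

device = 'cuda:0'
prompt = "Your prompt here"  # placeholder; use a prompt about the erased concept

for name in ["baulab/elm-Mistral-7B-v0.1", "mistralai/Mistral-7B-v0.1"]:
    tok = AutoTokenizer.from_pretrained(name, use_fast=False)
    m = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16).to(device)
    m.requires_grad_(False)
    ids = tok(prompt, return_tensors='pt').to(device)
    out = m.generate(**ids, max_new_tokens=100, do_sample=False)
    print(f"=== {name} ===")
    print(tok.batch_decode(out, skip_special_tokens=True)[0])
    del m
    torch.cuda.empty_cache()  # free GPU memory before loading the next model
```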
## Model Sources
- Repository: https://github.com/rohitgandikota/erasing-llm
- Paper: https://arxiv.org/pdf/2410.02760
- Project: https://elm.baulab.info
## Citation
BibTeX:

```bibtex
@article{gandikota2024elm,
  title={Erasing Conceptual Knowledge from Language Models},
  author={Rohit Gandikota and Sheridan Feucht and Samuel Marks and David Bau},
  journal={arXiv preprint arXiv:2410.02760},
  year={2024}
}
```