---
license: mit
language:
- en
---

This model is intended only for FFmpeg code.

### Direct Use

```python
import torch
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_name = "meta-llama/Llama-2-13b-chat-hf"
adapters_name = "wj2003/Pongo-13B"

# Load the base model with 4-bit NF4 quantization. The quantization_config
# below already sets load_in_4bit, so it is not repeated as a separate kwarg.
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    max_memory={i: "24000MB" for i in range(torch.cuda.device_count())},
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
        bnb_4bit_use_double_quant=True,
        bnb_4bit_quant_type="nf4",
    ),
)

# Attach the Pongo-13B adapter and load its tokenizer.
model = PeftModel.from_pretrained(model, adapters_name)
tokenizer = AutoTokenizer.from_pretrained(adapters_name)

prompt = (
    "find potential security issues in the following code. If it has vulnerability, "
    "output: Vulnerabilities Detected: type of vulnerability. otherwise output."
    "Here is the complete code: "
)

# Provide the FFmpeg code you want to analyze
code = ""

formatted_prompt = prompt + code

inputs = tokenizer(formatted_prompt, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
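
Since `code` is left empty above, here is a minimal usage sketch that reuses the `model`, `tokenizer`, and `prompt` already defined in the block above. It assumes the FFmpeg source you want to scan lives in a local file; the filename `demux.c` is a hypothetical placeholder.

```python
# Minimal usage sketch: read an FFmpeg source file into `code`
# ("demux.c" is a hypothetical placeholder path) and run the model.
with open("demux.c") as f:
    code = f.read()

inputs = tokenizer(prompt + code, return_tensors="pt").to("cuda:0")
outputs = model.generate(inputs=inputs.input_ids, max_new_tokens=1024)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```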