This repository contains the LoRA adapter weights from fine-tuning the Llama 3 (8B) Instruct model on patent documents using masked next token prediction (MNTP). MNTP is the first step in adapting the base model for embedding generation following the llm2vec approach.
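As a minimal usage sketch (the card itself does not ship usage code), the adapter can be loaded on top of its base model with the llm2vec library, which also applies the bidirectional-attention conversion that MNTP training assumes. The base model identifier below is an assumption inferred from the repository name; adjust it if the adapter was trained from a different checkpoint.

```python
import torch
from llm2vec import LLM2Vec

# Load the assumed base model and apply this repository's MNTP LoRA adapter.
l2v = LLM2Vec.from_pretrained(
    "meta-llama/Meta-Llama-3-8B-Instruct",  # assumed base model
    peft_model_name_or_path="saroyehun/Llama3-8B-Instruct-mntp-patent",
    device_map="cuda" if torch.cuda.is_available() else "cpu",
    torch_dtype=torch.bfloat16,
)

# Encode patent text into embeddings.
embeddings = l2v.encode(["A method for manufacturing a semiconductor device."])
print(embeddings.shape)
```

Note that MNTP is only the first llm2vec adaptation step; for stronger embeddings, the approach typically follows it with unsupervised (SimCSE-style) or supervised contrastive training.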

Framework versions

  • PEFT 0.12.0