dereju derekiya committed
Commit
aea3a77
1 Parent(s): e651fae

Create README.md (#1)


- Create README.md (797059aa0424bae05bfdb994fe7af7ef31448bf4)


Co-authored-by: Dereje Hinsermu <derekiya@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +86 -0
README.md ADDED
@@ -0,0 +1,86 @@
---
license: apache-2.0
language:
- en
library_name: transformers
---

# Model Card: generate_reason

<!-- Provide a quick summary of what the model is/does. -->

## Model Name

generate_reason

### Model Description

This model is a fine-tuned version of facebook/bart-large, adapted to the task of reason generation by analysing a resume against a job description. It has been trained to generate concise, relevant reasons from extensive resume texts and job descriptions (JDs). The fine-tuning process tailored the original BART model to this summarization-style task on a specific dataset.
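As a quick illustration of the input format described above, here is a minimal sketch using the `text2text-generation` pipeline; it assumes the checkpoint loads with the standard pipeline API, and the resume/JD strings are placeholders:

```python
from transformers import pipeline

# Load the fine-tuned BART checkpoint as a text2text pipeline.
generator = pipeline("text2text-generation", model="GebeyaTalent/generate_reason")

# The model consumes the resume and job description as one concatenated string.
combined_text = "Resume: 5 years of Python backend experience... Job Description: Senior Python engineer..."
result = generator(combined_text, max_length=150, num_beams=4)
print(result[0]["generated_text"])
```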
### Model Information

- **Base Model:** facebook/bart-large
- **Fine-tuning Dataset:** To be made available in the future.

### Training Parameters

- **Evaluation Strategy:** epoch
- **Learning Rate:** 5e-5
- **Per Device Train Batch Size:** 8
- **Per Device Eval Batch Size:** 8
- **Weight Decay:** 0.01
- **Save Total Limit:** 5
- **Number of Training Epochs:** 5
- **Predict with Generate:** True
- **Gradient Accumulation Steps:** 1
- **Optimizer:** paged_adamw_32bit
- **Learning Rate Scheduler Type:** cosine
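The hyperparameters listed above map directly onto `transformers`' `Seq2SeqTrainingArguments`, as sketched below. This is a reconstruction, not the original training script; `output_dir` is an illustrative assumption.

```python
from transformers import Seq2SeqTrainingArguments

# Mirrors the hyperparameters listed in the card; output_dir is assumed.
training_args = Seq2SeqTrainingArguments(
    output_dir="generate_reason",   # illustrative, not stated in the card
    evaluation_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    weight_decay=0.01,
    save_total_limit=5,
    num_train_epochs=5,
    predict_with_generate=True,
    gradient_accumulation_steps=1,
    optim="paged_adamw_32bit",      # requires the bitsandbytes package
    lr_scheduler_type="cosine",
)
```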
## How to Use

<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->

**1.** Install the transformers library:

```bash
pip install transformers
```

**2.** Import the necessary modules:

```python
import torch
from transformers import BartTokenizer, BartForConditionalGeneration
```

**3.** Initialize the model and tokenizer:

```python
model_name = "GebeyaTalent/generate_reason"
tokenizer = BartTokenizer.from_pretrained(model_name)
model = BartForConditionalGeneration.from_pretrained(model_name)
```

**4.** Prepare the text to generate a reason:

```python
resume = "your resume text here"
job_description = "your job description here"

# Concatenate the resume and job description with a delimiter
combined_text = "Resume: " + resume + " Job Description: " + job_description

# Tokenize, truncating/padding to BART's 1024-token input limit
inputs = tokenizer(combined_text, return_tensors="pt", truncation=True, padding="max_length", max_length=1024)
```
**5.** Generate the reason:

```python
reason_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=150, early_stopping=True)
reason = tokenizer.decode(reason_ids[0], skip_special_tokens=True)
```

**6.** Output the reason:

```python
print("Reason:", reason)
```
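The snippet above runs on CPU by default even though `torch` is imported. A minimal sketch of moving inference to a GPU when one is available, using standard PyTorch/transformers idioms rather than anything specific to this checkpoint:

```python
# Pick a device and move the model to it once after loading.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

# Move the tokenized inputs to the same device before generating.
inputs = {k: v.to(device) for k, v in inputs.items()}
reason_ids = model.generate(inputs["input_ids"], num_beams=4, max_length=150, early_stopping=True)
reason = tokenizer.decode(reason_ids[0], skip_special_tokens=True)
```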
## Model Card Authors

Dereje Hinsermu

## Model Card Contact