---
datasets:
- fajrikoto/id_liputan6
language:
- id
base_model:
- cahya/bert2bert-indonesian-summarization
library_name: transformers
pipeline_tag: summarization
---

# Fine-Tuned BERT2BERT Summarization Model

This model is fine-tuned from the original [BERT2BERT Indonesian Summarization](https://huggingface.co/cahya/bert2bert-indonesian-summarization) model.

## Fine-Tuning Dataset
- **Dataset**: [Liputan6_ID](https://huggingface.co/datasets/fajrikoto/id_liputan6)
- **Task**: Summarization

This model was fine-tuned on the [Liputan6_ID](https://huggingface.co/datasets/fajrikoto/id_liputan6) dataset of Indonesian news articles, so it is best suited to summarizing news-style Indonesian text from the Liputan6 domain.
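
The exact training configuration is not documented in this card. The snippet below is only a minimal sketch of how such a fine-tuning run could be set up with `Seq2SeqTrainer`; the dataset config name, the `clean_article` / `clean_summary` column names, the hyperparameters, and the output path are illustrative assumptions, not the actual settings used to train this model.

```python
# Minimal fine-tuning sketch (assumptions, not the exact recipe used for this model).
from datasets import load_dataset
from transformers import (
    BertTokenizer,
    EncoderDecoderModel,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)

tokenizer = BertTokenizer.from_pretrained("cahya/bert2bert-indonesian-summarization")
model = EncoderDecoderModel.from_pretrained("cahya/bert2bert-indonesian-summarization")

# Liputan6 may require a manual download step; the "canonical" config and the
# clean_article / clean_summary column names are assumed here.
dataset = load_dataset("fajrikoto/id_liputan6", "canonical")

def preprocess(batch):
    model_inputs = tokenizer(batch["clean_article"], max_length=512,
                             truncation=True, padding="max_length")
    targets = tokenizer(batch["clean_summary"], max_length=128,
                        truncation=True, padding="max_length")
    # Replace pad ids with -100 so padding is ignored in the loss
    model_inputs["labels"] = [
        [tok if tok != tokenizer.pad_token_id else -100 for tok in seq]
        for seq in targets["input_ids"]
    ]
    return model_inputs

tokenized = dataset.map(preprocess, batched=True,
                        remove_columns=dataset["train"].column_names)

training_args = Seq2SeqTrainingArguments(
    output_dir="bert-indonesian-news-summarization",  # hypothetical output path
    per_device_train_batch_size=8,
    num_train_epochs=3,
    learning_rate=5e-5,
    logging_steps=500,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["validation"],
)
trainer.train()
```

Depending on the `transformers` version, a `DataCollatorForSeq2Seq` and explicit `decoder_start_token_id` / `pad_token_id` values on the model config may also be required; the base checkpoint already ships with these set for summarization.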

## Code Sample

```python
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("rowjak/bert-indonesian-news-summarization")
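# BERT has no dedicated BOS/EOS tokens, so reuse [CLS] and [SEP] for generation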
tokenizer.bos_token = tokenizer.cls_token
tokenizer.eos_token = tokenizer.sep_token
model = EncoderDecoderModel.from_pretrained("rowjak/bert-indonesian-news-summarization")

# Paste the Indonesian news article to be summarized here
ARTICLE = ""

# generate summary
input_ids = tokenizer.encode(ARTICLE, return_tensors='pt')
summary_ids = model.generate(
    input_ids,
    max_length=125,
    num_beams=2,
    repetition_penalty=2.5,
    length_penalty=1.0,
    early_stopping=True,
    no_repeat_ngram_size=2,
    use_cache=True,
)

summary_text = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary_text)
```

Output:

```
---
```