This model is GPT-2 fine-tuned on the large version of ATOMIC ja with a causal language modeling (CLM) objective. The original and large versions of ATOMIC ja were introduced in Ide et al. (2023) and Murata et al. (2023), respectively (both cited below).
You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:
```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='nlp-waseda/comet-v2-gpt2-small-japanese')
>>> set_seed(42)
>>> generator('X が 副業 を 始めるxEffect', max_length=30, num_return_sequences=5, do_sample=True)
[{'generated_text': 'X が 副業 を 始めるxEffect X が 収入 を 得る'},
 {'generated_text': 'X が 副業 を 始めるxEffect X が 時間 を 失う'},
 {'generated_text': 'X が 副業 を 始めるxEffect X が 儲かる'},
 {'generated_text': 'X が 副業 を 始めるxEffect X が 稼ぐ'},
 {'generated_text': 'X が 副業 を 始めるxEffect X が 稼げる ように なる'}]
```
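Each `generated_text` echoes the prompt before the generated inference. A small post-processing sketch, assuming outputs shaped like the list above, recovers just the inference:

```python
# Strip the prompt (event + relation) from each generated_text so that
# only the model's inference remains. Output shape follows the example above.
prompt = 'X が 副業 を 始めるxEffect'
outputs = [{'generated_text': 'X が 副業 を 始めるxEffect X が 収入 を 得る'},
           {'generated_text': 'X が 副業 を 始めるxEffect X が 儲かる'}]

inferences = [o['generated_text'][len(prompt):].strip() for o in outputs]
print(inferences)  # ['X が 収入 を 得る', 'X が 儲かる']
```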
The texts are segmented into words using Juman++ and tokenized using SentencePiece.
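Prompts must match this format: whitespace-separated words followed immediately by an ATOMIC relation label. A minimal sketch of assembling such a prompt (the segmentation is written out by hand here for illustration; in practice Juman++ produces it):

```python
# Build a prompt for the model: words are joined with single spaces, and the
# ATOMIC relation (here xEffect) is appended directly to the event, with no
# space, matching the prompt used in the generation example above.
words = ['X', 'が', '副業', 'を', '始める']  # hand-written; normally from Juman++
relation = 'xEffect'
prompt = ' '.join(words) + relation
print(prompt)  # X が 副業 を 始めるxEffect
```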
The model achieves the following results:
| BLEU | BERTScore |
|---|---|
| - | - |
```bibtex
@InProceedings{ide_nlp2023_event,
    author    = "井手竜也 and 村田栄樹 and 堀尾海斗 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀",
    title     = "人間と言語モデルに対するプロンプトを用いたゼロからのイベント常識知識グラフ構築",
    booktitle = "言語処理学会第29回年次大会",
    year      = "2023",
    url       = "https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B2-5.pdf",
    note      = "in Japanese"
}

@InProceedings{murata_nlp2023,
    author    = "村田栄樹 and 井手竜也 and 榮田亮真 and 河原大輔 and 山崎天 and 李聖哲 and 新里顕大 and 佐藤敏紀",
    title     = "大規模言語モデルによって構築された常識知識グラフの拡大と低コストフィルタリング",
    booktitle = "言語処理学会第29回年次大会",
    year      = "2023",
    url       = "https://www.anlp.jp/proceedings/annual_meeting/2023/pdf_dir/B9-1.pdf",
    note      = "in Japanese"
}
```