---
language: zh
tags:
  - cross-encoder
datasets:
  - dialogue
---

# Data

The training data consists of sentence-similarity pairs from e-commerce dialogues, about 500k (50w) sentence pairs.

# Model

The model was created with sentence-transformers; the architecture is a cross-encoder and the pretrained base model is hfl/chinese-roberta-wwm-ext. The structure is the same as tuhailong/cross_encoder_roberta-wwm-ext_v0; the difference is that the order of the input sentences is swapped and the swapped pairs are added to the training dataset, which gives better performance on my dataset.
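
A minimal sketch of one way to implement this order-swapping augmentation with sentence-transformers is shown below. This is not the author's exact training script (see the Code section for that); the example pairs, base-model name, and hyperparameters are illustrative assumptions.

```python
# Sketch: swap the order of each sentence pair and add the swapped copy to the
# training set, then train a cross-encoder on the augmented data.
from torch.utils.data import DataLoader
from sentence_transformers import InputExample
from sentence_transformers.cross_encoder import CrossEncoder

# Hypothetical raw pairs: (sentence_a, sentence_b, similarity label)
raw_pairs = [
    ("今天天气不错", "今天心情不错", 0.0),
    ("这件衣服有红色的吗", "这款有红色款式吗", 1.0),
]

train_samples = []
for sent_a, sent_b, label in raw_pairs:
    # original order
    train_samples.append(InputExample(texts=[sent_a, sent_b], label=label))
    # swapped order, added as an extra training pair
    train_samples.append(InputExample(texts=[sent_b, sent_a], label=label))

# Assumed hyperparameters; the real values may differ.
model = CrossEncoder("hfl/chinese-roberta-wwm-ext", num_labels=1, max_length=64)
train_dataloader = DataLoader(train_samples, shuffle=True, batch_size=32)
model.fit(train_dataloader=train_dataloader, epochs=1, warmup_steps=100)
```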

# Usage

```python
>>> from sentence_transformers.cross_encoder import CrossEncoder
>>> # model_save_path is the local path or Hub id of this model
>>> model = CrossEncoder(model_save_path, device="cuda", max_length=64)
>>> sentences = ["今天天气不错", "今天心情不错"]
>>> score = model.predict([sentences])
>>> print(score[0])
```
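
`predict` returns one score per input pair; a higher score means the two sentences are judged more similar.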

# Code

The training code is available at https://github.com/TTurn/cross-encoder.