---
language:
- zh
pipeline_tag: sentence-similarity
tags:
- PEG
- feature-extraction
- sentence-similarity
- transformers
license: apache-2.0
library_name: transformers
---
## Model Details
We propose PEG (Progressively Learned Textual Embedding), a model trained by progressively adjusting the weights of the samples contributing to the loss within an extremely large batch, based on the difficulty levels of the negative samples.
For training data, we amassed an extensive collection of over 110 million samples spanning a wide range of fields, including general knowledge, finance, tourism, medicine, and more.
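To make the idea concrete, the sketch below shows one way such difficulty-based weighting can enter an InfoNCE-style contrastive loss: harder negatives (those scoring higher against the query) receive larger weights in the denominator. This is a minimal illustration, not the exact PEG objective; the softmax-based weighting scheme and the temperature value are assumptions chosen for clarity.
```python
import torch
import torch.nn.functional as F

def weighted_contrastive_loss(query, positive, negatives, temperature=0.05):
    # Illustrative sketch only: an InfoNCE-style loss in which harder
    # negatives (those more similar to the query) contribute more to the
    # loss. The exact PEG weighting scheme is described in the paper.
    q = F.normalize(query, dim=-1)        # (d,)
    p = F.normalize(positive, dim=-1)     # (d,)
    n = F.normalize(negatives, dim=-1)    # (num_negatives, d)

    pos_sim = (q * p).sum() / temperature  # similarity to the positive
    neg_sims = n @ q / temperature         # similarity to each negative

    # Assumed difficulty weighting: a detached softmax over negative
    # similarities, rescaled so the weights average to 1.
    weights = torch.softmax(neg_sims.detach(), dim=0) * neg_sims.numel()

    denominator = pos_sim.exp() + (weights * neg_sims.exp()).sum()
    return -(pos_sim - denominator.log())
```
Recomputing these weights at every training step progressively shifts emphasis toward the negatives the model still finds hard, which is the intuition behind the "progressively learned" name.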
## Usage (HuggingFace Transformers)
Install transformers:
```bash
pip install transformers
```
Then load the model and compute embeddings:
```python
from transformers import AutoModel, AutoTokenizer
import torch
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('TownsWu/PEG')
model = AutoModel.from_pretrained('TownsWu/PEG')
# Example Chinese sentences; both ask about changing the bank card bound to Huabei
sentences = ['如何更换花呗绑定银行卡', '花呗更改绑定银行卡']
# Tokenize sentences
inputs = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings and take the [CLS] token's last hidden state
# as the sentence embedding
with torch.no_grad():
    last_hidden_state = model(**inputs, return_dict=True).last_hidden_state
embeddings = last_hidden_state[:, 0]
print("embeddings:")
print(embeddings)
```
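Since this is a sentence-similarity model, a natural next step is to compare the two embeddings. The follow-up below reuses the `embeddings` tensor from the snippet above; cosine similarity is one common choice of metric (the model card does not prescribe a specific one).
```python
import torch.nn.functional as F

# Cosine similarity between the two sentence embeddings;
# values close to 1 indicate near-paraphrases.
similarity = F.cosine_similarity(embeddings[0], embeddings[1], dim=0)
print("cosine similarity:", similarity.item())
```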