---
pipeline_tag: sentence-similarity
tags:
  - sentence-transformers
  - feature-extraction
  - sentence-similarity
  - transformers
widget:
  - source_sentence: This is a Norwegian boy
    sentences:
      - Dette er en norsk gutt
      - This is an English boy
      - This is a dog
    example_title: Cross Language
  - source_sentence: Det er noen dyr utenfor vinduet
    sentences:
      - På utsiden kan jeg høre noen hunder
      - Noen mennesker prater utenfor vinduet
      - Alle burde ha kjæledyr
    example_title: Paraphrases
  - source_sentence: En kvinne sitter i en stol
    sentences:
      - A woman is sitting in a chair
      - Hun slapper av og leser i en bok
      - Hun løper maraton
    example_title: Paraphrases across language
---

# NB-SBERT

This is a sentence-transformers model trained on the machine-translated MNLI dataset, starting from nb-bert-base.

The model maps sentences and paragraphs to a 768-dimensional dense vector space. This vector can be used for tasks like clustering and semantic search. Below we give some examples of how to use the model in different frameworks. The easiest way is simply to measure the cosine distance between two sentences: sentences that are close in meaning will have a small cosine distance and a similarity close to 1. The model is trained to preserve this property across languages as well, so ideally an English-Norwegian sentence pair should also have a high similarity.

## Keyword Extraction

The model can be used for extracting keywords from a text. The basic technique is to find the words or phrases that are most similar to the document. There are various frameworks for doing this; an easy way is to use KeyBERT. This example shows how it can be used:

```bash
pip install keybert
```

```python
from keybert import KeyBERT
from sentence_transformers import SentenceTransformer

sentence_model = SentenceTransformer("NbAiLab/nb-sbert")
kw_model = KeyBERT(model=sentence_model)

doc = """
De første nasjonale bibliotek har sin opprinnelse i kongelige samlinger eller en annen framstående myndighet eller statsoverhode.
Et av de første planene for et nasjonalbibliotek i England ble fremmet av den walisiske matematikeren og mystikeren John Dee som
i 1556 presenterte en visjonær plan om et nasjonalt bibliotek for gamle bøker, manuskripter og opptegnelser for dronning Maria I
av England. Hans forslag ble ikke tatt til følge.
"""
kw_model.extract_keywords(doc, stop_words=None)

# [('nasjonalbibliotek', 0.5242), ('bibliotek', 0.4342), ('samlinger', 0.3334), ('statsoverhode', 0.33), ('manuskripter', 0.3061)]
```

The KeyBERT homepage gives several other examples of how this can be used, for instance how it can be combined with stop words, how longer phrases can be extracted, and how it can output the highlighted text directly.
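
For instance, a minimal sketch of those variants, continuing with the `doc` and `kw_model` defined above (the keyword arguments are standard KeyBERT options; the Norwegian stop word list is purely illustrative):

```python
# Sketch only: keyphrase_ngram_range, stop_words, top_n and highlight are standard
# KeyBERT arguments. The stop word list below is an illustration, not a curated resource.
norwegian_stop_words = ["og", "i", "av", "en", "et", "som", "for"]  # illustrative only

# Extract phrases of one or two words instead of single keywords
kw_model.extract_keywords(
    doc,
    keyphrase_ngram_range=(1, 2),
    stop_words=norwegian_stop_words,
    top_n=5,
)

# highlight=True also prints the document with the extracted keywords marked
kw_model.extract_keywords(doc, stop_words=None, highlight=True)
```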

## Topic Modeling

[ToDo - Per Egil - https://github.com/MaartenGr/BERTopic]
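
In the meantime, a minimal sketch of how the model could be plugged into BERTopic (assumes `pip install bertopic`; `docs` is a placeholder for your own corpus):

```python
# Sketch only: any SentenceTransformer can be passed to BERTopic as embedding_model.
# Replace docs with a real list of Norwegian documents before running fit_transform.
from bertopic import BERTopic
from sentence_transformers import SentenceTransformer

sentence_model = SentenceTransformer("NbAiLab/nb-sbert")
topic_model = BERTopic(embedding_model=sentence_model)

docs = ["..."]  # placeholder corpus
topics, probs = topic_model.fit_transform(docs)
print(topic_model.get_topic_info())
```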

## Similarity Search

[TODO - Javier or Per Egil]
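
In the meantime, a minimal sketch of a simple semantic search with the `util.semantic_search` helper from sentence-transformers (the corpus below just reuses the example sentences from the widget above):

```python
# Sketch only: encode a small corpus and a query, then rank the corpus by cosine
# similarity. Replace the corpus with your own collection of documents.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("NbAiLab/nb-sbert")

corpus = [
    "Det er noen dyr utenfor vinduet",
    "Hun slapper av og leser i en bok",
    "A woman is sitting in a chair",
]
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

query = "En kvinne sitter i en stol"
query_embedding = model.encode(query, convert_to_tensor=True)

hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=2)
for hit in hits[0]:
    print(corpus[hit["corpus_id"]], hit["score"])
```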

## Embeddings and Sentence Similarity (Sentence-Transformers)

Using this model becomes easy when you have sentence-transformers installed:

```bash
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer, util

sentences = ["This is a Norwegian boy", "Dette er en norsk gutt"]

model = SentenceTransformer('NbAiLab/nb-sbert')
embeddings = model.encode(sentences)
print(embeddings)

# Compute cosine similarities with sentence-transformers
cosine_scores = util.cos_sim(embeddings[0], embeddings[1])
print(cosine_scores)

# Compute cosine similarities with SciPy
from scipy import spatial
scipy_cosine_scores = 1 - spatial.distance.cosine(embeddings[0], embeddings[1])
print(scipy_cosine_scores)

# Both should give 0.8250 in the example above.
```
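
The same utilities also work on whole batches; a small sketch that scores the cross-language widget example from the metadata above in one call (illustrative only):

```python
# Sketch only: util.cos_sim on two embedding matrices returns the full similarity
# matrix, one row per source sentence and one column per candidate.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('NbAiLab/nb-sbert')

source = ["This is a Norwegian boy"]
candidates = ["Dette er en norsk gutt", "This is an English boy", "This is a dog"]

source_embeddings = model.encode(source)
candidate_embeddings = model.encode(candidates)

print(util.cos_sim(source_embeddings, candidate_embeddings))
```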

## Embeddings and Sentence Similarity (HuggingFace Transformers)

Without sentence-transformers, you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.

```python
from transformers import AutoTokenizer, AutoModel
import torch


# Mean Pooling - take the attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output[0]  # First element of model_output contains all token embeddings
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)


# Sentences we want sentence embeddings for
sentences = ["This is a Norwegian boy", "Dette er en norsk gutt"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('NbAiLab/nb-sbert')
model = AutoModel.from_pretrained('NbAiLab/nb-sbert')

# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')

# Compute token embeddings
with torch.no_grad():
    model_output = model(**encoded_input)

# Perform pooling. In this case, mean pooling.
embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

print(embeddings)

# Compute cosine similarities with SciPy
from scipy import spatial
scipy_cosine_scores = 1 - spatial.distance.cosine(embeddings[0], embeddings[1])
print(scipy_cosine_scores)

# This should give 0.8250 in the example above.
```

## Evaluation and Parameters

### Evaluation

[Insert some numbers here, Rolv-Arild?]

### Training

The model was trained with the parameters:

**DataLoader**:

`sentence_transformers.datasets.NoDuplicatesDataLoader.NoDuplicatesDataLoader` of length 16471 with parameters:

```
{'batch_size': 32}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:

```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the `fit()`-Method:

```
{
    "epochs": 1,
    "evaluation_steps": 1647,
    "evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1648,
    "weight_decay": 0.01
}
```
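
Put together, a minimal sketch of what a training run with these parameters could look like (the two InputExample pairs are placeholders, not the actual machine-translated MNLI data, and the starting checkpoint name is assumed):

```python
# Sketch only: the classes and arguments mirror the parameters listed above, but the
# training examples are placeholders. NoDuplicatesDataLoader needs at least one full
# batch of unique pairs, so a real run must use the complete training set
# (16471 batches of 32) rather than these two pairs.
from sentence_transformers import SentenceTransformer, InputExample, losses
from sentence_transformers.datasets import NoDuplicatesDataLoader

model = SentenceTransformer("NbAiLab/nb-bert-base")  # assumed starting checkpoint

train_examples = [
    InputExample(texts=["This is a Norwegian boy", "Dette er en norsk gutt"]),
    InputExample(texts=["En kvinne sitter i en stol", "A woman is sitting in a chair"]),
]
train_dataloader = NoDuplicatesDataLoader(train_examples, batch_size=32)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=1648,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
)
```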

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```

## Citing & Authors