msmarco-distilbert-dot-v5

sentence-transformers
Sentence Similarity

This is a sentence-transformers model: it maps sentences and paragraphs to a 768-dimensional dense vector space and was designed for semantic search. It has been trained on 500K (query, answer) pairs from the MS MARCO dataset. For an introduction to semantic search, see: SBERT.net - Semantic Search
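
As a quick check that the outputs really are 768-dimensional dense vectors, here is a minimal sketch (it assumes sentence-transformers is installed, as described in the next section):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('sentence-transformers/msmarco-distilbert-dot-v5')
emb = model.encode("Around 9 Million people live in London")
print(emb.shape)  # (768,): one dense 768-dimensional vector per input text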

How to use

Usage (Sentence-Transformers)

Using this model is straightforward once you have sentence-transformers installed:

pip install -U sentence-transformers

Then you can use the model like this:

from sentence_transformers import SentenceTransformer, util

query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load the model
model = SentenceTransformer('sentence-transformers/msmarco-distilbert-dot-v5')

# Encode query and documents
query_emb = model.encode(query)
doc_emb = model.encode(docs)

# Compute dot score between query and all document embeddings
scores = util.dot_score(query_emb, doc_emb)[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
print("Query:", query)
for doc, score in doc_score_pairs:
    print(score, doc)
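
For larger document collections, the util module also provides a semantic_search helper that retrieves the top-k documents per query. A minimal sketch reusing the embeddings from above; score_function is set to util.dot_score because this model was trained for dot-product similarity:

hits = util.semantic_search(query_emb, doc_emb, top_k=2, score_function=util.dot_score)[0]
for hit in hits:
    print(hit['score'], docs[hit['corpus_id']])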

Usage (HuggingFace Transformers)

Without sentence-transformers, you can use the model as follows: first pass your input through the transformer model, then apply the correct pooling operation on top of the contextualized word embeddings.

from transformers import AutoTokenizer, AutoModel
import torch

# Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
    token_embeddings = model_output.last_hidden_state
    input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
    return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)

# Encode text
def encode(texts):
    # Tokenize sentences
    encoded_input = tokenizer(texts, padding=True, truncation=True, return_tensors='pt')

    # Compute token embeddings
    with torch.no_grad():
        model_output = model(**encoded_input, return_dict=True)

    # Perform pooling
    embeddings = mean_pooling(model_output, encoded_input['attention_mask'])

    return embeddings

# Sentences we want sentence embeddings for
query = "How many people live in London?"
docs = ["Around 9 Million people live in London", "London is known for its financial district"]

# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained("sentence-transformers/msmarco-distilbert-dot-v5")
model = AutoModel.from_pretrained("sentence-transformers/msmarco-distilbert-dot-v5")

# Encode query and docs
query_emb = encode(query)
doc_emb = encode(docs)

# Compute dot score between query and all document embeddings
scores = torch.mm(query_emb, doc_emb.transpose(0, 1))[0].cpu().tolist()

# Combine docs & scores
doc_score_pairs = list(zip(docs, scores))

# Sort by decreasing score
doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)

# Output passages & scores
print("Query:", query)
for doc, score in doc_score_pairs:
    print(score, doc)
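
As a sanity check, these embeddings should match the ones produced by the sentence-transformers snippet above, since that pipeline applies the same mean pooling. A minimal sketch, assuming both snippets have been run in the same session:

from sentence_transformers import SentenceTransformer
import torch

st_model = SentenceTransformer("sentence-transformers/msmarco-distilbert-dot-v5")
st_doc_emb = torch.from_numpy(st_model.encode(docs))
print(torch.allclose(st_doc_emb, doc_emb, atol=1e-5))  # expected: True, up to floating-point tolerance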

Features

Transformer
768-dimensional dense vector space
Semantic search
Trained on 500K pairs from the MS MARCO dataset

Use cases

Semantic search
Feature extraction
Text embedding inference
Semantic analysis