E5-large-v2

intfloat
Sentence Similarity

Text Embeddings by Weakly-Supervised Contrastive Pre-training. Liang Wang, Nan Yang, Xiaolong Huang, Binxing Jiao, Linjun Yang, Daxin Jiang, Rangan Majumder, Furu Wei, arXiv 2022. This model has 24 layers and an embedding size of 1024.
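
As a quick sanity check, both figures can be read off the published model config. A minimal sketch (num_hidden_layers and hidden_size are the standard Hugging Face config attributes for BERT-style encoders):

from transformers import AutoConfig

# Load the published config for intfloat/e5-large-v2
config = AutoConfig.from_pretrained('intfloat/e5-large-v2')

print(config.num_hidden_layers)  # expected: 24
print(config.hidden_size)        # expected: 1024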

How to use

Below is an example of how to encode queries and passages from the MS-MARCO passage ranking dataset.

import torch.nn.functional as F
from torch import Tensor
from transformers import AutoTokenizer, AutoModel

def average_pool(last_hidden_states: Tensor, attention_mask: Tensor) -> Tensor:
	# Mean-pool the token embeddings, zeroing out padding positions via the attention mask
	last_hidden = last_hidden_states.masked_fill(~attention_mask[..., None].bool(), 0.0)
	return last_hidden.sum(dim=1) / attention_mask.sum(dim=1)[..., None]

# Each input text should start with "query: " or "passage: ".
# For tasks other than retrieval, you can simply use the "query: " prefix.
input_texts = [
	'query: how much protein should a female eat',
	'query: summit define',
	"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
	"passage: Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments."
]

tokenizer = AutoTokenizer.from_pretrained('intfloat/e5-large-v2')
model = AutoModel.from_pretrained('intfloat/e5-large-v2')

# Tokenize the input texts
batch_dict = tokenizer(input_texts, max_length=512, padding=True, truncation=True, return_tensors='pt')

outputs = model(**batch_dict)
embeddings = average_pool(outputs.last_hidden_state, batch_dict['attention_mask'])

# Normalize embeddings to unit length so dot products equal cosine similarity
embeddings = F.normalize(embeddings, p=2, dim=1)
# Similarity scores between each query and each passage, scaled by 100
scores = (embeddings[:2] @ embeddings[2:].T) * 100
print(scores.tolist())
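
For inference you would typically also disable gradient tracking. A minimal sketch of a reusable helper built on the names defined above (encode_texts is a hypothetical name, not part of the model's API):

import torch

def encode_texts(texts: list[str]) -> Tensor:
	# Tokenize, run the model without gradients, then pool and normalize
	batch = tokenizer(texts, max_length=512, padding=True, truncation=True, return_tensors='pt')
	with torch.no_grad():
		outputs = model(**batch)
	emb = average_pool(outputs.last_hidden_state, batch['attention_mask'])
	return F.normalize(emb, p=2, dim=1)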

For more details about training, please refer to our paper on arXiv. Below is an example of usage with sentence_transformers.

from sentence_transformers import SentenceTransformer
model = SentenceTransformer('intfloat/e5-large-v2')
input_texts = [
	'query: how much protein should a female eat',
	'query: summit define',
	"passage: As a general guideline, the CDC's average requirement of protein for women ages 19 to 70 is 46 grams per day. But, as you can see from this chart, you'll need to increase that if you're expecting or training for a marathon. Check out the chart below to see how much protein you should be eating each day.",
	"passage: Definition of summit for English Language Learners. : 1  the highest point of a mountain : the top of a mountain. : 2  the highest level. : 3  a meeting or series of meetings between the leaders of two or more governments."
]
embeddings = model.encode(input_texts, normalize_embeddings=True)
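
The normalized embeddings can then be scored the same way as before; a minimal sketch using util.cos_sim from sentence_transformers (assuming the two-queries-then-two-passages ordering above):

from sentence_transformers import util

# Cosine similarity between each query and each passage;
# equal to the dot product because the embeddings are normalized
scores = util.cos_sim(embeddings[:2], embeddings[2:])
print(scores)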

Features

24 layers
Embedding size of 1024
Compatible with Sentence Transformers
Weakly-supervised contrastive pre-training

Use cases

Passage ranking in open retrieval
Symmetric tasks such as semantic similarity and paraphrase retrieval
Using embeddings as features, such as linear-probing classification and clustering (see the sketch below)
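
As an illustration of the last use case, here is a minimal linear-probing sketch that reuses the SentenceTransformer model from above (scikit-learn is an added assumption, and train_texts/train_labels are hypothetical placeholder data; note the "query: " prefix for non-retrieval tasks):

from sklearn.linear_model import LogisticRegression

# Hypothetical labeled examples; substitute your own dataset
train_texts = ['query: the movie was great', 'query: terrible acting']
train_labels = [1, 0]

# Encode the texts, then fit a linear classifier on the frozen embeddings
features = model.encode(train_texts, normalize_embeddings=True)
clf = LogisticRegression().fit(features, train_labels)
print(clf.predict(model.encode(['query: a wonderful film'], normalize_embeddings=True)))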