snowflake-arctic-embed-s
Snowflake
Sentence Similarity
snowflake-arctic-embed-s is a text embedding model based on the intfloat/e5-small-unsupervised model. With only 33 million parameters and 384 embedding dimensions, this small model does not sacrifice retrieval accuracy. It is optimized for performance and well suited to scaling over large datasets. The model focuses on high-quality retrieval and achieves state-of-the-art performance on the MTEB/BEIR leaderboard.
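A quick local check of the advertised embedding size (a minimal sketch, assuming sentence-transformers is installed):

from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-s")
print(model.get_sentence_embedding_dimension())  # prints 384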
How to use
Using Sentence Transformers
from sentence_transformers import SentenceTransformer
model = SentenceTransformer("Snowflake/snowflake-arctic-embed-s")
queries = ['what is snowflake?', 'Where can I get the best tacos?']
documents = ['The Data Cloud!', 'Mexico City of Course!']
query_embeddings = model.encode(queries, prompt_name="query")
document_embeddings = model.encode(documents)
scores = query_embeddings @ document_embeddings.T
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    # Output passages & scores
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)
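If you are on sentence-transformers v3.0 or later (an assumption about your environment, not a requirement stated by the model card), the library can compute the score matrix for you instead of the manual matrix product. Cosine similarity is the default, which matches the dot product above when embeddings are normalized:

# Assumes sentence-transformers >= 3.0; cosine similarity by default
scores = model.similarity(query_embeddings, document_embeddings)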
Using Hugging Face Transformers
import torch
from transformers import AutoModel, AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained('Snowflake/snowflake-arctic-embed-s')
model = AutoModel.from_pretrained('Snowflake/snowflake-arctic-embed-s', add_pooling_layer=False)
model.eval()
query_prefix = 'Represent this sentence for searching relevant passages: '
queries = ['what is snowflake?', 'Where can I get the best tacos?']
queries_with_prefix = ["{}{}".format(query_prefix, i) for i in queries]
query_tokens = tokenizer(queries_with_prefix, padding=True, truncation=True, return_tensors='pt', max_length=512)
documents = ['The Data Cloud!', 'Mexico City of Course!']
document_tokens = tokenizer(documents, padding=True, truncation=True, return_tensors='pt', max_length=512)
# Compute token embeddings
with torch.no_grad():
    query_embeddings = model(**query_tokens)[0][:, 0]
    document_embeddings = model(**document_tokens)[0][:, 0]
# normalize embeddings
query_embeddings = torch.nn.functional.normalize(query_embeddings, p=2, dim=1)
document_embeddings = torch.nn.functional.normalize(document_embeddings, p=2, dim=1)
scores = torch.mm(query_embeddings, document_embeddings.transpose(0, 1))
for query, query_scores in zip(queries, scores):
    doc_score_pairs = list(zip(documents, query_scores))
    doc_score_pairs = sorted(doc_score_pairs, key=lambda x: x[1], reverse=True)
    # Output passages & scores
    print("Query:", query)
    for document, score in doc_score_pairs:
        print(score, document)
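For corpora larger than a couple of strings you will usually want to encode in batches. A minimal sketch that reuses the tokenizer and model loaded above; the encode_texts helper and its batch_size default are illustrative assumptions, not part of the model card:

def encode_texts(texts, batch_size=32):
    # Hypothetical helper: CLS-pool and L2-normalize each batch,
    # then concatenate everything into one tensor
    chunks = []
    for start in range(0, len(texts), batch_size):
        batch = texts[start:start + batch_size]
        tokens = tokenizer(batch, padding=True, truncation=True, return_tensors='pt', max_length=512)
        with torch.no_grad():
            embeddings = model(**tokens)[0][:, 0]  # CLS token embedding
        chunks.append(torch.nn.functional.normalize(embeddings, p=2, dim=1))
    return torch.cat(chunks, dim=0)

document_embeddings = encode_texts(documents)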
Using Transformers.js
import { pipeline, dot } from '@xenova/transformers';
// Create feature extraction pipeline
const extractor = await pipeline('feature-extraction', 'Snowflake/snowflake-arctic-embed-s', {
quantized: false, // Comment out this line to use the quantized version
});
// Generate sentence embeddings
const sentences = [
'Represent this sentence for searching relevant passages: Where can I get the best tacos?',
'The Data Cloud!',
'Mexico City of Course!',
]
const output = await extractor(sentences, { normalize: true, pooling: 'cls' });
// Compute similarity scores
const [source_embeddings, ...document_embeddings ] = output.tolist();
const similarities = document_embeddings.map(x => dot(source_embeddings, x));
console.log(similarities); // [0.48255123876493394, 0.5659250100112143]
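As expected, the taco query scores higher against 'Mexico City of Course!' (≈0.566) than against 'The Data Cloud!' (≈0.483), so the relevant passage ranks first.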
Features
- High-quality text retrieval model
- 33 million parameters
- 384 embedding dimensions
- Optimized for strong retrieval performance
- Based on the intfloat/e5-small-unsupervised model
- Works with multiple libraries: Sentence Transformers, Hugging Face Transformers, and Transformers.js
Use cases
- Improving search algorithms in web applications
- Document search and retrieval applications (a minimal retrieval sketch follows this list)
- Content recommendation systems
- Classifying and organizing large text datasets
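As a concrete sketch of the search and retrieval use case, building on the Sentence Transformers snippet above; the corpus, query, and top_k value are made-up assumptions for illustration only:

import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("Snowflake/snowflake-arctic-embed-s")

# Hypothetical corpus and query, for illustration only
corpus = ['The Data Cloud!', 'Mexico City of Course!', 'Snowflake is a data platform.']
corpus_embeddings = model.encode(corpus)
query_embeddings = model.encode(['what is snowflake?'], prompt_name="query")

# Rank the corpus by dot-product similarity and keep the top_k passages
scores = (query_embeddings @ corpus_embeddings.T)[0]
top_k = 2
for idx in np.argsort(-scores)[:top_k]:
    print(scores[idx], corpus[idx])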