sentence-transformers

Meta: Sentence Transformers, Embeddings, Semantic Similarity, RAG, Multilingual, Multimodal, Pre-Trained Models, Clustering, Semantic Search, Production

About

The sentence-transformers skill provides a production-ready framework for generating high-quality text and image embeddings using over 5,000 pre-trained models. It's ideal for developers needing semantic search, RAG implementations, clustering, or multilingual similarity tasks. Use it when you require state-of-the-art embeddings for retrieval or analysis in a scalable environment.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/davila7/claude-code-templates

Git Clone (Alternative)
git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/sentence-transformers

Copy and paste the plugin command into Claude Code to install this skill

Documentation

Sentence Transformers - State-of-the-Art Embeddings

A Python framework for sentence, text, and image embeddings built on transformer models.

When to use Sentence Transformers

Use when:

  • Need high-quality embeddings for RAG
  • Semantic similarity and search
  • Text clustering and classification
  • Multilingual embeddings (100+ languages)
  • Running embeddings locally (no API)
  • Cost-effective alternative to OpenAI embeddings

Metrics:

  • 15,700+ GitHub stars
  • 5000+ pre-trained models
  • 100+ languages supported
  • Based on PyTorch/Transformers

Use alternatives instead:

  • OpenAI Embeddings: when you want an API-based managed service
  • Instructor: when you need task-specific, instruction-tuned embeddings
  • Cohere Embed: when you prefer a fully managed embedding service

Quick start

Installation

pip install sentence-transformers

Basic usage

from sentence_transformers import SentenceTransformer

# Load model
model = SentenceTransformer('all-MiniLM-L6-v2')

# Generate embeddings
sentences = [
    "This is an example sentence",
    "Each sentence is converted to a vector"
]

embeddings = model.encode(sentences)
print(embeddings.shape)  # (2, 384)

# Cosine similarity
from sentence_transformers.util import cos_sim
similarity = cos_sim(embeddings[0], embeddings[1])
print(f"Similarity: {similarity.item():.4f}")

Popular models

General purpose

# Fast, good quality (384 dim)
model = SentenceTransformer('all-MiniLM-L6-v2')

# Better quality (768 dim)
model = SentenceTransformer('all-mpnet-base-v2')

# Best quality (1024 dim, slower)
model = SentenceTransformer('all-roberta-large-v1')

Multilingual

# 50+ languages
model = SentenceTransformer('paraphrase-multilingual-MiniLM-L12-v2')

# 50+ languages, higher quality (768 dim)
model = SentenceTransformer('paraphrase-multilingual-mpnet-base-v2')
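
These models map translations of the same sentence near each other in one shared vector space; a quick check (the sentence pair here is illustrative):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('paraphrase-multilingual-MiniLM-L12-v2')

# An English sentence and its Spanish translation get similar vectors
embeddings = model.encode([
    "The weather is nice today",
    "Hoy hace buen tiempo"
])
print(util.cos_sim(embeddings[0], embeddings[1]))  # high similarity score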

Domain-specific

# Note: the following are plain Hugging Face checkpoints rather than native
# sentence-transformers models; loading them adds mean pooling automatically.

# Legal domain
model = SentenceTransformer('nlpaueb/legal-bert-base-uncased')

# Scientific papers
model = SentenceTransformer('allenai/specter')

# Code
model = SentenceTransformer('microsoft/codebert-base')
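
Loading a plain checkpoint like the ones above is equivalent to composing a Transformer module with a pooling layer; a minimal sketch using the library's models API:

from sentence_transformers import SentenceTransformer, models

word_model = models.Transformer('microsoft/codebert-base')
pooling = models.Pooling(
    word_model.get_word_embedding_dimension(),
    pooling_mode='mean'  # average token embeddings into one sentence vector
)
model = SentenceTransformer(modules=[word_model, pooling])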

Semantic search

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')

# Corpus
corpus = [
    "Python is a programming language",
    "Machine learning uses algorithms",
    "Neural networks are powerful"
]

# Encode corpus
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)

# Query
query = "What is Python?"
query_embedding = model.encode(query, convert_to_tensor=True)

# Find most similar (semantic_search returns one result list per query)
hits = util.semantic_search(query_embedding, corpus_embeddings, top_k=3)
for hit in hits[0]:
    print(f"{corpus[hit['corpus_id']]} (score: {hit['score']:.4f})")

Similarity computation

# Cosine similarity
similarity = util.cos_sim(embedding1, embedding2)

# Dot product
similarity = util.dot_score(embedding1, embedding2)

# Pairwise cosine similarity
similarities = util.cos_sim(embeddings, embeddings)
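
The snippets above reuse embeddings from a previous encode call. For unit-normalized embeddings, dot product and cosine similarity give the same result; a self-contained check (sentences are illustrative):

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('all-MiniLM-L6-v2')
embeddings = model.encode(
    ["a cat sits on the mat", "a dog lies on the rug"],
    convert_to_tensor=True,
    normalize_embeddings=True  # unit-length vectors
)
print(util.cos_sim(embeddings[0], embeddings[1]))    # cosine similarity
print(util.dot_score(embeddings[0], embeddings[1]))  # matches cos_sim for normalized vectors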

Batch encoding

# Efficient batch processing of a large list of texts
sentences = ["sentence 1", "sentence 2"] * 1000

embeddings = model.encode(
    sentences,
    batch_size=32,
    show_progress_bar=True,
    convert_to_tensor=False  # or True for PyTorch tensors
)
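
For very large corpora, the library can also spread encoding across several processes or GPUs with its multi-process pool; a sketch (pool startup has real overhead, so this pays off only for big jobs):

# Start worker processes (one per available GPU, or CPU workers otherwise)
pool = model.start_multi_process_pool()

embeddings = model.encode_multi_process(sentences, pool, batch_size=32)

# Shut the pool down when finished
model.stop_multi_process_pool(pool)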

Fine-tuning

from sentence_transformers import SentenceTransformer, InputExample, losses
from torch.utils.data import DataLoader

# Base model to fine-tune
model = SentenceTransformer('all-MiniLM-L6-v2')

# Training data: sentence pairs with similarity labels in [0, 1]
train_examples = [
    InputExample(texts=['sentence 1', 'sentence 2'], label=0.8),
    InputExample(texts=['sentence 3', 'sentence 4'], label=0.3),
]

train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# Loss function
train_loss = losses.CosineSimilarityLoss(model)

# Train
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=10,
    warmup_steps=100
)

# Save
model.save('my-finetuned-model')
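
fit() is the classic training API. Since sentence-transformers v3 there is also a Trainer-based API built on Hugging Face datasets; a minimal sketch, assuming v3+ and the datasets package are installed:

from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer('all-MiniLM-L6-v2')

# Column layout expected by CosineSimilarityLoss: two text columns plus a score
train_dataset = Dataset.from_dict({
    "sentence1": ["sentence 1", "sentence 3"],
    "sentence2": ["sentence 2", "sentence 4"],
    "score": [0.8, 0.3],
})

trainer = SentenceTransformerTrainer(
    model=model,
    train_dataset=train_dataset,
    loss=losses.CosineSimilarityLoss(model),
)
trainer.train()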

LangChain integration

# Newer LangChain releases ship this class in the langchain_huggingface package
from langchain_community.embeddings import HuggingFaceEmbeddings

embeddings = HuggingFaceEmbeddings(
    model_name="sentence-transformers/all-mpnet-base-v2"
)

# Use with vector stores
from langchain_chroma import Chroma

# docs: a list of LangChain Document objects
vectorstore = Chroma.from_documents(
    documents=docs,
    embedding=embeddings
)
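
Querying the store then goes through the standard LangChain interface (the query text is illustrative):

results = vectorstore.similarity_search("What is Python?", k=3)
for doc in results:
    print(doc.page_content)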

LlamaIndex integration

from llama_index.embeddings.huggingface import HuggingFaceEmbedding

embed_model = HuggingFaceEmbedding(
    model_name="sentence-transformers/all-mpnet-base-v2"
)

from llama_index.core import Settings, VectorStoreIndex
Settings.embed_model = embed_model

# Use in index (documents: a list of LlamaIndex Document objects)
index = VectorStoreIndex.from_documents(documents)
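
Retrieval over the index needs only the embedding model (no LLM configured); a short usage sketch with an illustrative query:

retriever = index.as_retriever(similarity_top_k=3)
for node_with_score in retriever.retrieve("What is Python?"):
    print(node_with_score.node.get_content(), node_with_score.score)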

Model selection guide

Model                    Dimensions  Speed   Quality  Use Case
all-MiniLM-L6-v2         384         Fast    Good     General, prototyping
all-mpnet-base-v2        768         Medium  Better   Production RAG
all-roberta-large-v1     1024        Slow    Best     High accuracy needed
paraphrase-multilingual  768         Medium  Good     Multilingual

Best practices

  1. Start with all-MiniLM-L6-v2 - Good baseline for most tasks
  2. Normalize embeddings - Better for cosine similarity (see the sketch after this list)
  3. Use GPU if available - Often ~10× faster encoding
  4. Batch encoding - More efficient than encoding one sentence at a time
  5. Cache embeddings - Expensive to recompute (also sketched below)
  6. Fine-tune for domain - Improves quality on in-domain text
  7. Test different models - Quality varies by task
  8. Monitor memory - Large models need more RAM/VRAM
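
A minimal sketch of practices 2, 3, and 5 together: device selection, normalized embeddings, and a simple on-disk cache (the encode_cached helper and cache path are illustrative, not library APIs):

import numpy as np
import torch
from sentence_transformers import SentenceTransformer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = SentenceTransformer('all-MiniLM-L6-v2', device=device)

def encode_cached(texts, cache_path='embeddings.npy'):
    try:
        return np.load(cache_path)  # reuse previously computed embeddings
    except FileNotFoundError:
        embeddings = model.encode(texts, normalize_embeddings=True)
        np.save(cache_path, embeddings)
        return embeddings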

Performance

Model    Speed (sentences/sec)  Memory  Dimension
MiniLM   ~2000                  120MB   384
MPNet    ~600                   420MB   768
RoBERTa  ~300                   1.3GB   1024
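
These figures vary widely with hardware and batch size; a quick way to measure throughput on your own machine (the sentence list is synthetic):

import time
from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')
sentences = ["a short benchmark sentence"] * 1000

start = time.perf_counter()
model.encode(sentences, batch_size=32)
elapsed = time.perf_counter() - start
print(f"{len(sentences) / elapsed:.0f} sentences/sec")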

Resources

GitHub Repository

davila7/claude-code-templates
Path: cli-tool/components/skills/ai-research/rag-sentence-transformers
anthropic · anthropic-claude · claude · claude-code

Related Skills

axolotl

Design

This skill provides expert guidance for fine-tuning LLMs using the Axolotl framework, helping developers configure YAML files and implement advanced techniques like LoRA/QLoRA and DPO/KTO. Use it when working with Axolotl features, debugging code, or learning best practices for fine-tuning across 100+ models. It offers comprehensive assistance including multimodal support and performance optimization.

View skill

training-llms-megatron

Design

This skill trains massive language models (2B-462B parameters) using NVIDIA's Megatron-Core framework for maximum GPU efficiency. Use it when training models over 1B parameters, when you need advanced parallelism strategies such as tensor or pipeline parallelism, or when you need production-ready performance. It's a proven framework used for models like Nemotron and LLaMA.

View skill

pinecone

Development

Pinecone is a fully managed vector database for production AI applications, featuring auto-scaling, low latency (<100ms p95), and hybrid search. It's ideal for developers who need a serverless solution for production RAG, semantic search, or recommendation systems without managing infrastructure. Use it when you require metadata filtering, namespaces, and scaling to billions of vectors.

View skill

tensorrt-llm

Other

TensorRT-LLM is an NVIDIA-optimized library for deploying LLMs on NVIDIA GPUs, delivering up to 100x faster inference than PyTorch. Use it for production serving where you need maximum throughput, low latency, and support for features like quantization (FP8/INT4), in-flight batching, and multi-GPU scaling.

View skill