faiss

davila7
Tags: Other, RAG, FAISS, Similarity Search, Vector Search, Facebook AI, GPU Acceleration, Billion-Scale, K-NN, HNSW, High Performance, Large Scale

About

FAISS is Facebook's library for efficient similarity search and clustering of dense vectors, supporting billion-scale datasets and GPU acceleration. It's ideal for high-performance applications requiring fast k-NN search or large-scale vector retrieval without metadata filtering. Use it when low-latency similarity search on massive vector collections is critical.

Quick Install

Claude Code

Plugin Command (Recommended)

/plugin add https://github.com/davila7/claude-code-templates

Git Clone (Alternative)

git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/faiss

Copy and paste the plugin command into Claude Code to install this skill.

Documentation

FAISS - Efficient Similarity Search

Facebook AI's library for billion-scale vector similarity search.

When to use FAISS

Use FAISS when:

  • Need fast similarity search on large vector datasets (millions/billions)
  • GPU acceleration required
  • Pure vector similarity (no metadata filtering needed)
  • High throughput, low latency critical
  • Offline/batch processing of embeddings

Metrics:

  • 31,700+ GitHub stars
  • Meta/Facebook AI Research
  • Handles billions of vectors
  • C++ with Python bindings

Use alternatives instead:

  • Chroma/Pinecone: Need metadata filtering
  • Weaviate: Need full database features
  • Annoy: Simpler, fewer features

Quick start

Installation

# CPU only
pip install faiss-cpu

# GPU support (wheel name may vary by CUDA version; check the FAISS install docs)
pip install faiss-gpu

Basic usage

import faiss
import numpy as np

# Create sample data (1000 vectors, 128 dimensions)
d = 128
nb = 1000
vectors = np.random.random((nb, d)).astype('float32')

# Create index
index = faiss.IndexFlatL2(d)  # L2 distance
index.add(vectors)             # Add vectors

# Search
k = 5  # Find 5 nearest neighbors
query = np.random.random((1, d)).astype('float32')
distances, indices = index.search(query, k)

print(f"Nearest neighbors: {indices}")
print(f"Distances: {distances}")

Index types

1. Flat (exact search)

# L2 (Euclidean) distance
index = faiss.IndexFlatL2(d)

# Inner product (cosine similarity if normalized)
index = faiss.IndexFlatIP(d)

# Exact brute-force search: slowest on large datasets, but 100% accurate
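
Normalizing vectors to unit length makes inner product equal to cosine similarity, so cosine search is typically done with IndexFlatIP plus faiss.normalize_L2. A minimal sketch, reusing d, vectors, query, and k from the basic-usage example:

# Normalize rows in place so inner product == cosine similarity
faiss.normalize_L2(vectors)
faiss.normalize_L2(query)

index_ip = faiss.IndexFlatIP(d)
index_ip.add(vectors)
scores, ids = index_ip.search(query, k)  # higher score = more similar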

2. IVF (inverted file) - Fast approximate

# Create quantizer
quantizer = faiss.IndexFlatL2(d)

# IVF index with 100 clusters
nlist = 100
index = faiss.IndexIVFFlat(quantizer, d, nlist)

# Train on data
index.train(vectors)

# Add vectors
index.add(vectors)

# Search (nprobe = clusters to search)
index.nprobe = 10
distances, indices = index.search(query, k)
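
nprobe trades speed for recall. One way to see the trade-off is to compare the IVF results against exact results from a Flat index on the same data; a small sketch, reusing the variables above:

# Exact ground truth from a brute-force index
flat = faiss.IndexFlatL2(d)
flat.add(vectors)
_, ground_truth = flat.search(query, k)

# Recall@k of the IVF index at increasing nprobe
for nprobe in (1, 5, 20):
    index.nprobe = nprobe
    _, approx = index.search(query, k)
    recall = len(set(ground_truth[0]) & set(approx[0])) / k
    print(f"nprobe={nprobe}: recall@{k} = {recall:.2f}")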

3. HNSW (Hierarchical Navigable Small World) - Best quality/speed trade-off

# HNSW index
M = 32  # Number of connections per layer
index = faiss.IndexHNSWFlat(d, M)

# No training needed
index.add(vectors)

# Search
distances, indices = index.search(query, k)
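
Recall is tuned through efConstruction (graph quality at build time) and efSearch (breadth of the query-time search), both exposed on the index's hnsw field. A sketch of a tuned rebuild:

# efConstruction must be set before add()
index = faiss.IndexHNSWFlat(d, M)
index.hnsw.efConstruction = 200  # higher = better graph, slower build
index.add(vectors)

index.hnsw.efSearch = 64         # higher = better recall, slower queries
distances, indices = index.search(query, k)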

4. Product Quantization - Memory efficient

# PQ reduces memory by 16-32×
m = 8   # Number of subquantizers
nbits = 8
index = faiss.IndexPQ(d, m, nbits)

# Train and add
index.train(vectors)
index.add(vectors)
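
In practice PQ is usually combined with IVF so the compressed vectors are also partitioned into clusters; IndexIVFPQ does both. A sketch with the same d, m, and nbits as above:

# IVF partitioning + PQ compression in one index
quantizer = faiss.IndexFlatL2(d)
nlist = 100
index = faiss.IndexIVFPQ(quantizer, d, nlist, m, nbits)

index.train(vectors)  # learns coarse clusters and PQ codebooks
index.add(vectors)

index.nprobe = 10
distances, indices = index.search(query, k)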

Save and load

# Save index
faiss.write_index(index, "large.index")

# Load index
index = faiss.read_index("large.index")

# Continue using
distances, indices = index.search(query, k)

GPU acceleration

# Single GPU
res = faiss.StandardGpuResources()
index_cpu = faiss.IndexFlatL2(d)
index_gpu = faiss.index_cpu_to_gpu(res, 0, index_cpu)  # GPU 0

# Multi-GPU
index_gpu = faiss.index_cpu_to_all_gpus(index_cpu)

# 10-100× faster than CPU
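
write_index only serializes CPU indexes, so a GPU index is normally copied back to the host before saving. A minimal sketch (the filename is just an example):

# Copy the GPU index back to CPU memory for serialization
index_cpu = faiss.index_gpu_to_cpu(index_gpu)
faiss.write_index(index_cpu, "gpu_built.index")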

LangChain integration

from langchain_community.vectorstores import FAISS
from langchain_openai import OpenAIEmbeddings

# Create FAISS vector store from docs (a list of LangChain Document objects)
vectorstore = FAISS.from_documents(docs, OpenAIEmbeddings())

# Save
vectorstore.save_local("faiss_index")

# Load
vectorstore = FAISS.load_local(
    "faiss_index",
    OpenAIEmbeddings(),
    allow_dangerous_deserialization=True
)

# Search
results = vectorstore.similarity_search("query", k=5)
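
The store also plugs into chains as a retriever; a short sketch:

# Expose the FAISS store as a LangChain retriever
retriever = vectorstore.as_retriever(search_kwargs={"k": 5})
hits = retriever.invoke("query")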

LlamaIndex integration

from llama_index.vector_stores.faiss import FaissVectorStore
import faiss

# Create FAISS index
d = 1536  # must match your embedding model's output dimension
faiss_index = faiss.IndexFlatL2(d)

vector_store = FaissVectorStore(faiss_index=faiss_index)
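
To build an index over documents with this store, it's typically wrapped in a StorageContext; a sketch assuming the llama_index.core module layout and a documents list loaded elsewhere:

from llama_index.core import VectorStoreIndex, StorageContext

# Wire the FAISS-backed store into LlamaIndex's storage layer
storage_context = StorageContext.from_defaults(vector_store=vector_store)
index = VectorStoreIndex.from_documents(documents, storage_context=storage_context)

query_engine = index.as_query_engine()
response = query_engine.query("query")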

Best practices

  1. Choose right index type - Flat for <10K, IVF for 10K-1M, HNSW for quality
  2. Normalize for cosine - Use IndexFlatIP with normalized vectors
  3. Use GPU for large datasets - 10-100× faster
  4. Save trained indices - Training is expensive
  6. Tune nprobe/efSearch - nprobe (IVF) and efSearch (HNSW) trade speed for accuracy
  6. Monitor memory - PQ for large datasets
  7. Batch queries - Search many queries in one call for better GPU utilization (see the sketch below)
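
For example, search() accepts a whole matrix of queries in a single call, which is how batching works in practice (reusing index, d, and k from the quick start):

# A batch of 64 queries searched in one call
queries = np.random.random((64, d)).astype('float32')
distances, indices = index.search(queries, k)  # both results have shape (64, k)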

Performance

Index Type   Build Time   Search Time   Memory   Accuracy
Flat         Fast         Slow          High     100%
IVF          Medium       Fast          Medium   95-99%
HNSW         Slow         Fastest       High     99%
PQ           Medium       Fast          Low      90-95%

Resources

GitHub Repository

davila7/claude-code-templates
Path: cli-tool/components/skills/ai-research/rag-faiss
Topics: anthropic, anthropic-claude, claude, claude-code

Related Skills

pinecone

Development

Pinecone is a fully managed vector database for production AI applications, featuring auto-scaling, low latency (<100ms p95), and hybrid search. It's ideal for developers who need a serverless solution for production RAG, semantic search, or recommendation systems without managing infrastructure. Use it when you require metadata filtering, namespaces, and scaling to billions of vectors.

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers and offers key features like tool calling, memory management, and vector store retrieval. Use it for rapid prototyping or deploying production systems like chatbots, autonomous agents, and question-answering tools.

View skill

nemo-curator

Development

nemo-curator is a GPU-accelerated toolkit for curating high-quality, multi-modal datasets for LLM training. It provides fast operations like fuzzy deduplication, quality filtering, and PII redaction, scaling across GPUs with RAPIDS. Use it to efficiently clean web-scraped data or deduplicate large corpora.

View skill

dspy

Meta

DSPy is a declarative framework for systematically building and optimizing complex AI systems like RAG pipelines and agents. It enables developers to program language models through modular components while automatically optimizing prompts using data-driven methods. Use it when you need maintainable, portable AI workflows instead of manual prompt engineering.

View skill