qdrant-vector-search
About
The qdrant-vector-search skill provides high-performance vector similarity search for building production RAG systems. It enables fast nearest-neighbor search, hybrid search with filtering, and scalable Rust-based vector storage. Use it when you need low-latency semantic search, horizontal scalability, and full control over your data.
Quick Installation
Claude Code
Recommended:
npx skills add davila7/claude-code-templates -a claude-code

Plugin install:
/plugin add https://github.com/davila7/claude-code-templates

Manual install:
git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/qdrant-vector-search

Copy one of these commands and paste it into Claude Code to install the skill.
GitHub Repository
Related Skills
huggingface-tokenizers
This skill provides high-performance tokenization using HuggingFace's Rust-based library, processing 1GB of text in under 20 seconds. It supports BPE, WordPiece, and Unigram algorithms while enabling custom tokenizer training and alignment tracking. Use it when you need production-fast tokenization or to build custom tokenizers integrated with the transformers ecosystem.
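The custom-training capability mentioned above can be sketched with the `tokenizers` package itself. This is a toy illustration under assumptions: the two-sentence corpus, vocabulary size, and special tokens are made up for the example.

```python
# Train a tiny BPE tokenizer from scratch with HuggingFace tokenizers.
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
tokenizer.pre_tokenizer = Whitespace()

# Toy corpus and small vocab; real training would stream a large corpus.
trainer = BpeTrainer(special_tokens=["[UNK]"], vocab_size=100)
tokenizer.train_from_iterator(
    ["low lower lowest", "new newer newest"], trainer
)

enc = tokenizer.encode("lower newest")
print(enc.tokens)
```

The `Encoding` object returned by `encode` also carries offset information, which is what the alignment tracking mentioned above refers to: each token can be mapped back to its character span in the input.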
crewai-multi-agent
CrewAI is a lightweight multi-agent orchestration framework for building teams of specialized AI agents that collaborate autonomously on complex tasks. It enables role-based agent collaboration with memory and supports sequential or hierarchical workflows for production use. The framework is built without LangChain dependencies for lean, fast execution.
chroma
Chroma is an open-source embedding database for AI applications that provides vector search, metadata filtering, and a simple API. It's ideal for building RAG applications and semantic search, scaling from local development to production. Use it when you need a self-hosted vector database for document retrieval and embedding storage.
training-llms-megatron
This skill trains massive LLMs (2B-462B parameters) using NVIDIA's Megatron-Core framework for maximum GPU efficiency. Use it when training models over 1B parameters and needing advanced parallelism like tensor, pipeline, or expert parallelism. It's a production-ready framework proven on models like Nemotron and LLaMA.
