blip-2-vision-language
About
BLIP-2 is a vision-language framework that connects a frozen image encoder to a large language model for multimodal tasks. Use it for zero-shot image captioning, visual question answering, or image-text retrieval without task-specific fine-tuning. It is well suited for developers who want to add state-of-the-art visual understanding to their LLM-based applications.
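A minimal sketch of the captioning and VQA workflows described above, using the HuggingFace Transformers BLIP-2 classes; the checkpoint name `Salesforce/blip2-opt-2.7b` and the `Question: ... Answer:` prompt format follow the published BLIP-2 documentation, but treat exact names as assumptions if your Transformers version differs.

```python
MODEL_ID = "Salesforce/blip2-opt-2.7b"  # one of several released BLIP-2 checkpoints

def load_blip2():
    # Imported lazily: loading the checkpoint pulls several GB of weights.
    from transformers import Blip2Processor, Blip2ForConditionalGeneration
    processor = Blip2Processor.from_pretrained(MODEL_ID)
    model = Blip2ForConditionalGeneration.from_pretrained(MODEL_ID)
    return processor, model

def vqa_prompt(question):
    # BLIP-2's documented prompt format for visual question answering.
    return f"Question: {question} Answer:"

def caption(image, processor, model):
    # Zero-shot captioning: the model generates a description from the image alone.
    inputs = processor(images=image, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=30)
    return processor.decode(out[0], skip_special_tokens=True).strip()

def ask(image, question, processor, model):
    # Visual question answering: pair the image with a question prompt.
    inputs = processor(images=image, text=vqa_prompt(question), return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=20)
    return processor.decode(out[0], skip_special_tokens=True).strip()
```

Because the image encoder and LLM stay frozen, the same loaded model handles both tasks; only the prompt changes.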
Quick Installation
Claude Code
Recommended:
npx skills add davila7/claude-code-templates -a claude-code
Copy this command and paste it into Claude Code to install this skill.

Alternatively, install as a plugin:
/plugin add https://github.com/davila7/claude-code-templates

Or clone manually:
git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/blip-2-vision-language
GitHub Repository
Related Skills
stable-diffusion-image-generation
Meta · This skill enables text-to-image generation and image manipulation using Stable Diffusion via HuggingFace Diffusers. It supports image generation from prompts, image-to-image translation, inpainting, and custom pipeline creation. Developers should use it when building applications requiring AI-powered visual content generation or editing.
audiocraft-audio-generation
Meta · This Claude Skill provides text-to-music and text-to-audio generation using Meta's AudioCraft PyTorch library. It enables developers to generate music from descriptions, create sound effects, and perform melody-conditioned music generation. Key capabilities include using the MusicGen and AudioGen models for controllable, high-quality stereo audio output.
whisper
Other · Whisper is OpenAI's multilingual speech recognition model for transcription and translation across 99 languages. It handles tasks like speech-to-text, podcast transcription, and processing noisy or multilingual audio. Developers should use it for robust, production-ready automatic speech recognition (ASR).
segment-anything-model
Meta · The segment-anything-model skill performs zero-shot image segmentation, allowing developers to isolate objects using prompts like points or bounding boxes, or to automatically generate all object masks. It's ideal for building annotation tools, generating training data, or processing images in new domains without task-specific training. Key capabilities include handling interactive prompts and providing strong out-of-the-box performance for various computer vision pipelines.
