
blip-2-vision-language

majiayu000
Updated 10 days ago
Tags: Design, Multimodal, Vision-Language, Image Captioning, VQA, Zero-Shot

About

BLIP-2 is a vision-language framework that connects frozen image encoders with large language models for multimodal tasks. Use it for zero-shot image captioning, visual question answering, and image-text retrieval without task-specific training. It enables developers to build applications that combine visual understanding with LLM reasoning.
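As a minimal sketch of the zero-shot captioning and VQA use cases above, here is BLIP-2 driven through the HuggingFace transformers API. It assumes the transformers, torch, Pillow, and requests packages are installed and that there is enough memory to load the 2.7B-parameter OPT variant; the COCO image URL is just an illustrative example input.

```python
# Zero-shot image captioning and VQA with BLIP-2 via HuggingFace transformers.
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# The frozen vision encoder, Q-Former, and language model ship together
# in one checkpoint; no task-specific fine-tuning is needed.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"  # example image
image = Image.open(requests.get(url, stream=True).raw)

# Captioning: with no text prompt, the model describes the image.
inputs = processor(images=image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
caption = processor.decode(out[0], skip_special_tokens=True).strip()
print(caption)

# VQA: prefix a question and the model answers it zero-shot.
question = "Question: how many cats are there? Answer:"
inputs = processor(images=image, text=question, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=10)
answer = processor.decode(out[0], skip_special_tokens=True).strip()
print(answer)
```

Swapping the checkpoint name (e.g. a FLAN-T5 variant) changes the frozen language model without changing this calling code.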

Quick Install

Claude Code

Recommended (Primary):
npx skills add majiayu000/claude-skill-registry -a claude-code

Plugin Command (Alternative):
/plugin add https://github.com/majiayu000/claude-skill-registry

Git Clone (Alternative):
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/blip-2-vision-language

Copy and paste one of these commands into Claude Code to install this skill.

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/blip-2

Related Skills

stable-diffusion-image-generation

Meta

This skill enables text-to-image generation and image manipulation using Stable Diffusion via HuggingFace Diffusers. It supports image generation from prompts, image-to-image translation, inpainting, and custom pipeline creation. Developers should use it when building applications requiring AI-powered visual content generation or editing.


audiocraft-audio-generation

Meta

This Claude Skill provides text-to-music and text-to-audio generation using Meta's AudioCraft PyTorch library. It enables developers to generate music from descriptions, create sound effects, and perform melody-conditioned music generation. Key capabilities include using the MusicGen and AudioGen models for controllable, high-quality stereo audio output.


whisper

Other

Whisper is OpenAI's multilingual speech recognition model for transcription and translation across 99 languages. It handles tasks like speech-to-text, podcast transcription, and processing noisy or multilingual audio. Developers should use it for robust, production-ready automatic speech recognition (ASR).
