
stable-diffusion-image-generation

davila7
Updated 16 days ago
425 views
View on GitHub
Meta · Image Generation · Stable Diffusion · Diffusers · Text-to-Image · Multimodal · Computer Vision

About

This skill enables text-to-image generation and image editing with Stable Diffusion via HuggingFace Diffusers. It supports image generation from prompts, image-to-image translation, inpainting, and building custom pipelines. Developers should use it when building applications that require AI-powered visual content creation or editing.
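A minimal sketch of the core text-to-image workflow, assuming the widely used runwayml/stable-diffusion-v1-5 checkpoint (the skill itself does not pin a specific model):

import torch
from diffusers import StableDiffusionPipeline

# Load a Stable Diffusion checkpoint; the model ID is an assumption,
# substitute whichever checkpoint your project uses.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA GPU; omit for (slow) CPU inference

# Generate one image from a text prompt and save it to disk.
image = pipe(
    "a watercolor painting of a lighthouse at dawn",
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]
image.save("lighthouse.png")

Image-to-image translation and inpainting follow the same pattern through StableDiffusionImg2ImgPipeline and StableDiffusionInpaintPipeline, which additionally take an input image (and, for inpainting, a mask).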

Quick Installation

Claude Code

Recommended
Primary
npx skills add davila7/claude-code-templates -a claude-code
Plugin Command (Alternative)
/plugin add https://github.com/davila7/claude-code-templates
Git Clone (Alternative)
git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/stable-diffusion-image-generation

Copy this command and paste it into Claude Code to install this skill.

GitHub Repository

davila7/claude-code-templates
Path: cli-tool/components/skills/ai-research/multimodal-stable-diffusion
anthropic · anthropic-claude · claude · claude-code

Related Skills

blip-2-vision-language

Design

BLIP-2 is a vision-language framework that connects a frozen image encoder with a large language model for multimodal tasks. Use it for zero-shot image captioning, visual question answering, or image-text retrieval without task-specific fine-tuning. It's ideal for developers needing to add state-of-the-art visual understanding to LLM-based applications.
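As a sketch of what this looks like in practice (BLIP-2 ships with HuggingFace Transformers; the checkpoint name and image path are assumptions):

import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Checkpoint is an assumption; Salesforce publishes several BLIP-2 variants.
processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16
).to("cuda")

image = Image.open("photo.jpg")  # any local image
inputs = processor(images=image, return_tensors="pt").to("cuda", torch.float16)

# Zero-shot captioning: no task-specific fine-tuning required.
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True).strip())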

View Skill

audiocraft-audio-generation

Meta

This Claude Skill provides text-to-music and text-to-audio generation using Meta's AudioCraft PyTorch library. It enables developers to generate music from descriptions, create sound effects, and perform melody-conditioned music generation. Key capabilities include using the MusicGen and AudioGen models for controllable, high-quality stereo audio output.
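A hedged sketch of the MusicGen workflow following the AudioCraft API (model size, prompt, and duration are assumptions):

from audiocraft.models import MusicGen
from audiocraft.data.audio import audio_write

# Load a pretrained MusicGen checkpoint; "small" trades quality for speed.
model = MusicGen.get_pretrained("facebook/musicgen-small")
model.set_generation_params(duration=8)  # seconds of generated audio

# Generate one clip per text description in the batch.
wav = model.generate(["lo-fi hip hop beat with warm piano"])

# audio_write appends the .wav extension and applies loudness normalization.
audio_write("lofi_beat", wav[0].cpu(), model.sample_rate, strategy="loudness")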

View Skill

whisper

Other

Whisper is OpenAI's multilingual speech recognition model for transcription and translation across 99 languages. It handles tasks like speech-to-text, podcast transcription, and processing noisy or multilingual audio. Developers should use it for robust, production-ready automatic speech recognition (ASR).
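A minimal transcription sketch using the openai-whisper package (model size and file names are assumptions):

import whisper

# "base" balances speed and accuracy; larger models improve robustness.
model = whisper.load_model("base")

# Transcribe in the audio's own language.
result = model.transcribe("podcast_episode.mp3")
print(result["text"])

# Or translate any supported language into English.
result = model.transcribe("interview_de.mp3", task="translate")
print(result["text"])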

View Skill

segment-anything-model

Meta

The segment-anything-model skill performs zero-shot image segmentation, allowing developers to isolate objects using prompts like points or bounding boxes, or to automatically generate all object masks. It's ideal for building annotation tools, generating training data, or processing images in new domains without task-specific training. Key capabilities include handling interactive prompts and providing strong out-of-the-box performance for various computer vision pipelines.
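A sketch of point-prompted segmentation with the segment-anything package (checkpoint path, image, and click coordinates are assumptions):

import cv2
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM checkpoint; vit_b is the smallest of the three released models.
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SAM expects RGB; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("scene.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)

# One foreground click at pixel (500, 375); label 1 marks foreground.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),
    multimask_output=True,  # return three candidate masks
)
best_mask = masks[np.argmax(scores)]  # boolean HxW array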

View Skill