instructor
About
Instructor is a structured-output library that extracts validated data from LLM responses using Pydantic schemas. It automatically retries failed extractions and provides type-safe JSON handling with streaming support. Use it when you need reliable, validated data extraction from LLMs such as OpenAI or Anthropic.
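A minimal sketch of the extraction pattern described above, using Instructor's documented `from_openai` wrapper and `response_model` parameter; the model name, schema fields, and example text are assumptions for illustration, and running the call requires an OpenAI API key:

```python
from pydantic import BaseModel

class UserInfo(BaseModel):
    """Schema that the extracted LLM output must satisfy."""
    name: str
    age: int

def extract_user(text: str) -> UserInfo:
    # Imported inside the function so the schema above can be used
    # even where the instructor/openai packages are not installed.
    import instructor
    from openai import OpenAI

    # instructor.from_openai patches the client so response_model
    # drives schema-validated extraction; on validation failure the
    # library re-prompts automatically up to max_retries times.
    client = instructor.from_openai(OpenAI())
    return client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        response_model=UserInfo,
        max_retries=2,
        messages=[{"role": "user", "content": text}],
    )
```

The return value is a validated `UserInfo` instance rather than raw JSON, so downstream code gets typed fields instead of parsing strings.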
Quick Install
Claude Code
Recommended: npx skills add davila7/claude-code-templates -a claude-code
Plugin: /plugin add https://github.com/davila7/claude-code-templates
Manual: git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/instructor
Copy a command and paste it into Claude Code to install this skill.
GitHub Repository
Related Skills
whisper
Other: Whisper is OpenAI's multilingual speech recognition model for transcription and translation across 99 languages. It handles tasks like speech-to-text, podcast transcription, and processing noisy or multilingual audio. Developers should use it for robust, production-ready automatic speech recognition (ASR).
grpo-rl-training
Design: This skill provides expert guidance for implementing GRPO (Group Relative Policy Optimization) reinforcement learning fine-tuning using the TRL library. It's designed for training models on tasks requiring structured outputs, verifiable reasoning, or objective correctness metrics like coding or math. Key features include production-ready workflows for custom reward functions and enforcing specific output formats.
dspy
Meta: DSPy is a framework for building complex AI systems like RAG pipelines and agents using declarative programming. It automatically optimizes prompts and LM calls based on your data, moving beyond manual prompt engineering. Use it to create modular, maintainable, and systematically improved AI applications.
outlines
Meta: Outlines is a structured generation library that guarantees valid JSON/XML/code outputs by constraining LLM sampling to specific grammars or schemas. It enables type-safe generation using Pydantic models and supports fast inference with local models like Transformers and vLLM. Use this skill when you need to enforce exact output formats and maximize speed for local model deployments.
