claude-cookbooks
About
This skill provides comprehensive code examples and tutorials for Claude API integration, including tool use, multimodal features, and RAG implementations. Use it when building Claude-powered applications, learning API usage, or implementing advanced patterns like AI agents. It offers practical guidance for developers working with Claude's capabilities across various integration scenarios.
Documentation
Claude Cookbooks Skill
Comprehensive code examples and guides for building with Claude AI, sourced from the official Anthropic cookbooks repository.
When to Use This Skill
This skill should be triggered when:
- Learning how to use Claude API
- Implementing Claude integrations
- Building applications with Claude
- Working with tool use and function calling
- Implementing multimodal features (vision, image analysis)
- Setting up RAG (Retrieval Augmented Generation)
- Integrating Claude with third-party services
- Building AI agents with Claude
- Optimizing prompts for Claude
- Implementing advanced patterns (caching, sub-agents, etc.)
Quick Reference
Basic API Usage
import anthropic
client = anthropic.Anthropic(api_key="your-api-key")
# Simple message
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Hello, Claude!"
    }]
)
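The response object carries the generated text plus token counts; a minimal way to read them back (assuming the call above succeeded):

# Extract the generated text and token usage from the response
print(response.content[0].text)
print(response.usage.input_tokens, response.usage.output_tokens)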
Tool Use (Function Calling)
# Define a tool
tools = [{
    "name": "get_weather",
    "description": "Get current weather for a location",
    "input_schema": {
        "type": "object",
        "properties": {
            "location": {"type": "string", "description": "City name"}
        },
        "required": ["location"]
    }
}]
# Use the tool
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in San Francisco?"}]
)
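The call above only lets Claude request the tool; your code still has to execute it and return the result. A minimal sketch of that round trip, assuming get_weather() is a function you implement yourself:

# When Claude decides to call a tool, stop_reason is "tool_use"
if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    weather = get_weather(**tool_use.input)  # your own implementation

    # Return the result so Claude can finish its answer
    final = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        tools=tools,
        messages=[
            {"role": "user", "content": "What's the weather in San Francisco?"},
            {"role": "assistant", "content": response.content},
            {"role": "user", "content": [{
                "type": "tool_result",
                "tool_use_id": tool_use.id,
                "content": str(weather)
            }]}
        ]
    )
    print(final.content[0].text)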
Vision (Image Analysis)
# Analyze an image
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": [
            {
                "type": "image",
                "source": {
                    "type": "base64",
                    "media_type": "image/jpeg",
                    "data": base64_image
                }
            },
            {"type": "text", "text": "Describe this image"}
        ]
    }]
)
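The base64_image variable above is assumed to already exist; one way to produce it from a local JPEG:

import base64

# Base64-encode a local image file for the request payload
with open("photo.jpg", "rb") as f:
    base64_image = base64.standard_b64encode(f.read()).decode("utf-8")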
Prompt Caching
# Use prompt caching for efficiency
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    system=[{
        "type": "text",
        "text": "Large system prompt here...",
        "cache_control": {"type": "ephemeral"}
    }],
    messages=[{"role": "user", "content": "Your question"}]
)
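To verify the cache is being hit, the usage block on each response reports cache activity (field names as documented for the Messages API):

# Cache writes are billed once; later requests read the cached prefix at a discount
print(response.usage.cache_creation_input_tokens)  # tokens written to the cache
print(response.usage.cache_read_input_tokens)      # tokens served from the cache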
Key Capabilities Covered
1. Classification
- Text classification techniques
- Sentiment analysis
- Content categorization
- Multi-label classification
2. Retrieval Augmented Generation (RAG)
- Vector database integration
- Semantic search
- Context retrieval
- Knowledge base queries
3. Summarization
- Document summarization
- Meeting notes
- Article condensing
- Multi-document synthesis
4. Text-to-SQL
- Natural language to SQL queries
- Database schema understanding
- Query optimization
- Result interpretation
5. Tool Use & Function Calling
- Tool definition and schema
- Parameter validation
- Multi-tool workflows
- Error handling
6. Multimodal
- Image analysis and OCR
- Chart/graph interpretation
- Visual question answering
- Image generation integration
7. Advanced Patterns
- Agent architectures
- Sub-agent delegation
- Prompt optimization
- Cost optimization with caching
Repository Structure
The cookbooks are organized into these main categories:
- capabilities/ - Core AI capabilities (classification, RAG, summarization, text-to-SQL)
- tool_use/ - Function calling and tool integration examples
- multimodal/ - Vision and image-related examples
- patterns/ - Advanced patterns like agents and workflows
- third_party/ - Integrations with external services (Pinecone, LlamaIndex, etc.)
- claude_agent_sdk/ - Agent SDK examples and templates
- misc/ - Additional utilities (PDF upload, JSON mode, evaluations, etc.)
Reference Files
This skill includes comprehensive documentation in references/:
- main_readme.md - Main repository overview
- capabilities.md - Core capabilities documentation
- tool_use.md - Tool use and function calling guides
- multimodal.md - Vision and multimodal capabilities
- third_party.md - Third-party integrations
- patterns.md - Advanced patterns and agents
- index.md - Complete reference index
Common Use Cases
Building a Customer Service Agent
- Define tools for CRM access, ticket creation, knowledge base search
- Use tool use API to handle function calls
- Implement conversation memory
- Add fallback mechanisms
See: references/tool_use.md#customer-service
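A sketch of how two of those tools might be declared; the names and fields are hypothetical, so adapt them to your ticketing system and knowledge base:

cs_tools = [
    {
        "name": "search_knowledge_base",
        "description": "Search help articles relevant to the customer's question",
        "input_schema": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"]
        }
    },
    {
        "name": "create_ticket",
        "description": "Open a support ticket when the issue cannot be resolved automatically",
        "input_schema": {
            "type": "object",
            "properties": {
                "summary": {"type": "string"},
                "priority": {"type": "string", "enum": ["low", "normal", "high"]}
            },
            "required": ["summary"]
        }
    }
]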
Implementing RAG
- Create embeddings of your documents
- Store in vector database (Pinecone, etc.)
- Retrieve relevant context on query
- Augment Claude's response with context
See: references/capabilities.md#rag
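A minimal sketch of the retrieve-then-augment step, assuming a hypothetical search_index(query, k) helper that returns the top-k chunks from your vector store, plus the client created in the Quick Reference:

def answer_with_rag(question: str) -> str:
    # search_index() is a placeholder for your vector-store query (Pinecone, etc.)
    chunks = search_index(question, k=5)
    context = "\n\n".join(chunks)
    response = client.messages.create(
        model="claude-3-5-sonnet-20241022",
        max_tokens=1024,
        system="Answer only from the provided context; say so if the answer is not in it.",
        messages=[{
            "role": "user",
            "content": f"<context>\n{context}\n</context>\n\nQuestion: {question}"
        }]
    )
    return response.content[0].text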
Processing Documents with Vision
- Convert document to images or PDF
- Use vision API to extract content
- Structure the extracted data
- Validate and post-process
See: references/multimodal.md#vision
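One way to sketch those steps for a PDF, using the third-party pdf2image library (requires poppler; the file name and prompt are illustrative):

import base64
import io
from pdf2image import convert_from_path

# Render each PDF page to PNG and base64-encode it
pages = []
for page in convert_from_path("invoice.pdf", dpi=150):
    buf = io.BytesIO()
    page.save(buf, format="PNG")
    pages.append(base64.standard_b64encode(buf.getvalue()).decode("utf-8"))

content = [
    {"type": "image",
     "source": {"type": "base64", "media_type": "image/png", "data": p}}
    for p in pages
]
content.append({"type": "text",
                "text": "Extract the invoice number, date, and total as JSON."})

response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": content}]
)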
Building Multi-Agent Systems
- Define specialized agents for different tasks
- Implement routing logic
- Use sub-agents for delegation
- Aggregate results
See: references/patterns.md#agents
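A sketch of the routing step, assuming hypothetical billing_agent() and support_agent() specialists that each wrap their own Claude call:

def billing_agent(msg: str) -> str: ...   # hypothetical specialist
def support_agent(msg: str) -> str: ...   # hypothetical specialist

ROUTES = {"billing": billing_agent, "support": support_agent}

def route(user_message: str) -> str:
    # Use a small, cheap model to classify the request, then delegate
    decision = client.messages.create(
        model="claude-3-5-haiku-20241022",
        max_tokens=10,
        system="Classify the request as 'billing' or 'support'. Reply with one word.",
        messages=[{"role": "user", "content": user_message}]
    )
    label = decision.content[0].text.strip().lower()
    return ROUTES.get(label, support_agent)(user_message)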
Best Practices
API Usage
- Use the appropriate model for the task (Sonnet for balance, Haiku for speed, Opus for the most complex tasks)
- Implement retry logic with exponential backoff (see the sketch after this list)
- Handle rate limits gracefully
- Monitor token usage for cost optimization
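A sketch of retry-with-backoff; note the official SDK also retries some failures automatically (configurable via max_retries on the client), so tune this to your needs:

import time
import anthropic

def create_with_retry(client, max_attempts=5, **kwargs):
    for attempt in range(max_attempts):
        try:
            return client.messages.create(**kwargs)
        except anthropic.RateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(2 ** attempt)  # exponential backoff: 1s, 2s, 4s, ...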
Prompt Engineering
- Be specific and clear in instructions
- Provide examples when needed
- Use system prompts for consistent behavior
- Structure outputs with JSON mode when needed
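One common way to get raw JSON back is to prefill the assistant turn so Claude continues from an opening brace; the extraction prompt below is illustrative:

# Prefill "{" so the reply continues as JSON; prepend it when parsing
response = client.messages.create(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[
        {"role": "user",
         "content": "Extract the name and email from 'Jane Doe <jane@example.com>' as JSON."},
        {"role": "assistant", "content": "{"}
    ]
)
json_text = "{" + response.content[0].text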
Tool Use
- Define clear, specific tool schemas
- Validate inputs and outputs
- Handle errors gracefully
- Keep tool descriptions concise but informative
Multimodal
- Use high-quality images (higher resolution = better results)
- Be specific about what to extract/analyze
- Respect size limits (5MB per image)
- Use appropriate image formats (JPEG, PNG, GIF, WebP)
Performance Optimization
Prompt Caching
- Cache large system prompts
- Cache frequently used context
- Monitor cache hit rates
- Balance caching vs. fresh content
Cost Optimization
- Use Haiku for simple tasks
- Implement prompt caching for repeated context
- Set appropriate max_tokens
- Batch similar requests
Latency Optimization
- Use streaming for long responses (a streaming sketch follows this list)
- Minimize message history
- Optimize image sizes
- Use appropriate timeout values
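A minimal streaming sketch with the Python SDK's messages.stream() helper, reusing the client from the Quick Reference:

# Print tokens as they arrive instead of waiting for the full response
with client.messages.stream(
    model="claude-3-5-sonnet-20241022",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a short story."}]
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)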
Working with This Skill
For Beginners
Start with references/main_readme.md and explore basic examples in references/capabilities.md
For Specific Features
- Tool use → references/tool_use.md
- Vision → references/multimodal.md
- RAG → references/capabilities.md#rag
- Agents → references/patterns.md#agents
For Code Examples
Each reference file contains practical, copy-pasteable code examples
Examples Available
The cookbook includes 50+ practical examples including:
- Customer service chatbot with tool use
- RAG with Pinecone vector database
- Document summarization
- Image analysis and OCR
- Chart/graph interpretation
- Natural language to SQL
- Content moderation filter
- Automated evaluations
- Multi-agent systems
- Prompt caching optimization
Notes
- All examples use official Anthropic Python SDK
- Code is production-ready with error handling
- Examples follow current API best practices
- Regular updates from Anthropic team
- Community contributions welcome
Skill Source
This skill was created from the official Anthropic Claude Cookbooks repository: https://github.com/anthropics/claude-cookbooks
Repository cloned and processed on: 2025-10-29
Quick Install
/plugin add https://github.com/2025Emma/vibe-coding-cn/tree/main/claude-cookbooks
Copy and paste this command in Claude Code to install this skill.
GitHub Repository
Related Skills
sglang
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
evaluating-llms-harness
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
llamaguard
LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.
langchain
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
