
prompting

danielmiessler
Updated Today

About

This skill provides Anthropic's prompt and context engineering best practices for developers working with Claude. It helps optimize AI agent performance through principles like clarity, structure, and progressive discovery while treating context as a finite resource. Use it for guidance on prompt engineering, context management, and improving signal-to-noise ratio in your LLM interactions.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/danielmiessler/PAIPlugin

Git Clone (Alternative)
git clone https://github.com/danielmiessler/PAIPlugin.git ~/.claude/skills/prompting

Copy and paste one of these commands into Claude Code to install this skill.

Documentation

Prompting Skill

When to Activate This Skill

  • Prompt engineering questions
  • Context engineering guidance
  • AI agent design
  • Prompt structure help
  • Best practices for LLM prompts
  • Agent configuration

Core Philosophy

Context engineering = curating the optimal set of tokens available to the model during LLM inference

Primary Goal: Find the smallest possible set of high-signal tokens that maximizes desired outcomes

Key Principles

1. Context Is a Finite Resource

  • LLMs have a limited "attention budget"
  • Performance degrades as context grows
  • Every token depletes capacity
  • Treat context as precious
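The "finite resource" framing can be made concrete as a budget check. Below is a minimal, hypothetical sketch (the `ContextBudget` class and the 4-characters-per-token ratio are illustrative assumptions, not an exact tokenizer):

```python
# Hypothetical sketch: treat the context window as a budget and refuse to
# add content once the budget would be exceeded.

class ContextBudget:
    def __init__(self, max_tokens):
        self.max_tokens = max_tokens
        self.used = 0

    def try_add(self, text):
        """Return True and charge the budget, or False if it would overflow."""
        cost = len(text) // 4 + 1  # crude token estimate, not a real tokenizer
        if self.used + cost > self.max_tokens:
            return False  # budget exhausted; summarize or drop instead
        self.used += cost
        return True
```

The point is the discipline, not the arithmetic: every candidate addition must justify its token cost before entering the window.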

2. Optimize Signal-to-Noise

  • Clear, direct language over verbose explanations
  • Remove redundant information
  • Focus on high-value tokens

3. Progressive Discovery

  • Use lightweight identifiers instead of full data dumps
  • Load detailed info dynamically when needed
  • Just-in-time information loading
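A minimal sketch of the pattern, with hypothetical names (`DOCUMENTS`, `list_documents`, `load_document` are illustrative, not a real API): the initial context holds only cheap identifiers, and full records are fetched on demand.

```python
# Hypothetical store: identifiers are cheap, full contents are expensive.
DOCUMENTS = {
    "doc-001": "Full text of the onboarding guide...",
    "doc-002": "Full text of the billing policy...",
}

def list_documents():
    """Return lightweight identifiers, not full contents (low token cost)."""
    return sorted(DOCUMENTS.keys())

def load_document(doc_id):
    """Load the full record just-in-time, only when it is actually needed."""
    return DOCUMENTS[doc_id]

# Initial context carries only identifiers; detail arrives on demand.
ids = list_documents()
detail = load_document(ids[0])
```

Only the identifier list sits in context by default; each full document costs tokens only if the task requires it.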

Markdown Structure Standards

Use clear semantic sections:

  • Background Information: Minimal essential context
  • Instructions: Imperative voice, specific, actionable
  • Examples: Show don't tell, concise, representative
  • Constraints: Boundaries, limitations, success criteria
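The four sections above can be assembled programmatically. This is a hypothetical sketch (`build_prompt` and its arguments are illustrative), showing one way to keep the semantic structure consistent:

```python
# Hypothetical helper: assemble a prompt from the four semantic sections.
def build_prompt(background, instructions, examples, constraints):
    sections = [
        ("Background Information", background),
        ("Instructions", instructions),
        ("Examples", examples),
        ("Constraints", constraints),
    ]
    return "\n\n".join(f"## {title}\n{body}" for title, body in sections)

prompt = build_prompt(
    background="You review customer tax filings.",
    instructions="Validate input before processing.",
    examples='Input: {"amount": 100} -> Output: {"tax": 8}',
    constraints="Respond in under 100 words.",
)
```

Keeping section order and headers fixed makes prompts easier to diff, review, and trim.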

Writing Style

Clarity Over Completeness

✅ Good: "Validate input before processing"
❌ Bad: "You should always make sure to validate..."

Be Direct

✅ Good: "Use the calculate_tax tool with amount and jurisdiction"
❌ Bad: "You might want to consider using..."

Use Structured Lists

✅ Good: Bulleted constraints
❌ Bad: A paragraph of requirements

Context Management

Just-in-Time Loading

Don't load full data dumps; use lightweight references and load details when needed

Structured Note-Taking

Persist important info outside context window
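One way to sketch this, with hypothetical helpers (`append_note` and `recall` are illustrative names, and JSON-on-disk is just one possible store): findings are written outside the context window and reloaded selectively.

```python
import json
import os

def append_note(path, topic, note):
    """Persist a finding outside the context window."""
    notes = []
    if os.path.exists(path):
        with open(path) as f:
            notes = json.load(f)
    notes.append({"topic": topic, "note": note})
    with open(path, "w") as f:
        json.dump(notes, f)

def recall(path, topic):
    """Reload only the notes relevant to the current task."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return [n["note"] for n in json.load(f) if n["topic"] == topic]
```

Because recall is filtered by topic, only the relevant slice re-enters the context window.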

Sub-Agent Architecture

Delegate subtasks to specialized agents with minimal context
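A minimal sketch of the delegation pattern, under the assumption that a sub-agent call can be modeled as a function (`run_subagent` and `delegate` are hypothetical stand-ins, not a real agent API): each subtask carries only the context relevant to it, and compact results flow back to the parent.

```python
def run_subagent(task, context):
    """Stand-in for invoking a specialized agent; returns a compact summary."""
    return f"{task}: processed {len(context)} chars"

def delegate(subtasks):
    """Each subtask is paired with only the context slice it needs."""
    return [run_subagent(task, ctx) for task, ctx in subtasks]

summaries = delegate([
    ("summarize billing policy", "Invoices are net-30. Late fee is 2%."),
    ("extract auth rules", "Tokens expire after 1 hour."),
])
```

The parent context holds only the short summaries, not each sub-agent's full working context.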

Best Practices Checklist

  • Markdown headers for organization
  • Clear, direct, minimal language
  • No redundant information
  • Actionable instructions
  • Concrete examples
  • Clear constraints
  • Just-in-time loading when appropriate

Anti-Patterns

❌ Verbose explanations
❌ Historical context dumping
❌ Overlapping tool definitions
❌ Premature information loading
❌ Vague instructions ("might", "could", "should")

Supplementary Resources

For full standards: read ${PAI_DIR}/skills/prompting/CLAUDE.md

Based On

Anthropic's "Effective Context Engineering for AI Agents"

GitHub Repository

danielmiessler/PAIPlugin
Path: skills/prompting

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
