MCP Hub

abstraction-concrete-examples

lyndonkl
Updated: Yesterday
83 views
Tags: test, word, ai, testing, design

About

This skill helps developers explain concepts by moving back and forth between abstract principles and concrete implementations. It is ideal for creating layered documentation, decomposing complex problems, and bridging the gap between strategy and execution. Use it when users mention explaining at different depths or making abstract concepts more concrete.

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/lyndonkl/claude
Git clone (alternative)
git clone https://github.com/lyndonkl/claude.git ~/.claude/skills/abstraction-concrete-examples

Copy and paste this command into Claude Code to install the skill.

Documentation

Abstraction Ladder Framework

Purpose

Create structured abstraction ladders showing how concepts translate from high-level principles to concrete, actionable examples. This bridges communication gaps, reveals hidden assumptions, and tests whether abstract ideas work in practice.

When to Use This Skill

  • User needs to explain the same concept to different expertise levels
  • Task requires moving between "why" (abstract) and "how" (concrete)
  • Identifying edge cases by testing principles against specific scenarios
  • Designing layered documentation (overview → details → specifics)
  • Decomposing complex problems into actionable steps
  • Validating that high-level goals translate to concrete actions
  • Bridging strategy and execution gaps

Trigger phrases: "abstraction levels", "make this concrete", "explain at different levels", "from principles to implementation", "high-level and detailed view"

What is an Abstraction Ladder?

A multi-level structure (typically 3-5 levels) connecting universal principles to concrete details:

  • Level 1 (Abstract): Universal principles, theories, values
  • Level 2: Frameworks, standards, categories
  • Level 3 (Middle): Methods, approaches, general examples
  • Level 4: Specific implementations, concrete instances
  • Level 5 (Concrete): Precise details, measurements, edge cases

Quick Example:

  • L1: "Software should be maintainable"
  • L2: "Use modular architecture"
  • L3: "Apply dependency injection"
  • L4: "UserService injects IUserRepository"
  • L5: constructor(private repo: IUserRepository) {}
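
To ground the bottom rungs, here is a minimal TypeScript sketch of the same dependency-injection example. The class and interface names (UserService, IUserRepository, findById) are illustrative, carried over from the ladder above rather than defined by this skill:

```typescript
// L4: the service depends on an abstraction, not a concrete database client.
interface IUserRepository {
  findById(id: string): Promise<{ id: string; name: string } | null>;
}

class UserService {
  // L5: the exact constructor from the ladder's bottom rung.
  constructor(private repo: IUserRepository) {}

  async getUserName(id: string): Promise<string> {
    const user = await this.repo.findById(id);
    return user?.name ?? "unknown";
  }
}

// Any implementation (SQL, in-memory, mock) can be swapped in without touching
// UserService, which is what the L1 principle "maintainable" buys at the concrete end.
class InMemoryUserRepository implements IUserRepository {
  private users = new Map([["1", { id: "1", name: "Ada" }]]);
  async findById(id: string) {
    return this.users.get(id) ?? null;
  }
}

new UserService(new InMemoryUserRepository()).getUserName("1").then(console.log); // logs "Ada"
```

Reading the snippet bottom-up recovers the same ladder: a concrete constructor (L5), an injected dependency (L4), dependency injection as a method (L3), and so on up to maintainability (L1).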

Workflow

Copy this checklist and track your progress:

Abstraction Ladder Progress:
- [ ] Step 1: Gather requirements
- [ ] Step 2: Choose approach
- [ ] Step 3: Build the ladder
- [ ] Step 4: Validate quality
- [ ] Step 5: Deliver and explain

Step 1: Gather requirements

Ask the user to clarify topic, purpose, audience, scope (suggest 4 levels), and starting point (top-down, bottom-up, or middle-out). This ensures the ladder serves the user's actual need.

Step 2: Choose approach

For straightforward cases with clear topics → Use resources/template.md. For complex cases with multiple parallel ladders or unusual constraints → Study resources/methodology.md. To see examples → Show user resources/examples/ (api-design.md, hiring-process.md).

Step 3: Build the ladder

Create abstraction-concrete-examples.md with topic, 3-5 distinct abstraction levels, connections between levels, and 2-3 edge cases. Ensure top level is universal, bottom level has measurable specifics, and transitions are logical. Direction options: top-down (principle → examples), bottom-up (observations → principles), or middle-out (familiar → both directions).
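
For illustration only, the structure Step 3 asks for can be pictured as the following TypeScript shape; the real deliverable is the abstraction-concrete-examples.md file, and none of these type or field names come from the skill itself:

```typescript
// Hypothetical model of the ladder built in Step 3 (assumed names, not part of the skill).
type Direction = "top-down" | "bottom-up" | "middle-out";

interface LadderLevel {
  level: 1 | 2 | 3 | 4 | 5;     // 1 = most abstract, 5 = most concrete
  statement: string;            // the principle, method, or detail at this rung
  connectionToNext?: string;    // why the next level down follows from this one
}

interface AbstractionLadder {
  topic: string;
  direction: Direction;         // how the ladder was constructed
  levels: LadderLevel[];        // 3-5 distinct levels
  edgeCases: string[];          // 2-3 scenarios that stress the principles
}

// Skeleton of the maintainability ladder from the Quick Example above.
const ladder: AbstractionLadder = {
  topic: "Software maintainability",
  direction: "top-down",
  levels: [
    { level: 1, statement: "Software should be maintainable" },
    { level: 3, statement: "Apply dependency injection" },
    { level: 5, statement: "constructor(private repo: IUserRepository) {}" },
  ],
  edgeCases: [
    "Hot paths where the extra indirection has a measurable cost",
    "Small scripts where a full repository layer is overkill",
  ],
};
```

Whichever direction you build in, the markdown file should make the same pieces recoverable: the topic, the ordered levels, the connection between each pair of levels, and the edge cases.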

Step 4: Validate quality

Self-assess using resources/evaluators/rubric_abstraction_concrete_examples.json. Check: each level is distinct, transitions are clear, top level is universal, bottom level is specific, edge cases reveal insights, assumptions are stated, no topic drift, serves stated purpose. Minimum standard: Average score ≥ 3.5. If any criterion < 3, revise before delivering.
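
The numeric gate itself is simple arithmetic; the sketch below assumes the rubric reduces to criteria scored 1-5 (the criterion keys and the scale are placeholders, not the actual fields of rubric_abstraction_concrete_examples.json):

```typescript
// Illustrative Step 4 gate: average >= 3.5 overall, and no single criterion below 3.
// The keys paraphrase the checklist above; the 1-5 scale is an assumption.
const scores: Record<string, number> = {
  levelsDistinct: 4,
  transitionsClear: 4,
  topLevelUniversal: 5,
  bottomLevelSpecific: 3,
  edgeCasesRevealInsights: 4,
  assumptionsStated: 4,
  noTopicDrift: 5,
  servesStatedPurpose: 4,
};

const values = Object.values(scores);
const average = values.reduce((sum, s) => sum + s, 0) / values.length;
const needsRevision = average < 3.5 || values.some((s) => s < 3);

console.log(`average ${average.toFixed(2)} -> ${needsRevision ? "revise" : "deliver"}`);
```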

Step 5: Deliver and explain

Present the completed abstraction-concrete-examples.md file. Highlight key insights revealed by the ladder, note interesting edge cases or tensions discovered, and suggest applications based on the user's original purpose.

Common Patterns

For communication across levels:

  • Share L1-L2 with executives (strategy/principles)
  • Share L2-L3 with managers (approaches/methods)
  • Share L3-L5 with implementers (details/specifics)

For validation:

  • Check if L5 reality matches L1 principles
  • Identify gaps between adjacent levels
  • Find where principles break down

For design:

  • Use L1-L2 to guide decisions
  • Use L3-L4 to specify requirements
  • Use L5 for actual implementation

Guardrails

Do:

  • State assumptions explicitly at each level
  • Test edge cases that challenge the principles
  • Make concrete levels truly concrete (numbers, measurements, specifics)
  • Make abstract levels broadly applicable (not domain-locked)
  • Ensure each level is understandable given the previous level

Don't:

  • Use vague language ("good", "better", "appropriate") without defining terms
  • Make huge conceptual jumps between levels
  • Let different levels drift to different topics
  • Skip the validation step (rubric is required)
  • Front-load expertise - explain clearly for the target audience

Quick Reference

  • Template for standard cases: resources/template.md
  • Methodology for complex cases: resources/methodology.md
  • Examples to study: resources/examples/api-design.md, resources/examples/hiring-process.md
  • Quality rubric: resources/evaluators/rubric_abstraction_concrete_examples.json

GitHub Repository

lyndonkl/claude
Path: skills/abstraction-concrete-examples

Related Skills

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.

View skill

creating-opencode-plugins

Meta

This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill