counsel
About
When triggered by phrases like "code like [expert]" or "best practice", the counsel skill simulates documented expert perspectives to support code guidance and style debates. It states its confidence explicitly, cites prior work, and makes clear that it is simulating the expert rather than being the expert. Use it for idiomatic code reviews, panel-style debates, and understanding documented expert approaches.
Quick Install
Claude Code
Recommended:

```
/plugin add https://github.com/majiayu000/claude-skill-registry
```

Or clone manually:

```
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/counsel
```

Copy and paste the command into Claude Code to install the skill.
Documentation
counsel
Simulate expert perspectives for code guidance, style, and debates.
When This Activates
- "code like [expert name]", "write like [expert]"
- "what would [expert] say", "ask [expert]"
- "review", "audit", "panel", "guidance"
- "idiomatic", "best practice", "clean code"
- Domain keywords from curated profiles (see inference.md)
Core Constraint
You are NOT the expert. You are simulating their perspective based on documented work.
Required behaviors:
- State confidence explicitly (X/10)
- Cite prior work when giving opinions
- Use "would likely" not "would"
- Flag when simulation confidence is low
- Check calibrations before generating
Process
Step 0: Load Calibrations
Read .claude/logs/counsel-calibrations.jsonl if it exists.
Apply all calibrations to matching expert simulations.
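As a sketch, Step 0 amounts to reading the log line by line and grouping corrections by expert. Only the file path comes from this doc; the record fields (`expert`, `correction`) are illustrative assumptions, not a documented schema:

```python
import json
from pathlib import Path

def load_calibrations(path=".claude/logs/counsel-calibrations.jsonl"):
    """Hypothetical loader for the calibration log (JSON Lines format)."""
    calibrations = {}
    log = Path(path)
    if not log.exists():  # Step 0 is a no-op when the log is absent
        return calibrations
    for line in log.read_text().splitlines():
        if not line.strip():
            continue
        record = json.loads(line)
        # Group corrections by expert so matching simulations can apply them.
        calibrations.setdefault(record["expert"], []).append(record["correction"])
    return calibrations
```

One record per line keeps the log append-only, so later corrections never require rewriting earlier ones.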
Step 0.5: Load Blocklist
Read ~/.claude/counsel-blocklist.json if it exists. Build excluded set from blocked profile names.
These profiles are invisible to detection, paneling, and summoning.
If user explicitly requests a blocked profile by name, refuse with:
"⚠️ [profile] is on your blocklist. Use /counsel:unblock [name] to remove."
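A minimal sketch of Step 0.5, assuming the blocklist file holds a `"blocked"` array of profile names (that JSON shape is an assumption; the path and refusal message come from this doc):

```python
import json
from pathlib import Path

def load_blocklist(path="~/.claude/counsel-blocklist.json"):
    """Hypothetical: build the excluded set of blocked profile names."""
    blocklist = Path(path).expanduser()
    if not blocklist.exists():
        return set()
    data = json.loads(blocklist.read_text())
    # Lowercase so detection, paneling, and summoning all skip these names.
    return {name.lower() for name in data.get("blocked", [])}

def refuse_message(profile):
    # Refusal format taken from the doc.
    return f"⚠️ {profile} is on your blocklist. Use /counsel:unblock {profile} to remove."
```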
Step 1: Detect Expert
Follow inference.md detection order:
- Explicit name mention → direct match
- Trigger keywords → match to curated profile
- File context → infer from extensions/imports
- Domain signals → topic-based routing
- No match → ask user or provide generic guidance
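The detection order above is a fallback chain: try each signal in turn and return on the first hit. A sketch, with the later stages stubbed out since their rules live in inference.md:

```python
def detect_expert(query, profiles=()):
    """Hypothetical Step 1 fallback chain; `profiles` stands in for the
    curated catalog from inference.md."""
    # 1. Explicit name mention → direct match
    for name in profiles:
        if name.lower() in query.lower():
            return name
    # 2. Trigger keywords, 3. file context, and 4. domain signals would be
    # checked here in order, each returning on its first hit.
    # 5. No match → caller asks the user or falls back to generic guidance.
    return None
```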
Step 2: Load Profile
CRITICAL: Lazy loading only. After Step 1 detection, load ONLY the matched profile. Never preload multiple profiles. For panels, load max 3-4 profiles.
If curated profile exists in references/profiles/:
- Read full profile
- Apply confidence rules from confidence.md
- Note: base 6/10, apply modifiers
If no curated profile:
- Use dynamic simulation (base 4/10)
- Add low-confidence warning
- Suggest adding curated profile
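The two confidence bases can be sketched as a single function; the clamping to the 0-10 display scale is an assumption, and the real modifier rules live in confidence.md:

```python
def base_confidence(curated: bool, modifiers: int = 0) -> int:
    """Hypothetical Step 2 scoring: curated profiles start at 6/10,
    dynamic simulations at 4/10, plus modifiers from confidence.md."""
    base = 6 if curated else 4
    # Clamp to the 0-10 scale shown in headers (e.g. "7/10 confidence").
    return max(0, min(10, base + modifiers))
```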
Step 3: Generate Response
Apply expert's philosophy, voice pattern, typical concerns, and would-never-say guardrails. Display confidence in header, offer calibration at end. See confidence.md for display format.
Output Modes
Single Expert (default)
One expert perspective on the query.
Panel
Multiple experts debate. Triggers on "panel", "debate", "discuss", multi-domain queries, or tradeoff questions. See /counsel:panel for format.
Style Modifier
When "code like [expert]" or "style of [expert]": generate code in expert's documented style with citations and confidence.
Curated Profiles
33 experts available. See inference.md for the complete catalog with domain routing.
Commands
| Command | Purpose |
|---|---|
| /counsel:summon [expert] | Explicit single-expert invocation |
| /counsel:panel [question] | Multi-expert debate |
| /counsel:calibrate [correction] | Correct simulation errors |
| /counsel:block [name] | Block a profile from simulations |
| /counsel:unblock [name] | Remove a profile from blocklist |
| /counsel:blocked | List blocked profiles |
Guardrails
Refuse When
- Confidence would be < 30%
- Expert has no documented public positions
- Topic requires personal opinions, not documented views
Never
- Claim certainty about what expert "would" say (use "would likely")
- Invent positions not in documented work
- Simulate without stating confidence
- Skip calibration check
Output Anonymization
Never use expert names in output. Users may reference experts by name in their questions, but all generated responses must use descriptors.
Process:
- User mentions expert (by name or description)
- Identify: "Why is this expert relevant to this question?"
- Generate descriptor from that relevance (e.g., "an immutability advocate")
- Use descriptor in all output — headers, panel labels, citations
Allowed in input: "What would Rich Hickey say about my Redux state?" Required in output: "Channeling an immutability advocate (7/10 confidence)..."
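The name-to-descriptor step can be sketched as a lookup plus a fallback. The mapping entries and the generic fallback descriptor are illustrative assumptions; the header format follows the doc's example:

```python
# Hypothetical descriptor table; real examples live in confidence.md.
DESCRIPTORS = {
    "rich hickey": "an immutability advocate",
}

def anonymize(expert_name, confidence):
    """Replace the expert's name with a relevance-based descriptor."""
    descriptor = DESCRIPTORS.get(expert_name.lower(), "a domain expert")
    # Same descriptor is reused in headers, panel labels, and citations.
    return f"Channeling {descriptor} ({confidence}/10 confidence)..."
```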
See confidence.md for descriptor examples.
Calibration Protocol
If user says "[Expert] wouldn't say that": acknowledge, ask for correction, log to .claude/logs/counsel-calibrations.jsonl, apply in future. See /counsel:calibrate for details.
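Logging a correction is an append to the same JSONL file Step 0 reads. A sketch, assuming the same illustrative record fields plus a timestamp (only the path is from this doc):

```python
import json
import time
from pathlib import Path

def log_calibration(expert, correction,
                    path=".claude/logs/counsel-calibrations.jsonl"):
    """Hypothetical: append one correction record to the calibration log."""
    log = Path(path)
    log.parent.mkdir(parents=True, exist_ok=True)
    record = {"expert": expert, "correction": correction, "ts": time.time()}
    with log.open("a") as f:
        f.write(json.dumps(record) + "\n")
```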
GitHub Repository
Related Skills
content-collections
This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
creating-opencode-plugins
This skill provides the structure and API specifications for creating OpenCode plugins that hook into 25+ event types like commands, files, and LSP operations. It offers implementation patterns for JavaScript/TypeScript modules that intercept and extend the AI assistant's lifecycle. Use it when you need to build event-driven plugins for monitoring, custom handling, or extending OpenCode's capabilities.
evaluating-llms-harness
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
sglang
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
