
reconnaissance

DNYoussef

About

The reconnaissance skill performs systematic technology and market intelligence gathering by extracting actionable insights from repositories, papers, and competitive landscapes. It's designed for developers needing structured analysis from diverse sources, using tools like WebSearch, Bash, and Grep for active probing and pattern extraction. Use it for competitive analysis, technology scans, or strategic research tasks.

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/DNYoussef/context-cascade
Git clone (alternative)
git clone https://github.com/DNYoussef/context-cascade.git ~/.claude/skills/reconnaissance

Copy and paste this command into Claude Code to install the skill.

Skill Documentation

TIER 1: CRITICAL SECTIONS

Overview

Reconnaissance is a systematic intelligence-gathering skill designed to extract actionable insights from diverse sources including GitHub repositories, academic papers, competitive products, and market landscapes. Unlike general research which synthesizes existing knowledge, reconnaissance actively probes, extracts, and structures novel intelligence for strategic decision-making.

The skill operates on the principle that raw information must be transformed through structured extraction into decision-ready artifacts. It employs a three-phase methodology: Discovery (identify sources), Extraction (pull structured data), and Analysis (synthesize intelligence).

Reconnaissance is essential for:

  • Pre-project feasibility assessments
  • Technology stack evaluation
  • Competitive positioning
  • Academic literature mining for implementation patterns
  • Market opportunity identification

Core Principles

Reconnaissance operates on 5 fundamental principles:

Principle 1: Source Diversification

Never rely on a single source type. Cross-reference findings across repositories, papers, documentation, and market signals.

In Practice:

  • Query GitHub + arXiv + product pages for any technology assessment
  • Compare stated claims against actual implementations
  • Verify recency and maintenance status of all sources

Principle 2: Structured Extraction

Raw information is useless without structure. Every reconnaissance output must follow defined schemas.

In Practice:

  • Use MANIFEST.md templates for repository extractions
  • Create comparison charts for multi-source analysis
  • Tag all findings with source, date, and confidence level
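
To make the tagging rule above concrete, here is a minimal sketch of a finding record carrying the required source, date, and confidence fields. The dataclass and its field names are illustrative assumptions, not the skill's actual MANIFEST schema.

```python
# Illustrative only: a minimal finding record that satisfies Principle 2's
# tagging rule (source, date, confidence on every extracted claim).
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Finding:
    claim: str                      # the extracted statement
    source: str                     # URL or file path the claim traces to
    source_type: str                # "repository" | "paper" | "product" | "docs"
    observed_on: date               # when the extraction was performed
    confidence: float               # 0.0-1.0, already clamped to its ceiling
    tags: list[str] = field(default_factory=list)

finding = Finding(
    claim="Project uses a plugin architecture for model backends",
    source="https://github.com/example/repo/blob/main/plugins/__init__.py",
    source_type="repository",
    observed_on=date(2025, 1, 15),
    confidence=0.95,
    tags=["architecture", "extensibility"],
)
```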

Principle 3: Evidence Grounding

Every claim must trace back to a verifiable source with explicit confidence ceiling.

In Practice:

  • Link directly to code files, paper sections, or documentation pages
  • Apply confidence ceilings: witnessed (0.95), reported (0.70), inferred (0.65)
  • Mark gaps explicitly rather than inferring without evidence

Principle 4: Actionable Output

Intelligence must drive decisions. Abstract findings without recommendations waste effort.

In Practice:

  • End every recon with specific recommendations
  • Include effort estimates and risk assessments
  • Prioritize findings by strategic impact

Principle 5: Temporal Awareness

Technology landscapes change rapidly. All reconnaissance has an expiration date.

In Practice:

  • Date-stamp all outputs prominently
  • Note last-commit dates for repositories
  • Flag papers older than 2 years as potentially outdated

When to Use

Use Reconnaissance When:

  • Evaluating a new technology for adoption
  • Analyzing competitor products or approaches
  • Extracting implementation patterns from papers
  • Assessing open-source project viability
  • Mapping market landscape before product decisions
  • Pre-mortem analysis for project risks

Do NOT Use Reconnaissance When:

  • Synthesizing already-gathered sources (use literature-synthesis)
  • Single-source deep reading (use academic-reading-workflow)
  • General question answering (use researcher)
  • Implementation planning (use research-driven-planning)

Main Workflow

Phase 1: Scope Definition

Agent: intent-analyzer
Purpose: Clarify reconnaissance objectives and boundaries

Inputs:

  • Target domain or technology
  • Strategic questions to answer
  • Time and depth constraints
  • Output format requirements

Outputs:

  • Scoped reconnaissance brief
  • Source categories to probe
  • Success criteria

Phase 2: Source Discovery

Agent: research-agent
Purpose: Identify and qualify sources

Process:

  1. GitHub search for repositories matching domain
  2. arXiv/papers search for academic sources
  3. Product/company identification for competitive analysis
  4. Documentation and blog discovery
  5. Source qualification (recency, authority, relevance)

Outputs:

  • Qualified source list with metadata
  • Source-type distribution analysis
  • Gap identification (what sources are missing?)
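
A hedged sketch of steps 1 and 5 above: searching GitHub and flagging results by recency. It assumes the public GitHub REST search endpoint; the qualification threshold and returned fields are illustrative, not part of the skill itself.

```python
# Hypothetical sketch of GitHub discovery plus recency-based qualification.
from datetime import datetime, timedelta, timezone
import requests

def discover_github_sources(query: str, max_age_days: int = 730) -> list[dict]:
    resp = requests.get(
        "https://api.github.com/search/repositories",
        params={"q": query, "sort": "stars", "order": "desc", "per_page": 20},
        headers={"Accept": "application/vnd.github+json"},
        timeout=30,
    )
    resp.raise_for_status()
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    qualified = []
    for repo in resp.json().get("items", []):
        pushed_at = datetime.fromisoformat(repo["pushed_at"].replace("Z", "+00:00"))
        qualified.append({
            "name": repo["full_name"],
            "url": repo["html_url"],
            "stars": repo["stargazers_count"],
            "last_commit": repo["pushed_at"],
            "stale": pushed_at < cutoff,   # flag staleness, don't silently drop
        })
    return qualified
```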

Phase 3: Structured Extraction

Agent: code-extractor or document-analyst
Purpose: Pull structured data from each source

Process:

  1. Apply extraction template per source type:
    • Repository: MANIFEST.md (structure, deps, patterns)
    • Paper: KEY-FINDINGS.md (methods, results, code availability)
    • Product: FEATURE-MATRIX.md (capabilities, pricing, integration)
  2. Normalize extracted data to common schema
  3. Cross-reference overlapping claims

Outputs:

  • Per-source extraction documents
  • Normalized data tables
  • Cross-reference matrix
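
Step 2 (normalize to a common schema) might look roughly like the following; the COMMON_FIELDS keys are assumptions for illustration rather than the skill's real schema.

```python
# Illustrative normalization of heterogeneous extractions (MANIFEST.md,
# KEY-FINDINGS.md, FEATURE-MATRIX.md) into one common row shape so that
# Phase 4 can cross-reference them.
COMMON_FIELDS = ("name", "source_type", "capabilities", "evidence_url", "observed_on", "confidence")

def normalize(extraction: dict, source_type: str) -> dict:
    row = {field: extraction.get(field) for field in COMMON_FIELDS}
    row["source_type"] = source_type
    row["capabilities"] = sorted(set(extraction.get("capabilities", [])))
    return row

rows = [
    normalize({"name": "repo-A", "capabilities": ["secure-aggregation"],
               "evidence_url": "https://github.com/...", "confidence": 0.95}, "repository"),
    normalize({"name": "paper-B", "capabilities": ["secure-aggregation", "dp-noise"],
               "evidence_url": "arXiv:2504.00407", "confidence": 0.85}, "paper"),
]
```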

Phase 4: Intelligence Synthesis

Agent: synthesis-agent
Purpose: Transform extractions into actionable intelligence

Process:

  1. Identify patterns across sources
  2. Surface contradictions and gaps
  3. Generate comparison charts
  4. Formulate recommendations with confidence ceilings
  5. Create decision frameworks

Outputs:

  • COMPREHENSIVE-ANALYSIS.md
  • COMPARISON-CHART.md
  • RECOMMENDATIONS.md with action items
  • GAP-INVENTORY.md with specific missing capabilities
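
As one illustration of step 3, a comparison chart can be pivoted directly from the normalized rows produced in Phase 3. This is a sketch under the assumed row shape from the previous example, not the skill's actual chart generator.

```python
# Hypothetical: pivot normalized rows into a capability-by-source markdown table.
def comparison_chart(rows: list[dict]) -> str:
    sources = [r["name"] for r in rows]
    capabilities = sorted({c for r in rows for c in r["capabilities"]})
    lines = ["| Capability | " + " | ".join(sources) + " |",
             "| --- |" + " --- |" * len(sources)]
    for cap in capabilities:
        cells = ["yes" if cap in r["capabilities"] else "gap" for r in rows]
        lines.append(f"| {cap} | " + " | ".join(cells) + " |")
    return "\n".join(lines)

# print(comparison_chart(rows))  # rows from the Phase 3 sketch above
```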

Phase 5: Gap-Filling Extraction (CRITICAL)

Agent: code-extractor + research-agent
Purpose: Find external repos/research to fill discovered gaps, extract ONLY applicable parts, delete everything else

Process:

  1. Gap Identification: Take gaps from Phase 4 GAP-INVENTORY.md
  2. Source Discovery: Search GitHub/arXiv/papers for solutions to each gap
  3. Pre-Mortem Analysis: Before cloning, assess applicability:
    • Does target project already have this capability?
    • Is the architecture compatible?
    • What's the integration effort?
  4. Selective Clone: Clone promising repos to temp directory
  5. Extraction Criteria: For each cloned repo, evaluate (see the decision sketch after this list):
    KEEP if:
    - Directly fills identified gap
    - Architecture compatible with target
    - Integration effort < gap severity
    
    DELETE if:
    - Target already has equivalent functionality
    - Architecture incompatible
    - Duplicates existing capabilities
    - Integration effort exceeds value
    
  6. Focused Extraction: Extract ONLY applicable patterns:
    • Copy relevant source files to extraction folder
    • Create integration notes per extracted component
    • Map extracted code to target project structure
  7. Cleanup: Delete cloned repos after extraction complete
  8. Extraction Report: Document what was kept, what was deleted, and why
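
A minimal sketch of the KEEP/DELETE filter from step 5, returning a reason string so every decision can be recorded in DELETED-SUMMARY.md. The numeric effort/severity scale is an assumption for illustration.

```python
# Hypothetical decision filter for the KEEP/DELETE criteria above.
def keep_or_delete(candidate: dict, target_capabilities: set[str]) -> tuple[str, str]:
    if candidate["fills_gap"] in target_capabilities:
        return "DELETE", "target already has equivalent functionality"
    if not candidate["architecture_compatible"]:
        return "DELETE", "architecture incompatible with target"
    if candidate["integration_effort"] >= candidate["gap_severity"]:
        return "DELETE", "integration effort exceeds value of the gap filled"
    return "KEEP", f"fills gap '{candidate['fills_gap']}' at acceptable effort"

decision, reason = keep_or_delete(
    {"fills_gap": "secure-aggregation", "architecture_compatible": True,
     "integration_effort": 2, "gap_severity": 4},
    target_capabilities={"dp-noise"},
)
# -> ("KEEP", "fills gap 'secure-aggregation' at acceptable effort")
```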

Outputs:

  • {target}-gap-extractions/ folder with:
    • Extracted source files (only applicable ones)
    • EXTRACTION-MANIFEST.md - what was extracted and why
    • INTEGRATION-GUIDE.md - how to integrate each component
    • DELETED-SUMMARY.md - what was NOT applicable (with reasons)
  • Updated GAP-INVENTORY.md with extraction status

Critical Rules:

  1. NEVER extract without pre-mortem applicability analysis
  2. ALWAYS delete more than you keep (80%+ deletion rate is healthy)
  3. Document every deletion decision for future reference
  4. Map extractions to specific gaps they fill

Phase 6: Delivery and Storage

Agent: delivery-agent
Purpose: Package and preserve intelligence

Process:

  1. Organize outputs in dated folder structure (a pathlib sketch follows this list):
    {target}-recon-{date}/
      EXECUTIVE-SUMMARY.md
      MANIFEST.md
      COMPREHENSIVE-ANALYSIS.md
      COMPARISON-CHART.md
      RECOMMENDATIONS.md
      GAP-INVENTORY.md
      gap-extractions/
        EXTRACTION-MANIFEST.md
        INTEGRATION-GUIDE.md
        DELETED-SUMMARY.md
        extracted-files/
    
  2. Update memory-mcp with key findings
  3. Generate executive summary
  4. Archive raw extractions for audit trail
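
For illustration, the dated folder structure from step 1 can be materialized with pathlib; the function below is a sketch, not the delivery agent's actual packaging code.

```python
# Illustrative only: create the dated reconnaissance package layout shown above.
from datetime import date
from pathlib import Path

def create_recon_package(target: str, base: Path = Path(".")) -> Path:
    root = base / f"{target}-recon-{date.today().isoformat()}"
    docs = ["EXECUTIVE-SUMMARY.md", "MANIFEST.md", "COMPREHENSIVE-ANALYSIS.md",
            "COMPARISON-CHART.md", "RECOMMENDATIONS.md", "GAP-INVENTORY.md"]
    (root / "gap-extractions" / "extracted-files").mkdir(parents=True, exist_ok=True)
    for doc in docs:
        (root / doc).touch(exist_ok=True)
    for doc in ["EXTRACTION-MANIFEST.md", "INTEGRATION-GUIDE.md", "DELETED-SUMMARY.md"]:
        (root / "gap-extractions" / doc).touch(exist_ok=True)
    return root
```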

Outputs:

  • Complete reconnaissance package with gap extractions
  • Memory-mcp entries for future retrieval
  • Executive brief for stakeholders
  • Actionable extraction folder ready for integration

TIER 2: ESSENTIAL SECTIONS

Pattern Recognition

Different reconnaissance targets require different approaches:

Technology Evaluation Pattern

Triggers: "evaluate", "assess", "should we use", "compare frameworks" Characteristics:

  • Focus on implementation feasibility
  • Heavy repository analysis
  • Performance and scalability concerns Key Focus: Code quality, maintenance activity, community health Approach: Repository-first with paper backup for theoretical grounding

Competitive Analysis Pattern

Triggers: "competitor", "market position", "alternative to" Characteristics:

  • Product feature comparison
  • Pricing and positioning analysis
  • Strategic differentiation needs Key Focus: Feature gaps, market positioning, integration capabilities Approach: Product documentation and public materials analysis

Academic Extraction Pattern

Triggers: "paper", "research", "implementation from", "replicate" Characteristics:

  • Method and algorithm focus
  • Code availability critical
  • Reproducibility concerns Key Focus: Implementation details, performance claims, code artifacts Approach: Paper-first with repository search for implementations

Market Landscape Pattern

Triggers: "landscape", "players in", "market for", "opportunity" Characteristics:

  • Broad coverage over depth
  • Trend identification
  • Segment analysis Key Focus: Market size, key players, emerging trends, gaps Approach: Multi-source sweep with structured aggregation
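
A hypothetical router that maps the trigger keywords listed above to a pattern; real pattern selection in the skill may weigh more context than simple keyword matching.

```python
# Sketch of trigger-to-pattern routing; first match wins.
PATTERN_TRIGGERS = {
    "technology-evaluation": ["evaluate", "assess", "should we use", "compare frameworks"],
    "competitive-analysis": ["competitor", "market position", "alternative to"],
    "academic-extraction": ["paper", "research", "implementation from", "replicate"],
    "market-landscape": ["landscape", "players in", "market for", "opportunity"],
}

def select_pattern(request: str) -> str:
    text = request.lower()
    for pattern, triggers in PATTERN_TRIGGERS.items():
        if any(trigger in text for trigger in triggers):
            return pattern
    return "technology-evaluation"   # assumed default when nothing matches

select_pattern("Should we use FATE-LLM for our federated learning project?")
# -> "technology-evaluation"
```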

Advanced Techniques

Multi-Model Reconnaissance

Route different extraction tasks to optimal models:

  • Gemini: Large repository analysis (1M context), Google Search grounding
  • Codex: Automated code extraction and pattern mining
  • Claude: Synthesis and strategic analysis
# Example multi-model routing
bash -lc "gemini --all-files 'extract architecture patterns from this repo'"
bash -lc "codex exec 'analyze dependency graph and identify risks'"

Temporal Layering

Compare sources across time to identify trends:

  1. Historical baseline (2+ years ago)
  2. Recent developments (6-24 months)
  3. Current state (last 6 months)
  4. Emerging signals (preprints, announcements)
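
A small sketch that buckets dated sources into the layers above; the day-based boundaries are approximations of the stated ranges, and emerging signals (preprints, announcements) are tagged by status during discovery rather than by date.

```python
# Illustrative temporal layering by publication or last-commit date.
from datetime import date, timedelta

def temporal_layer(published: date, today: date = date.today()) -> str:
    age = today - published
    if age <= timedelta(days=180):
        return "current state (last 6 months)"
    if age <= timedelta(days=730):
        return "recent developments (6-24 months)"
    return "historical baseline (2+ years ago)"
```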

Confidence Cascading

Layer confidence based on source quality:

witnessed:code-analysis     -> conf:0.95
witnessed:doc-extraction    -> conf:0.90
reported:paper-claims       -> conf:0.85
reported:blog-posts         -> conf:0.70
inferred:trend-analysis     -> conf:0.65
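
The cascade can be read as a ceiling table plus a clamp: a claim's confidence never exceeds the ceiling of its evidence type. A minimal sketch, not the skill's actual implementation:

```python
# Ceiling table mirroring the cascade above, with a clamp helper.
CONFIDENCE_CEILINGS = {
    "witnessed:code-analysis": 0.95,
    "witnessed:doc-extraction": 0.90,
    "reported:paper-claims": 0.85,
    "reported:blog-posts": 0.70,
    "inferred:trend-analysis": 0.65,
}

def apply_ceiling(raw_confidence: float, evidence_type: str) -> float:
    return min(raw_confidence, CONFIDENCE_CEILINGS[evidence_type])

apply_ceiling(0.99, "reported:blog-posts")   # -> 0.70
```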

Common Anti-Patterns

| Anti-Pattern | Problem | Solution |
| --- | --- | --- |
| Single-Source Reliance | Biased or incomplete view | Always cross-reference 3+ sources |
| Recency Blindness | Outdated recommendations | Check last-commit/publish dates prominently |
| Claim Inflation | Overstating source confidence | Apply strict ceiling discipline |
| Feature List Syndrome | Listing without analysis | Always include strategic implications |
| Extraction Without Synthesis | Raw data dumps | Require actionable conclusions |
| Scope Creep | Unbounded reconnaissance | Time-box and scope strictly upfront |

Practical Guidelines

Full vs. Quick Reconnaissance

| Aspect | Quick Recon (2-4h) | Full Recon (1-2 days) |
| --- | --- | --- |
| Sources | 3-5 primary | 10-20 comprehensive |
| Depth | Surface extraction | Deep analysis |
| Output | Single summary | Full package |
| Use When | Time-sensitive decisions | Strategic planning |

Quality Checkpoints

  • Every claim has source link
  • Confidence ceilings applied consistently
  • Date stamps on all documents
  • Recommendations are actionable
  • Gaps explicitly identified

Trade-offs

  • Breadth vs. Depth: Quick decisions need breadth; implementation needs depth
  • Speed vs. Accuracy: Higher confidence requires more cross-referencing
  • Comprehensiveness vs. Actionability: Too much data obscures recommendations

TIER 3: INTEGRATION SECTIONS

Cross-Skill Coordination

Upstream Skills (provide input)

| Skill | When Used Before | What It Provides |
| --- | --- | --- |
| intent-analyzer | Always | Clarified objectives and constraints |
| prompt-architect | Complex requests | Optimized reconnaissance brief |

Downstream Skills (use output)

| Skill | When Used After | What It Does |
| --- | --- | --- |
| literature-synthesis | Multi-paper recon | Synthesizes extracted papers |
| research-driven-planning | Tech evaluation | Plans implementation from findings |
| decision-framework | Strategic recon | Structures decision from intelligence |

Parallel Skills (work together)

| Skill | When Co-invoked | Coordination |
| --- | --- | --- |
| code-extractor | Repository recon | Parallel extraction tasks |
| web-researcher | Market recon | Parallel web searches |

MCP Requirements

Required MCPs

| MCP | Purpose | Why Needed |
| --- | --- | --- |
| memory-mcp | Store findings | Cross-session retrieval of reconnaissance |
| sequential-thinking | Complex analysis | Multi-step reasoning chains |

Optional MCPs

| MCP | Purpose | When to Enable |
| --- | --- | --- |
| playwright | Web scraping | Product page extraction |
| github-mcp | Repo analysis | Deep repository intelligence |

MCP Tagging Protocol

{
  "WHO": "reconnaissance-{session_id}",
  "WHEN": "ISO8601 timestamp",
  "PROJECT": "target-domain",
  "WHY": "reconnaissance|competitive-analysis|tech-eval"
}

Input/Output Contracts

inputs:
  target: string           # Required: domain, technology, or competitor name
  questions: list[string]  # Required: specific questions to answer
  depth: enum[quick, full] # Optional: default 'quick'
  sources: list[string]    # Optional: pre-specified sources to include
  constraints:
    time_limit: string     # Optional: e.g., "4 hours"
    exclusions: list[string] # Optional: sources to skip

outputs:
  reconnaissance_package:
    manifest: file         # Source inventory with metadata
    extractions: list[file] # Per-source structured extractions
    analysis: file         # COMPREHENSIVE-ANALYSIS.md
    comparison: file       # COMPARISON-CHART.md (if multi-source)
    recommendations: file  # Actionable next steps
    executive_summary: string # Brief for stakeholders

  memory_entries:
    key_findings: list     # Stored to memory-mcp
    source_index: object   # For future retrieval

  metadata:
    confidence: float      # Overall confidence (0.0-1.0)
    ceiling: string        # Confidence ceiling type
    duration: string       # Actual time spent
    gaps: list[string]     # Identified information gaps
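
For validation before dispatch, the input contract could be mirrored as typed structures; the TypedDict names below are assumptions inferred from the YAML above, not an official API.

```python
# Illustrative Python mirror of the input contract.
from typing import Literal, TypedDict

class ReconConstraints(TypedDict, total=False):
    time_limit: str                   # e.g. "4 hours"
    exclusions: list[str]             # sources to skip

class ReconInput(TypedDict, total=False):
    target: str                       # required
    questions: list[str]              # required
    depth: Literal["quick", "full"]   # optional, default "quick"
    sources: list[str]                # optional
    constraints: ReconConstraints

request: ReconInput = {
    "target": "FATE-LLM",
    "questions": ["How active is development?"],
    "depth": "full",
}
```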

Recursive Improvement

Self-Application

reconnaissance.improve(reconnaissance)

Improvement Dimensions:

  1. Extraction template coverage
  2. Source discovery efficiency
  3. Synthesis quality metrics
  4. Recommendation actionability

Eval Harness Integration

  • Track source discovery rate
  • Measure extraction completeness
  • Assess recommendation adoption
  • Monitor confidence calibration

Memory Namespace

reconnaissance:{project}:{date}:{type}

TIER 4: CLOSURE SECTIONS

Examples

Example 1: Technology Evaluation

User: "Should we use FATE-LLM for our federated learning project?"

Task("reconnaissance", {
  target: "FATE-LLM",
  questions: [
    "What are the core capabilities?",
    "How active is development?",
    "What are the integration requirements?",
    "Are there production deployments?"
  ],
  depth: "full"
})

Output:
- FATE-LLM MANIFEST.md with architecture analysis
- Dependency and compatibility matrix
- Community health metrics
- Recommendation: Suitable for X, concerns about Y

Example 2: Competitive Analysis

User: "Map the edge inference market for our fog-compute pitch"

Task("reconnaissance", {
  target: "edge inference platforms",
  questions: [
    "Who are the major players?",
    "What are the pricing models?",
    "Where are the gaps we can fill?"
  ],
  depth: "quick"
})

Output:
- Player inventory with positioning
- Feature comparison matrix
- Gap analysis with opportunity sizing
- Pitch angle recommendations

Example 3: Paper Extraction

User: "Extract implementation patterns from these 3 federated learning papers"

Task("reconnaissance", {
  target: "federated learning papers",
  sources: ["arXiv:2504.00407", "arXiv:2411.16086", "arXiv:2503.18986"],
  questions: [
    "What are the key algorithms?",
    "Is code available?",
    "What are the performance claims?"
  ],
  depth: "full"
})

Output:
- Per-paper extraction documents
- Algorithm comparison chart
- Code availability matrix
- Integration roadmap for fog-compute

Troubleshooting

| Problem | Cause | Solution |
| --- | --- | --- |
| No repositories found | Narrow search terms | Broaden keywords, try alternative names |
| Conflicting claims | Multiple sources disagree | Note conflict explicitly, investigate root cause |
| Stale sources only | Domain is new or niche | Expand to preprints, conference proceedings |
| Extraction too shallow | Time constraints | Prioritize highest-impact sources |
| Recommendations unclear | Insufficient synthesis | Re-run synthesis phase with explicit decision criteria |
| Memory storage fails | MCP not configured | Verify memory-mcp connection, use fallback file storage |

Conclusion

Reconnaissance transforms raw information into strategic advantage through systematic discovery, structured extraction, and evidence-grounded synthesis. The skill's value lies not in information gathering alone, but in producing decision-ready intelligence with explicit confidence bounds.

Key success factors:

  1. Scope strictly before starting
  2. Diversify sources systematically
  3. Apply consistent extraction templates
  4. Synthesize with confidence ceilings
  5. Deliver actionable recommendations

Completion Verification

  • Scope defined with clear questions and constraints
  • Sources diversified across 3+ types (repos, papers, products)
  • Structured extractions completed for all sources
  • Cross-reference matrix identifies conflicts and gaps
  • Synthesis includes confidence ceilings on all claims
  • Recommendations are specific and actionable
  • Memory-mcp entries created for key findings
  • Executive summary provides decision-ready brief
  • All outputs date-stamped

Confidence: 0.85 (ceiling: research 0.85) - skill definition based on observed reconnaissance patterns and skill-forge requirements.

GitHub Repository

DNYoussef/context-cascade
Path: skills/research/reconnaissance
