
Automated Subdomain Enumeration

macaugh · Updated: Today
Category: Other · Tags: ai, automation

About

This skill automates subdomain discovery using both passive reconnaissance and active techniques such as DNS queries and brute-forcing. It is intended for initial reconnaissance: mapping the attack surface and identifying potentially vulnerable subdomains. The implementation supports multiple languages, including bash, python, and go, for comprehensive enumeration.

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/macaugh/super-rouge-hunter-skills
Git clone (alternative)
git clone https://github.com/macaugh/super-rouge-hunter-skills.git ~/.claude/skills/"Automated Subdomain Enumeration"

Copy and paste the command into Claude Code to install the skill.

Documentation

Automated Subdomain Enumeration

Overview

Subdomain discovery is a critical first step in reconnaissance. Forgotten subdomains often contain vulnerabilities, outdated software, or misconfigurations that attackers can exploit. A systematic approach combines multiple data sources and techniques to build a comprehensive subdomain list.

Core principle: Combine passive reconnaissance (non-intrusive) with active enumeration (DNS queries, brute-forcing) to maximize coverage while respecting scope and legal boundaries.

When to Use

Use this skill when:

  • Starting reconnaissance on a new target domain
  • Building an inventory of an organization's attack surface
  • Hunting for forgotten or shadow IT assets
  • Looking for development, staging, or test environments
  • Preparing for comprehensive vulnerability assessment

Don't use when:

  • Outside authorized scope (get written permission first)
  • Rate limiting or WAF might trigger alerts prematurely
  • Target has strict testing windows you must respect

The Multi-Phase Approach

Phase 1: Passive Reconnaissance

Goal: Gather subdomains without directly touching target infrastructure.

Techniques:

  1. Certificate Transparency Logs

    # Use crt.sh or similar CT log search
    curl -s "https://crt.sh/?q=%25.target.com&output=json" | \
      jq -r '.[].name_value' | \
      sed 's/\*\.//g' | \
      sort -u > ct_subdomains.txt
    
  2. Search Engine Dorking

    # Google dorks for subdomains
    # site:target.com -www
    # Use tools like subfinder, amass with passive sources
    subfinder -d target.com -silent -all -o passive_subdomains.txt
    
  3. DNS Aggregators (a query sketch follows this list)

    • SecurityTrails
    • VirusTotal
    • DNSdumpster
    • Shodan
  4. GitHub/GitLab Code Search

    # Search for domain mentions in code
    # "target.com" site:github.com
    # Look for: config files, API endpoints, documentation
    # One hedged option, assuming an authenticated GitHub CLI:
    gh search code '"target.com"' --limit 50
    
  5. Web Archives

    # Wayback Machine API
    curl -s "http://web.archive.org/cdx/search/cdx?url=*.target.com/*&output=json&fl=original&collapse=urlkey" | \
      jq -r '.[] | .[0]' | \
      grep -oP '(?<=://)[^/]*' | \
      sort -u >> archive_subdomains.txt
    
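Each aggregator above exposes an API. As one illustration, here is a minimal sketch pulling known subdomains from VirusTotal, assuming an API key exported as VT_API_KEY (endpoint per the v3 API; check current docs and rate limits before relying on it):

# Query VirusTotal's v3 API for subdomains of target.com
curl -s -H "x-apikey: $VT_API_KEY" \
  "https://www.virustotal.com/api/v3/domains/target.com/subdomains?limit=40" | \
  jq -r '.data[].id' >> aggregator_subdomains.txt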

Phase 2: Active Enumeration

Goal: Actively query DNS infrastructure to discover additional subdomains.

Techniques:

  1. Brute Force with Wordlists (a wordlist/resolver fetch sketch follows this list)

    # Use tools like puredns, massdns, or dnsx
    puredns bruteforce wordlist.txt target.com \
      --resolvers resolvers.txt \
      --write active_subdomains.txt
    
  2. DNS Zone Transfers (if misconfigured)

    # Test for zone transfer vulnerability
    dig axfr @ns1.target.com target.com
    
  3. Reverse DNS Lookups

    # Get IP ranges, perform reverse lookups
    # Useful for finding additional subdomains on same infrastructure
    # Minimal sketch, assuming mapcidr and dnsx are installed (the CIDR is a placeholder):
    echo "203.0.113.0/24" | mapcidr -silent | dnsx -ptr -resp-only -silent -o reverse_dns.txt
    
  4. Permutation Scanning

    # Generate permutations of found subdomains
    # Example: dev.api.target.com, staging.api.target.com
    altdns -i subdomains.txt -o permuted.txt -w words.txt
    dnsx -l permuted.txt -o verified_permuted.txt
    
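The brute-force step above needs a subdomain wordlist and a validated resolver list. A minimal fetch sketch; the SecLists and trickest/resolvers URLs below are common community sources and should be verified before use:

# Fetch a community subdomain wordlist and a public resolver list
wget -q https://raw.githubusercontent.com/danielmiessler/SecLists/master/Discovery/DNS/subdomains-top1million-5000.txt -O wordlist.txt
wget -q https://raw.githubusercontent.com/trickest/resolvers/main/resolvers.txt -O resolvers.txt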

Phase 3: Validation and Enrichment

Goal: Verify discovered subdomains are live and gather additional context.

  1. Live Host Detection

    # Use httpx to probe for web services
    cat all_subdomains.txt | httpx -silent -o live_hosts.txt
    
    # Get status codes, titles, tech stack
    httpx -l all_subdomains.txt -title -tech-detect -status-code -o enriched.txt
    
  2. Port Scanning

    # Identify services on discovered hosts
    # naabu expects bare hostnames, so strip URL schemes first
    sed -E 's|^https?://||' live_hosts.txt | sort -u | naabu -p 80,443,8080,8443 -o ports.txt
    
  3. Screenshot Capture

    # Visual reconnaissance of web interfaces
    gowitness file -f live_hosts.txt -P screenshots/
    
  4. Technology Fingerprinting

    # Identify web technologies, frameworks, CMS
    # (wappalyzer CLI flags vary by version; httpx -tech-detect above covers similar ground)
    wappalyzer -l live_hosts.txt -o tech_stack.json
    

Phase 4: Organization and Analysis

Goal: Structure discovered data for efficient analysis and next steps.

  1. Categorize Subdomains (a grep-based sketch follows this list)

    • Production vs. development/staging/test
    • Internal-facing vs. external-facing
    • By technology stack or function
    • By sensitivity/criticality
  2. Prioritize Targets

    • High-value targets: admin, api, dev, staging, test, vpn, mail
    • Outdated software versions
    • Unusual ports or services
    • Error messages or debug pages
  3. Document Findings

    # Target: target.com
    
    ## Statistics
    - Total subdomains discovered: X
    - Live hosts: Y
    - Unique IP addresses: Z
    - Technologies identified: [list]
    
    ## High-Priority Targets
    1. dev.api.target.com - Swagger UI exposed, no auth
    2. old-admin.target.com - PHP 5.6, potential RCE
    3. test-payment.target.com - Test environment with prod data?
    
    ## Next Steps
    - Deep enumeration of api.target.com endpoints
    - Vulnerability scan of outdated PHP instance
    - Manual inspection of test environment
    
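One way to bootstrap the categorization and prioritization above, sketched with grep; the keyword lists are illustrative, not exhaustive:

# Bucket live hosts by name pattern: non-production vs. high-value
grep -Ei '(^|[./-])(dev|staging|test|uat|qa)[.-]' live_hosts.txt > nonprod_hosts.txt
grep -Ei '(^|[./-])(admin|api|vpn|mail|sso|portal)[.-]' live_hosts.txt > highvalue_hosts.txt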

Tool Recommendations

All-in-one suites:

  • Amass (comprehensive, integrates many sources)
  • Subfinder (fast passive enumeration)
  • Assetfinder (simple, effective)

Specialized tools:

  • Puredns (DNS validation and brute-forcing)
  • DNSx (DNS toolkit with various features)
  • Httpx (HTTP probing and enrichment)
  • Naabu (fast port scanner)
  • Gowitness (screenshot capture)

Setup example:

# Install Go tools
go install github.com/projectdiscovery/subfinder/v2/cmd/subfinder@latest
go install github.com/projectdiscovery/dnsx/cmd/dnsx@latest
go install github.com/projectdiscovery/httpx/cmd/httpx@latest
go install github.com/projectdiscovery/naabu/v2/cmd/naabu@latest

# Install other dependencies (altdns is published on PyPI as py-altdns)
pip install py-altdns

Automation Script Template

#!/bin/bash
# automated_subdomain_enum.sh

DOMAIN="${1:?Usage: $0 <domain>}"  # exit with a usage message if no domain is given
OUTPUT_DIR="${DOMAIN}_recon_$(date +%Y%m%d_%H%M%S)"
mkdir -p "$OUTPUT_DIR"

echo "[*] Starting subdomain enumeration for $DOMAIN"

# Phase 1: Passive
echo "[*] Phase 1: Passive reconnaissance"
subfinder -d "$DOMAIN" -all -silent -o "$OUTPUT_DIR/subfinder.txt"
assetfinder --subs-only "$DOMAIN" > "$OUTPUT_DIR/assetfinder.txt"
curl -s "https://crt.sh/?q=%25.$DOMAIN&output=json" | jq -r '.[].name_value' | sed 's/\*\.//g' | sort -u > "$OUTPUT_DIR/crtsh.txt"

# Combine and deduplicate
cat "$OUTPUT_DIR"/{subfinder,assetfinder,crtsh}.txt | sort -u > "$OUTPUT_DIR/passive_all.txt"
echo "[+] Found $(wc -l < "$OUTPUT_DIR/passive_all.txt") unique subdomains (passive)"

# Phase 2: Active (optional, comment out if too noisy)
# echo "[*] Phase 2: Active enumeration"
# puredns bruteforce /path/to/wordlist.txt "$DOMAIN" -r /path/to/resolvers.txt -w "$OUTPUT_DIR/bruteforce.txt"

# Phase 3: Validation
echo "[*] Phase 3: Validating and probing subdomains"
cat "$OUTPUT_DIR/passive_all.txt" | dnsx -silent -o "$OUTPUT_DIR/validated.txt"
httpx -l "$OUTPUT_DIR/validated.txt" -title -status-code -tech-detect -silent -o "$OUTPUT_DIR/live_hosts.txt"

echo "[+] Found $(wc -l < "$OUTPUT_DIR/live_hosts.txt") live hosts"
echo "[*] Results saved to $OUTPUT_DIR/"
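
Usage is a single argument, with example.com as a placeholder target:

chmod +x automated_subdomain_enum.sh
./automated_subdomain_enum.sh example.com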

Legal and Ethical Considerations

CRITICAL: Always follow these rules:

  1. Get Written Authorization

    • Never test targets without explicit permission
    • Scope must be clearly defined in writing
    • Understand what techniques are permitted
  2. Respect Rate Limits

    • Don't overwhelm target DNS servers
    • Use reasonable delays between requests
    • Consider impact on production systems
  3. Handle Data Responsibly

    • Discovered subdomains may reveal sensitive information
    • Don't publicly disclose findings without permission
    • Follow responsible disclosure practices
  4. Document Everything

    • Keep records of authorization
    • Log all activities with timestamps
    • Document findings systematically

Common Pitfalls

| Mistake | Impact | Solution |
| --- | --- | --- |
| Only using one data source | Miss many subdomains | Combine multiple techniques |
| Skipping validation | False positives waste time | Always verify with DNS queries |
| Too aggressive active scanning | Detection, blocking, legal issues | Start passive, escalate carefully |
| Not categorizing results | Inefficient analysis | Organize findings by priority |
| Ignoring scope boundaries | Legal/ethical violations | Strictly adhere to authorized scope |

Integration with Other Skills

This skill works with:

  • skills/reconnaissance/web-app-recon - Next step after finding live web apps
  • skills/reconnaissance/service-fingerprinting - Identify services on discovered hosts
  • skills/automation/* - Automate the full recon pipeline
  • skills/documentation/* - Organize findings in knowledge base

Success Metrics

A successful subdomain enumeration should:

  • Discover both obvious and hidden subdomains
  • Identify live hosts and their technologies
  • Categorize findings by risk/priority
  • Provide actionable next steps
  • Complete within scope and authorization
  • Document findings for future reference

References and Further Reading

  • OWASP Testing Guide: Information Gathering
  • "Bug Bounty Bootcamp" by Vickie Li (Chapter on Reconnaissance)
  • ProjectDiscovery blog on subdomain enumeration
  • DNS RFC standards for understanding DNS behavior

GitHub Repository

macaugh/super-rouge-hunter-skills
Path: skills/reconnaissance/automated-subdomain-enum
