MCP Hub

patent-claims-analyzer

RobThePCGuy
Updated: Yesterday · 120 views
Category: Design · Tags: ai, automation, design

About

This skill analyzes patent claims against United States Patent and Trademark Office (USPTO) standards, checking in particular for antecedent basis and definiteness under 35 USC 112(b). It automatically reviews claim structure and identifies drafting problems such as terms referenced without prior introduction; use it to validate claims before filing. It runs automated checks across the full claim set to flag subjective language and improper references.

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/RobThePCGuy/Claude-Patent-Creator
Git clone (alternative)
git clone https://github.com/RobThePCGuy/Claude-Patent-Creator.git ~/.claude/skills/patent-claims-analyzer

Copy and paste this command into Claude Code to install the skill.

Documentation

Patent Claims Analyzer Skill

Automated analysis of patent claims for USPTO compliance with 35 USC 112(b) requirements.

When to Use

Invoke this skill when users ask to:

  • Review patent claims for definiteness
  • Check antecedent basis in claims
  • Analyze claim structure
  • Find claim drafting issues
  • Validate claims before filing
  • Fix USPTO office action issues related to claims

What This Skill Does

Performs comprehensive automated analysis:

  1. Antecedent Basis Checking (see the sketch after this list):

    • Finds terms used without prior introduction
    • Detects missing "a/an" before first use
    • Identifies improper "said/the" before first use
    • Tracks term references across claims
  2. Definiteness Analysis (35 USC 112(b)):

    • Identifies subjective/indefinite terms
    • Detects relative terms without reference
    • Finds ambiguous claim language
    • Checks for clear claim boundaries
  3. Claim Structure Validation:

    • Parses independent vs. dependent claims
    • Validates claim dependencies
    • Checks claim numbering
    • Identifies claim type (method, system, etc.)
  4. Issue Categorization:

    • Critical: Must fix before filing
    • Important: May cause rejection
    • Minor: Best practice improvements
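
As an illustrative sketch of the antecedent-basis check in item 1, a simplified standalone version might look like the following. The function name and regex are hypothetical; the skill's real logic lives in claims_analyzer.py and is more thorough.

import re

# Simplified, hypothetical illustration of antecedent-basis checking; the
# plugin's claims_analyzer.py performs the real analysis.
def check_antecedent_basis(claim_text):
    introduced = set()
    issues = []
    # Scan "a/an <term>" introductions and "said/the <term>" references in order.
    for article, term in re.findall(r'\b(a|an|said|the)\s+([a-z]+)', claim_text, re.IGNORECASE):
        article, term = article.lower(), term.lower()
        if article in ('a', 'an'):
            introduced.add(term)
        elif term not in introduced:
            issues.append(f"'{article} {term}' used before '{term}' is introduced with 'a/an'")
    return issues

print(check_antecedent_basis("A system comprising: said processor configured to run a memory check."))
# -> ["'said processor' used before 'processor' is introduced with 'a/an'"]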

Required Data

This skill uses the automated claims analyzer located at: ${CLAUDE_PLUGIN_ROOT}/python/claims_analyzer.py

How to Use

When this skill is invoked:

  1. Load the claims analyzer:

    import os
    import sys

    # Make the plugin's python/ directory importable, then import the analyzer
    # module directly from it.
    sys.path.insert(0, os.path.join(os.environ.get('CLAUDE_PLUGIN_ROOT', '.'), 'python'))
    from claims_analyzer import ClaimsAnalyzer

    analyzer = ClaimsAnalyzer()
    
  2. Analyze claims:

    claims_text = """
    1. A system comprising:
        a processor;
        a memory; and
        said processor configured to...
    """
    
    results = analyzer.analyze_claims(claims_text)
    
  3. Present analysis:

    • Show compliance score (0-100)
    • List issues by severity (critical, important, minor)
    • Provide MPEP citations for each issue
    • Suggest specific fixes
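
A minimal sketch of step 3, assuming results is a dictionary with the keys documented under "Analysis Output Structure" below:

# Minimal sketch of step 3; assumes `results` uses the keys shown in
# "Analysis Output Structure" below.
print(f"Compliance Score: {results['compliance_score']}/100")
print(f"Issues Found: {results['total_issues']} "
      f"({results['critical_issues']} critical, "
      f"{results['important_issues']} important, "
      f"{results['minor_issues']} minor)")

for severity in ('critical', 'important', 'minor'):
    for issue in results['issues']:
        if issue['severity'] == severity:
            print(f"[Claim {issue['claim_number']}] {issue['description']}")
            print(f"  MPEP: {issue['mpep_cite']}")
            print(f"  Fix: {issue['suggestion']}")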

Analysis Output Structure

{
    "claim_count": 20,
    "independent_count": 3,
    "dependent_count": 17,
    "compliance_score": 85,  # 0-100
    "total_issues": 12,
    "critical_issues": 2,
    "important_issues": 7,
    "minor_issues": 3,
    "issues": [
        {
            "category": "antecedent_basis",
            "severity": "critical",
            "claim_number": 1,
            "term": "said processor",
            "description": "Term 'processor' used with 'said' before first introduction",
            "mpep_cite": "MPEP 2173.05(e)",
            "suggestion": "Change 'said processor' to 'the processor' or introduce with 'a processor' first"
        },
        # ... more issues
    ]
}

Common Issues Detected

  1. Antecedent Basis Errors:

    • Using "said/the" before "a/an" introduction
    • Terms appearing in dependent claims not in parent
    • Missing antecedent in claim body
  2. Definiteness Issues (see the sketch after this list):

    • Subjective terms: "substantially", "about", "approximately"
    • Relative terms: "large", "small", "thin"
    • Ambiguous language: "and/or", "optionally"
  3. Structure Issues:

    • Means-plus-function without adequate structure
    • Improper claim dependencies
    • Missing preamble or transition
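
As a rough illustration of how the definiteness checks in item 2 might flag terms, using only the example terms listed above (the analyzer's actual term lists and function names may differ):

import re

# Illustrative term lists drawn from the examples above; hypothetical, not the
# analyzer's actual configuration.
SUBJECTIVE_TERMS = {'substantially', 'about', 'approximately'}
RELATIVE_TERMS = {'large', 'small', 'thin'}
AMBIGUOUS_TERMS = {'and/or', 'optionally'}

def flag_indefinite_terms(claim_text):
    findings = []
    for word in re.findall(r'[a-z/]+', claim_text.lower()):
        if word in SUBJECTIVE_TERMS:
            findings.append((word, 'subjective/indefinite term'))
        elif word in RELATIVE_TERMS:
            findings.append((word, 'relative term without reference'))
        elif word in AMBIGUOUS_TERMS:
            findings.append((word, 'ambiguous language'))
    return findings

print(flag_indefinite_terms("a layer substantially similar to the thin substrate"))
# -> [('substantially', 'subjective/indefinite term'),
#     ('thin', 'relative term without reference')]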

Presentation Format

Present analysis as:

CLAIMS ANALYSIS REPORT
======================

Summary:
- Total Claims: 20 (3 independent, 17 dependent)
- Compliance Score: 85/100
- Issues Found: 12 (2 critical, 7 important, 3 minor)

CRITICAL ISSUES (Must Fix):

[Claim 1] Antecedent Basis Error
  Issue: Term 'processor' used with 'said' before introduction
  Location: "said processor configured to..."
  MPEP: 2173.05(e)
  Fix: Change to 'the processor' or introduce with 'a processor' first

[Claim 5] Indefinite Term
  Issue: Subjective term 'substantially' without definition
  Location: "substantially similar to..."
  MPEP: 2173.05(b)
  Fix: Define 'substantially' in specification or use objective criteria

IMPORTANT ISSUES:
...

MINOR ISSUES:
...

Integration with MPEP

For each issue, the skill can:

  1. Search MPEP for relevant guidance
  2. Provide specific MPEP section citations (see the sketch after this list)
  3. Show examiner guidance on similar issues
  4. Suggest fixes based on USPTO practice
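
A hypothetical sketch of item 2, built only from the MPEP sections cited elsewhere in this document; the skill's actual lookup may instead search the MPEP directly rather than use a fixed map:

# Hypothetical category-to-citation map using the sections cited in this
# document; category names other than 'antecedent_basis' are illustrative.
MPEP_CITATIONS = {
    'antecedent_basis': 'MPEP 2173.05(e)',   # lack of antecedent basis
    'indefinite_term': 'MPEP 2173.05(b)',    # relative terminology
}

def cite_for(category):
    # Fall back to the general definiteness section when a category is unmapped.
    return MPEP_CITATIONS.get(category, 'MPEP 2173 (claim definiteness)')

print(cite_for('antecedent_basis'))  # -> MPEP 2173.05(e)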

Tools Available

  • Read: To load claims from files
  • Bash: To run Python analyzer
  • Write: To save analysis reports

GitHub Repository

RobThePCGuy/Claude-Patent-Creator
Path: skills/patent-claims-analyzer
Tags: bigquery, claude-code, claude-code-plugin, faiss, mcp-server, mpep
