โ† Back to Skills

performance-optimization


About

This Claude Skill provides data-driven frontend performance optimization expertise for React applications and general web performance issues. It helps developers diagnose slow pages and rendering problems, reduce bundle size, and improve Core Web Vitals using measurement-based techniques. Use this skill when working on performance optimization, memoization, code splitting, or analyzing Web Vitals metrics such as LCP, FID, and CLS.

Documentation

Performance Optimization - Data-Driven Frontend Optimization

🎯 Core Philosophy

"Premature optimization is the root of all evil" - Donald Knuth

Optimize after measuring. Make decisions based on data, not feelings.

What This Skill Provides

  1. Web Vitals-Based Measurement - LCP, FID, CLS improvement techniques
  2. React Optimization Patterns - Reducing re-renders, proper memoization usage
  3. Bundle Optimization - Code splitting, tree shaking, lazy loading
  4. Measurement Tools - Chrome DevTools, Lighthouse, React DevTools

📚 Section-Based Content

This skill is organized into 3 specialized sections for efficient context usage:

๐Ÿ” Section 1: Web Vitals Optimization

File: references/web-vitals.md
Tokens: ~500
Focus: Google's Core Web Vitals (LCP, FID, CLS)

Covers:

  • Understanding Core Web Vitals metrics
  • Improving LCP (Largest Contentful Paint)
  • Reducing FID (First Input Delay)
  • Preventing CLS (Cumulative Layout Shift)
  • Chrome DevTools profiling

When to load: User mentions LCP, FID, CLS, Web Vitals, loading speed, layout shifts
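
As a rough illustration of how these metrics can be collected in the field, here is a minimal sketch assuming the open-source web-vitals package (v3-style onLCP/onFID/onCLS API); the /analytics endpoint and reporting shape are placeholders, not part of this skill:

```typescript
// Field-measurement sketch using the web-vitals package (v3-style API).
// The /analytics endpoint is a placeholder -- wire it to your own collector.
import { onLCP, onFID, onCLS, type Metric } from 'web-vitals';

function report(metric: Metric): void {
  const body = JSON.stringify({
    name: metric.name,   // 'LCP' | 'FID' | 'CLS'
    value: metric.value, // milliseconds for LCP/FID, unitless score for CLS
    id: metric.id,       // unique per page load, useful for deduplication
  });
  // sendBeacon survives page unloads better than fetch for analytics pings.
  navigator.sendBeacon?.('/analytics', body);
}

onLCP(report);
onFID(report);
onCLS(report);
```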


โš›๏ธ Section 2: React Optimization

File: references/react-optimization.md
Tokens: ~800
Focus: React-specific performance patterns

Covers:

  • React.memo for component memoization
  • useMemo for computation caching
  • useCallback for function memoization
  • List virtualization
  • State management optimization
  • React DevTools Profiler

When to load: User mentions re-renders, React performance, useMemo, useCallback, React.memo
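
As a compact, illustrative sketch of these three memoization tools working together (the component names and filtering logic are made up for the example):

```tsx
import React, { memo, useCallback, useMemo, useState } from 'react';

type Item = { id: number; label: string };

// React.memo: re-renders only when props change by shallow comparison.
const ItemList = memo(function ItemList(props: {
  items: Item[];
  onSelect: (id: number) => void;
}) {
  return (
    <ul>
      {props.items.map((item) => (
        <li key={item.id} onClick={() => props.onSelect(item.id)}>
          {item.label}
        </li>
      ))}
    </ul>
  );
});

export function SearchableList({ items }: { items: Item[] }) {
  const [query, setQuery] = useState('');

  // useMemo: cache the filtered array so ItemList keeps a stable prop
  // reference while unrelated state changes.
  const visible = useMemo(
    () => items.filter((i) => i.label.includes(query)),
    [items, query],
  );

  // useCallback: keep the handler identity stable so React.memo stays effective.
  const handleSelect = useCallback((id: number) => {
    console.log('selected', id);
  }, []);

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ItemList items={visible} onSelect={handleSelect} />
    </>
  );
}
```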


📦 Section 3: Bundle Optimization

File: references/bundle-optimization.md
Tokens: ~600
Focus: Bundle size reduction and code splitting

Covers:

  • Code splitting patterns
  • Tree shaking techniques
  • Image optimization
  • Bundle size measurement
  • Dynamic imports
  • Lazy loading strategies

When to load: User mentions bundle size, code splitting, lazy loading, tree shaking
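
A minimal sketch of route-level code splitting with React.lazy and a dynamic import (the './SettingsPage' module path is a placeholder):

```tsx
import React, { lazy, Suspense } from 'react';

// React.lazy + dynamic import() puts the settings page in its own chunk,
// so it is only downloaded when the user actually navigates there.
// './SettingsPage' is a placeholder module path.
const SettingsPage = lazy(() => import('./SettingsPage'));

export function App({ showSettings }: { showSettings: boolean }) {
  return (
    <Suspense fallback={<p>Loading…</p>}>
      {showSettings ? <SettingsPage /> : <p>Home</p>}
    </Suspense>
  );
}
```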


🔄 How Section Loading Works

Efficient Context Usage

// Before (Monolithic): Always load all 3000 tokens
User: "LCP is slow"
→ Load entire skill.md (3000 tokens)

// After (Section-based): Load only the relevant section
User: "LCP is slow"
→ Load skill.md metadata (200 tokens)
→ Detect "LCP" keyword → Match to web-vitals section
→ Load references/web-vitals.md (500 tokens)
→ Total: 700 tokens (77% reduction)

Loading Strategy

  1. Default: Load metadata + Core Philosophy (~200 tokens)
  2. Keyword match: Load the matching section (~500-800 tokens); see the routing sketch below
  3. Multiple keywords: Load multiple sections if needed
  4. Fallback: If nothing matches, suggest the most relevant section
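
Conceptually, the keyword matching could look something like the sketch below; this is only an illustration of the routing idea, not the actual loader implementation:

```typescript
// Conceptual sketch of keyword-based section routing (not the real loader).
const SECTION_KEYWORDS: Record<string, string[]> = {
  'references/web-vitals.md': ['lcp', 'fid', 'cls', 'web vitals', 'layout shift', 'loading speed'],
  'references/react-optimization.md': ['re-render', 'usememo', 'usecallback', 'react.memo', 'react performance'],
  'references/bundle-optimization.md': ['bundle size', 'code splitting', 'lazy loading', 'tree shaking'],
};

function sectionsFor(query: string): string[] {
  const q = query.toLowerCase();
  const matches = Object.entries(SECTION_KEYWORDS)
    .filter(([, keywords]) => keywords.some((k) => q.includes(k)))
    .map(([file]) => file);
  // Fallback: if nothing matches, suggest all sections instead of loading them.
  return matches.length > 0 ? matches : Object.keys(SECTION_KEYWORDS);
}

// sectionsFor('LCP is slow') => ['references/web-vitals.md']
```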

💡 Usage Examples

Example 1: LCP Optimization

User: "ใƒšใƒผใ‚ธใฎ่ชญใฟ่พผใฟใŒ้…ใ„ใ€‚LCPใ‚’ๆ”นๅ–„ใ—ใŸใ„"

Claude loads:
✓ skill.md metadata (200 tokens)
✓ references/web-vitals.md (500 tokens)
Total: 700 tokens

Provides:
- LCP measurement techniques
- Priority loading strategies
- Preloading techniques
- Image optimization for LCP
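
On the measurement side, the current LCP candidate can also be inspected directly with the browser's PerformanceObserver API (a sketch; the cast and logging are illustrative):

```typescript
// Log which element the browser currently considers the LCP candidate,
// so optimization effort targets the right image or text block.
const observer = new PerformanceObserver((list) => {
  const entries = list.getEntries();
  // The most recent entry is the current LCP candidate for this page load.
  const lcp = entries[entries.length - 1] as PerformanceEntry & { element?: Element };
  console.log(`LCP: ${Math.round(lcp.startTime)} ms`, lcp.element);
});
observer.observe({ type: 'largest-contentful-paint', buffered: true });
```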

Example 2: React Re-rendering Issues

User: "ใ‚ณใƒณใƒใƒผใƒใƒณใƒˆใŒไธ่ฆใซๅ†ใƒฌใƒณใƒ€ใƒชใƒณใ‚ฐใ•ใ‚Œใ‚‹"

Claude loads:
✓ skill.md metadata (200 tokens)
✓ references/react-optimization.md (800 tokens)
Total: 1000 tokens

Provides:
- React.memo usage
- useMemo patterns
- React DevTools profiling
- State separation strategies
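
One common state-separation strategy, sketched below: keep fast-changing input state in a small leaf component so an expensive sibling only re-renders when a value is actually submitted (all component names are illustrative):

```tsx
import React, { useState } from 'react';

function ExpensiveReport({ query }: { query: string }) {
  // Stand-in for a render-heavy component (large tables, charts, etc.).
  return <section>Results for: {query}</section>;
}

// The input owns its own state, so keystrokes only re-render SearchBox.
function SearchBox({ onSubmit }: { onSubmit: (q: string) => void }) {
  const [query, setQuery] = useState('');
  return (
    <form onSubmit={(e) => { e.preventDefault(); onSubmit(query); }}>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
    </form>
  );
}

export function Dashboard() {
  const [submitted, setSubmitted] = useState('');
  return (
    <>
      <SearchBox onSubmit={setSubmitted} />
      {/* Re-renders only when a search is submitted, not on every keystroke. */}
      <ExpensiveReport query={submitted} />
    </>
  );
}
```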

Example 3: Bundle Size Reduction

User: "ใƒใƒณใƒ‰ใƒซใ‚ตใ‚คใ‚บใŒๅคงใใ™ใŽใ‚‹"

Claude loads:
✓ skill.md metadata (200 tokens)
✓ references/bundle-optimization.md (600 tokens)
Total: 800 tokens

Provides:
- Code splitting techniques
- Tree shaking patterns
- Bundle analyzer usage
- Dynamic import strategies
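
One dynamic-import strategy, sketched under the assumption that a bundle analyzer has flagged a heavy dependency ('chart-lib' is a placeholder module name, not a real package this skill depends on):

```typescript
// Load the heavy dependency only when the user actually opens the chart view,
// so the bundler emits it as a separate chunk outside the main bundle.
// 'chart-lib' is a placeholder for whatever module the analyzer flags.
export async function renderChartOnDemand(container: HTMLElement, data: number[]) {
  const { renderChart } = await import('chart-lib');
  renderChart(container, data);
}
```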

🎯 Optimization Priorities

  1. Measure First - Profile with Lighthouse and Chrome DevTools before optimizing (see the measurement sketch after this list)
  2. User-Centric Metrics - Focus on Web Vitals (LCP, FID, CLS)
  3. Progressive Enhancement - Start simple, optimize based on data
  4. Avoid Premature Optimization - Only optimize areas that measurement shows are slow
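
For step 1, besides Lighthouse and the DevTools Performance panel, a suspected hot path can be bracketed with the browser's User Timing API so the measurement shows up in the profiler timeline (the function and mark names below are illustrative):

```typescript
// Bracket a suspected hot path with User Timing marks; the resulting
// measure appears in the Chrome DevTools Performance panel timeline.
function expensiveFilter(items: number[]): number[] {
  // Stand-in for the code under suspicion.
  return items.filter((n) => n % 2 === 0);
}

const items = Array.from({ length: 1_000_000 }, (_, i) => i);

performance.mark('filter-start');
expensiveFilter(items);
performance.mark('filter-end');
performance.measure('filter-items', 'filter-start', 'filter-end');

const [measure] = performance.getEntriesByName('filter-items');
console.log(`filter-items took ${measure.duration.toFixed(1)} ms`);
```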

✨ Key Takeaways

  1. Measure First - Always measure before optimizing
  2. User-Centric Metrics - Use Web Vitals as baseline
  3. Section-based Learning - Load only relevant knowledge
  4. Data-Driven Decisions - Optimize based on profiling data

Remember: "Make it work, make it right, make it fast - in that order"

For detailed information, see the specific section files listed above.

Quick Install

/plugin add https://github.com/thkt/claude-config/tree/main/performance-optimization

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

thkt/claude-config
Path: skills/performance-optimization
