performance-optimization
About
This Claude Skill provides data-driven frontend performance optimization expertise for React applications. It helps developers address slow pages, rendering problems, oversized bundles, and poor Core Web Vitals using measurement-based techniques. Use this skill when working on performance optimization, memoization, code splitting, or analyzing Web Vitals metrics such as LCP, FID, and CLS.
Documentation
Performance Optimization - Data-Driven Frontend Optimization
🎯 Core Philosophy
"Premature optimization is the root of all evil" - Donald Knuth
Optimize only after measuring. Base decisions on data, not intuition.
What This Skill Provides
- Web Vitals-Based Measurement - LCP, FID, CLS improvement techniques
- React Optimization Patterns - Reducing re-renders, proper memoization usage
- Bundle Optimization - Code splitting, tree shaking, lazy loading
- Measurement Tools - Chrome DevTools, Lighthouse, React DevTools
📚 Section-Based Content
This skill is organized into 3 specialized sections for efficient context usage:
📊 Section 1: Web Vitals Optimization
File: references/web-vitals.md
Tokens: ~500
Focus: Google's Core Web Vitals (LCP, FID, CLS; note that Google has since replaced FID with INP in the Core Web Vitals set)
Covers:
- Understanding Core Web Vitals metrics
- Improving LCP (Largest Contentful Paint)
- Reducing FID (First Input Delay)
- Preventing CLS (Cumulative Layout Shift)
- Chrome DevTools profiling
When to load: User mentions LCP, FID, CLS, Web Vitals, loading speed, layout shifts
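As a taste of the measurement techniques this section covers, here is a minimal field-measurement sketch using the open-source web-vitals package (v3 API; v4 later replaced onFID with onINP, and the console.log here is just a stand-in for a real analytics endpoint):

```ts
// Minimal Core Web Vitals field measurement with the web-vitals package (v3).
// Install with `npm install web-vitals`; console.log stands in for analytics.
import { onLCP, onFID, onCLS, type Metric } from 'web-vitals';

function report(metric: Metric) {
  // rating is 'good' | 'needs-improvement' | 'poor' per Google's thresholds.
  console.log(`${metric.name}: ${metric.value.toFixed(2)} (${metric.rating})`);
}

onLCP(report); // Largest Contentful Paint
onFID(report); // First Input Delay
onCLS(report); // Cumulative Layout Shift
```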
⚛️ Section 2: React Optimization
File: references/react-optimization.md
Tokens: ~800
Focus: React-specific performance patterns
Covers:
- React.memo for component memoization
- useMemo for computation caching
- useCallback for function memoization
- List virtualization
- State management optimization
- React DevTools Profiler
When to load: User mentions re-renders, React performance, useMemo, useCallback, React.memo
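To illustrate the memoization patterns above, here is a small sketch (the Row/List components and item shape are invented for the example) combining React.memo, useMemo, and useCallback:

```tsx
// Memoization sketch: Row re-renders only when its props change by reference.
import React, { memo, useCallback, useMemo, useState } from 'react';

type Item = { id: number; label: string };

const Row = memo(function Row({ item, onSelect }: {
  item: Item;
  onSelect: (id: number) => void;
}) {
  return <li onClick={() => onSelect(item.id)}>{item.label}</li>;
});

export function List({ items }: { items: Item[] }) {
  const [query, setQuery] = useState('');

  // useMemo caches the filtered array so it isn't rebuilt on unrelated renders.
  const visible = useMemo(
    () => items.filter((i) => i.label.includes(query)),
    [items, query]
  );

  // useCallback keeps a stable function identity so memo(Row) can skip work.
  const handleSelect = useCallback((id: number) => console.log('selected', id), []);

  return (
    <>
      <input value={query} onChange={(e) => setQuery(e.target.value)} />
      <ul>
        {visible.map((i) => <Row key={i.id} item={i} onSelect={handleSelect} />)}
      </ul>
    </>
  );
}
```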
📦 Section 3: Bundle Optimization
File: references/bundle-optimization.md
Tokens: ~600
Focus: Bundle size reduction and code splitting
Covers:
- Code splitting patterns
- Tree shaking techniques
- Image optimization
- Bundle size measurement
- Dynamic imports
- Lazy loading strategies
When to load: User mentions bundle size, code splitting, lazy loading, tree shaking
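A minimal component-level code-splitting sketch, assuming a hypothetical ./HeavyChart module with a default-exported component:

```tsx
// Code splitting via React.lazy: the chart chunk loads only when rendered.
import React, { lazy, Suspense } from 'react';

// `./HeavyChart` is a placeholder module for this example.
const HeavyChart = lazy(() => import('./HeavyChart'));

export function Dashboard({ showChart }: { showChart: boolean }) {
  return (
    <Suspense fallback={<p>Loading chart…</p>}>
      {showChart && <HeavyChart />}
    </Suspense>
  );
}
```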
🔄 How Section Loading Works
Efficient Context Usage
// Before (monolithic): always load all 3000 tokens
User: "LCP is slow"
→ Load entire skill.md (3000 tokens)

// After (section-based): load only the relevant section
User: "LCP is slow"
→ Load skill.md metadata (200 tokens)
→ Detect "LCP" keyword → match to the web-vitals section
→ Load references/web-vitals.md (500 tokens)
→ Total: 700 tokens (a 77% reduction)
Loading Strategy
- Default: Load metadata + Core Philosophy (~200 tokens)
- Keyword match: Load specific section (~500-800 tokens)
- Multiple keywords: Load multiple sections if needed
- Fallback: If no specific match, suggest relevant section
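As a rough illustration of this strategy, a keyword-to-section router might look like the sketch below (this logic is hypothetical, not the skill's actual implementation):

```ts
// Hypothetical sketch of the keyword → section routing described above.
const sections: Record<string, string[]> = {
  'references/web-vitals.md': ['lcp', 'fid', 'cls', 'web vitals', 'layout shift'],
  'references/react-optimization.md': ['re-render', 'usememo', 'usecallback', 'react.memo'],
  'references/bundle-optimization.md': ['bundle', 'code splitting', 'lazy load', 'tree shaking'],
};

function sectionsFor(query: string): string[] {
  const q = query.toLowerCase();
  const matches = Object.keys(sections).filter((file) =>
    sections[file].some((keyword) => q.includes(keyword))
  );
  // Fallback: with no keyword hit, suggest all sections instead of guessing.
  return matches.length > 0 ? matches : Object.keys(sections);
}

// sectionsFor('my LCP is slow') → ['references/web-vitals.md']
```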
💡 Usage Examples
Example 1: LCP Optimization
User: "The page loads slowly. I want to improve LCP."
Claude loads:
โ skill.md metadata (200 tokens)
โ references/web-vitals.md (500 tokens)
Total: 700 tokens
Provides:
- LCP measurement techniques
- Priority loading strategies
- Preloading techniques
- Image optimization for LCP
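One of those priority-loading techniques, sketched in plain DOM code (/hero.avif is a placeholder path; in real markup this is usually a static link tag in the document head rather than script-injected):

```ts
// Hint the browser about the likely LCP image before layout discovers it.
const preload = document.createElement('link');
preload.rel = 'preload';
preload.as = 'image';
preload.href = '/hero.avif';
preload.setAttribute('fetchpriority', 'high'); // priority hint (Chromium et al.)
document.head.appendChild(preload);
```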
Example 2: React Re-rendering Issues
User: "Components are re-rendering unnecessarily."
Claude loads:
โ skill.md metadata (200 tokens)
โ references/react-optimization.md (800 tokens)
Total: 1000 tokens
Provides:
- React.memo usage
- useMemo patterns
- React DevTools profiling
- State separation strategies
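For the profiling step, React's built-in Profiler component can quantify real render cost before you reach for memoization; a minimal sketch (the 16 ms threshold is an arbitrary example, roughly one 60 fps frame):

```tsx
// Measure render cost with React's <Profiler> before optimizing.
import React, { Profiler, type ProfilerOnRenderCallback, type ReactNode } from 'react';

const onRender: ProfilerOnRenderCallback = (id, phase, actualDuration) => {
  // Flag commits that exceed one 60 fps frame; aggregate these in real code.
  if (actualDuration > 16) {
    console.warn(`${id} ${phase}: ${actualDuration.toFixed(1)}ms`);
  }
};

export function App({ children }: { children: ReactNode }) {
  return (
    <Profiler id="App" onRender={onRender}>
      {children}
    </Profiler>
  );
}
```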
Example 3: Bundle Size Reduction
User: "The bundle size is too large."
Claude loads:
โ skill.md metadata (200 tokens)
โ references/bundle-optimization.md (600 tokens)
Total: 800 tokens
Provides:
- Code splitting techniques
- Tree shaking patterns
- Bundle analyzer usage
- Dynamic import strategies
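An import-on-interaction sketch of the dynamic-import strategy (the ./export-to-pdf module and exportToPdf function are hypothetical placeholders):

```ts
// Defer a heavy dependency until the user actually needs it.
async function handleExportClick(data: unknown): Promise<void> {
  // Bundlers emit this module as a separate chunk, fetched on first click.
  const { exportToPdf } = await import('./export-to-pdf');
  await exportToPdf(data);
}
```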
🎯 Optimization Priorities
- Measure First - Always use Lighthouse and Chrome DevTools before optimizing
- User-Centric Metrics - Focus on Web Vitals (LCP, FID, CLS)
- Progressive Enhancement - Start simple, optimize based on data
- Avoid Premature Optimization - Only optimize problematic areas
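"Measure first" can be as simple as bracketing a suspect code path with the User Timing API and reading the result in the DevTools Performance panel (runExpensiveFilter is a stand-in for the code under investigation):

```ts
// Bracket the suspect code path with User Timing marks, then measure.
function runExpensiveFilter() {
  /* stand-in for the code path under investigation */
}

performance.mark('filter-start');
runExpensiveFilter();
performance.mark('filter-end');

performance.measure('filter', 'filter-start', 'filter-end');
const [entry] = performance.getEntriesByName('filter');
console.log(`filter took ${entry.duration.toFixed(1)}ms`);
```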
✨ Key Takeaways
- Measure First - Always measure before optimizing
- User-Centric Metrics - Use Web Vitals as baseline
- Section-based Learning - Load only relevant knowledge
- Data-Driven Decisions - Optimize based on profiling data
Remember: "Make it work, make it right, make it fast - in that order"
For detailed information, see the specific section files listed above.
Quick Install
/plugin add https://github.com/thkt/claude-config/tree/main/performance-optimization
Copy and paste this command in Claude Code to install this skill.
GitHub Repository
Related Skills
langchain
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
csv-data-summarizer
This skill automatically analyzes CSV files to generate comprehensive statistical summaries and visualizations using Python's pandas and matplotlib/seaborn. It should be triggered whenever a user uploads or references CSV data without prompting for analysis preferences. The tool provides immediate insights into data structure, quality, and patterns through automated analysis and visualization.
hybrid-cloud-networking
This skill configures secure hybrid cloud networking between on-premises infrastructure and cloud platforms like AWS, Azure, and GCP. Use it when connecting data centers to the cloud, building hybrid architectures, or implementing secure cross-premises connectivity. It supports key capabilities such as VPNs and dedicated connections like AWS Direct Connect for high-performance, reliable setups.
llamaindex
LlamaIndex is a data framework for building RAG-powered LLM applications, specializing in document ingestion, indexing, and querying. It provides key features like vector indices, query engines, and agents, and supports over 300 data connectors. Use it for document Q&A, chatbots, and knowledge retrieval when building data-centric applications.
