chrome-devtools
About
The chrome-devtools skill enables browser automation and debugging through executable Puppeteer CLI scripts that output JSON for easy parsing. It supports web scraping, performance analysis, network monitoring, form automation, and JavaScript debugging. Use this skill when you need to automate browser interactions or extract web data programmatically within your Claude Code projects.
Skill Documentation
Chrome DevTools Agent Skill
Browser automation via executable Puppeteer scripts. All scripts output JSON for easy parsing.
Quick Start
Installation
Step 1: Install System Dependencies (Linux/WSL only)
On Linux/WSL, Chrome requires system libraries. Install them first:
cd .claude/skills/chrome-devtools/scripts
./install-deps.sh # Auto-detects OS and installs required libs
Supports: Ubuntu, Debian, Fedora, RHEL, CentOS, Arch, Manjaro
macOS/Windows: Skip this step (dependencies bundled with Chrome)
Step 2: Install Node Dependencies
npm install # Installs puppeteer, debug, yargs
Step 3: Install ImageMagick (Optional, Recommended)
ImageMagick enables automatic screenshot compression to keep files under 5MB:
macOS:
brew install imagemagick
Ubuntu/Debian/WSL:
sudo apt-get install imagemagick
Verify:
magick -version # or: convert -version
Without ImageMagick, screenshots >5MB will not be compressed (may fail to load in Gemini/Claude).
Test
node navigate.js --url https://example.com
# Output: {"success": true, "url": "https://example.com", "title": "Example Domain"}
Available Scripts
All scripts are in .claude/skills/chrome-devtools/scripts/
Script usage: see ./scripts/README.md
Core Automation
- navigate.js - Navigate to URLs
- screenshot.js - Capture screenshots (full page or element)
- click.js - Click elements
- fill.js - Fill form fields
- evaluate.js - Execute JavaScript in page context
Analysis & Monitoring
- snapshot.js - Extract interactive elements with metadata
- console.js - Monitor console messages/errors
- network.js - Track HTTP requests/responses
- performance.js - Measure Core Web Vitals + record traces
Usage Patterns
Single Command
cd .claude/skills/chrome-devtools/scripts
node screenshot.js --url https://example.com --output ./docs/screenshots/page.png
Important: Always save screenshots to the ./docs/screenshots directory.
Automatic Image Compression
Screenshots are automatically compressed if they exceed 5MB to ensure compatibility with Gemini API and Claude Code (which have 5MB limits). This uses ImageMagick internally:
# Default: auto-compress if >5MB
node screenshot.js --url https://example.com --output page.png
# Custom size threshold (e.g., 3MB)
node screenshot.js --url https://example.com --output page.png --max-size 3
# Disable compression
node screenshot.js --url https://example.com --output page.png --no-compress
Compression behavior:
- PNG: Resizes to 90% + quality 85 (or 75% + quality 70 if still too large)
- JPEG: Quality 80 + progressive encoding (or quality 60 if still too large)
- Other formats: Converted to JPEG with compression
- Requires ImageMagick installed (see imagemagick skill)
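For illustration, the PNG path described above roughly corresponds to an ImageMagick invocation like the sketch below; this shows the equivalent command only, not necessarily the exact code inside screenshot.js.
// compress-sketch.mjs - hypothetical illustration of the PNG compression pass
// (resize to 90% + quality 85); screenshot.js may implement this differently.
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const run = promisify(execFile);

// Equivalent shell command: magick page.png -resize 90% -quality 85 page-small.png
await run('magick', ['page.png', '-resize', '90%', '-quality', '85', 'page-small.png']);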
Output includes compression info:
{
"success": true,
"output": "/path/to/page.png",
"compressed": true,
"originalSize": 8388608,
"size": 3145728,
"compressionRatio": "62.50%",
"url": "https://example.com"
}
Chain Commands (reuse browser)
# Keep browser open with --close false
node navigate.js --url https://example.com/login --close false
node fill.js --selector "#email" --value "[email protected]" --close false
node fill.js --selector "#password" --value "secret" --close false
node click.js --selector "button[type=submit]"
Parse JSON Output
# Extract specific fields with jq
node performance.js --url https://example.com | jq '.vitals.LCP'
# Save to file
node network.js --url https://example.com --output /tmp/requests.json
Common Workflows
Web Scraping
node evaluate.js --url https://example.com --script "
Array.from(document.querySelectorAll('.item')).map(el => ({
title: el.querySelector('h2')?.textContent,
link: el.querySelector('a')?.href
}))
" | jq '.result'
Performance Testing
PERF=$(node performance.js --url https://example.com)
LCP=$(echo "$PERF" | jq '.vitals.LCP')
if (( $(echo "$LCP < 2500" | bc -l) )); then
echo "✓ LCP passed: ${LCP}ms"
else
echo "✗ LCP failed: ${LCP}ms"
fi
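If you need a metric the script does not expose, Core Web Vitals can also be collected with plain Puppeteer. The sketch below captures LCP via a PerformanceObserver; it illustrates the general approach and is not the internal implementation of performance.js.
// lcp-sketch.mjs - standalone illustration of capturing LCP with Puppeteer
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();

// Register the observer before navigation so buffered entries are included
await page.evaluateOnNewDocument(() => {
  window.__lcp = 0;
  new PerformanceObserver(list => {
    const entries = list.getEntries();
    window.__lcp = entries[entries.length - 1].startTime;
  }).observe({ type: 'largest-contentful-paint', buffered: true });
});

await page.goto('https://example.com', { waitUntil: 'networkidle2' });
const lcp = await page.evaluate(() => window.__lcp);

console.log(JSON.stringify({ success: true, vitals: { LCP: lcp } }));
await browser.close();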
Form Automation
node fill.js --url https://example.com --selector "#search" --value "query" --close false
node click.js --selector "button[type=submit]"
Error Monitoring
node console.js --url https://example.com --types error,warn --duration 5000 | jq '.messageCount'
Script Options
All scripts support:
- --headless false - Show browser window
- --close false - Keep browser open for chaining
- --timeout 30000 - Set timeout (milliseconds)
- --wait-until networkidle2 - Wait strategy
See ./scripts/README.md for complete options.
Output Format
All scripts output JSON to stdout:
{
"success": true,
"url": "https://example.com",
... // script-specific data
}
Errors go to stderr:
{
"success": false,
"error": "Error message"
}
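This contract also makes the scripts easy to drive from another Node process. A minimal sketch follows; the runScript helper is hypothetical, and it assumes a failed script exits non-zero with only the error JSON on stderr.
// run-script.mjs - hypothetical wrapper around a script invocation
import { execFile } from 'node:child_process';
import { promisify } from 'node:util';

const execFileAsync = promisify(execFile);

async function runScript(script, args = []) {
  try {
    const { stdout } = await execFileAsync('node', [script, ...args], {
      cwd: '.claude/skills/chrome-devtools/scripts',
    });
    return JSON.parse(stdout);       // success JSON on stdout
  } catch (err) {
    return JSON.parse(err.stderr);   // error JSON on stderr (assumed to be the only stderr output)
  }
}

const result = await runScript('navigate.js', ['--url', 'https://example.com']);
console.log(result.success, result.title);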
Finding Elements
Use snapshot.js to discover selectors:
node snapshot.js --url https://example.com | jq '.elements[] | {tagName, text, selector}'
Troubleshooting
Common Errors
"Cannot find package 'puppeteer'"
- Run npm install in the scripts directory
"error while loading shared libraries: libnss3.so" (Linux/WSL)
- Missing system dependencies
- Fix: Run ./install-deps.sh in the scripts directory
- Manual install:
sudo apt-get install -y libnss3 libnspr4 libasound2t64 libatk1.0-0 libatk-bridge2.0-0 libcups2 libdrm2 libxkbcommon0 libxcomposite1 libxdamage1 libxfixes3 libxrandr2 libgbm1
"Failed to launch the browser process"
- Check system dependencies installed (Linux/WSL)
- Verify Chrome downloaded: ls ~/.cache/puppeteer
- Try: npm rebuild then npm install
Chrome not found
- Puppeteer auto-downloads Chrome during npm install
- If the download failed, manually trigger: npx puppeteer browsers install chrome
Script Issues
Element not found
- Get a snapshot first to find the correct selector:
node snapshot.js --url <url>
Script hangs
- Increase timeout: --timeout 60000
- Change wait strategy: --wait-until load or --wait-until domcontentloaded
Blank screenshot
- Wait for page load: --wait-until networkidle2
- Increase timeout: --timeout 30000
Permission denied on scripts
- Make executable: chmod +x *.sh
Screenshot too large (>5MB)
- Install ImageMagick for automatic compression
- Manually set a lower threshold: --max-size 3
- Use JPEG format instead of PNG: --format jpeg --quality 80
- Capture a specific element instead of the full page: --selector .main-content
Compression not working
- Verify ImageMagick is installed: magick -version or convert -version
- Check that the file was actually compressed in the output JSON: "compressed": true
- For very large pages, use --selector to capture only the needed area
Reference Documentation
Detailed guides available in ./references/:
- CDP Domains Reference - 47 Chrome DevTools Protocol domains
- Puppeteer Quick Reference - Complete Puppeteer API patterns
- Performance Analysis Guide - Core Web Vitals optimization
Advanced Usage
Custom Scripts
Create custom scripts using the shared library:
import { getBrowser, getPage, closeBrowser, outputJSON } from './lib/browser.js';
// Your automation logic
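A slightly fuller sketch of a custom script is shown below. The helper signatures are assumptions based on the import above; check ./lib/browser.js for the actual API before relying on them.
// count-links.mjs - hypothetical custom script; helper signatures are assumed
import { getBrowser, getPage, closeBrowser, outputJSON } from './lib/browser.js';

try {
  const browser = await getBrowser();    // assumed: returns a Puppeteer Browser
  const page = await getPage(browser);   // assumed: returns a Page
  await page.goto('https://example.com', { waitUntil: 'networkidle2' });

  const linkCount = await page.$$eval('a', els => els.length);
  outputJSON({ success: true, url: page.url(), linkCount });
} catch (err) {
  outputJSON({ success: false, error: err.message });
} finally {
  await closeBrowser();                  // assumed: releases or closes the shared browser
}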
Direct CDP Access
const client = await page.createCDPSession();
await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });
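For example, CPU and network throttling can be combined over the same CDP session. The standalone sketch below uses standard CDP commands (Emulation.setCPUThrottlingRate, Network.emulateNetworkConditions); the throttling values are illustrative.
// throttle-sketch.mjs - standalone example of CDP-based throttling
import puppeteer from 'puppeteer';

const browser = await puppeteer.launch();
const page = await browser.newPage();
const client = await page.createCDPSession();

// 4x CPU slowdown and roughly "Fast 3G" network conditions
await client.send('Emulation.setCPUThrottlingRate', { rate: 4 });
await client.send('Network.enable');
await client.send('Network.emulateNetworkConditions', {
  offline: false,
  latency: 150,                                  // added round-trip latency in ms
  downloadThroughput: (1.6 * 1024 * 1024) / 8,   // ~1.6 Mbps in bytes/sec
  uploadThroughput: (750 * 1024) / 8,            // ~750 Kbps in bytes/sec
});

await page.goto('https://example.com', { waitUntil: 'networkidle2' });
console.log(JSON.stringify({ success: true, title: await page.title() }));
await browser.close();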
See reference documentation for advanced patterns and complete API coverage.
External Resources
Quick Install
/plugin add https://github.com/Elios-FPT/EliosCodePracticeService/tree/main/chrome-devtools
Copy and paste this command in Claude Code to install the skill.
GitHub Repository