code-review
About
This skill performs automated code reviews based on Sentry's engineering practices, ideal for analyzing pull requests or code changes. It checks for security vulnerabilities, performance issues, testing gaps, and design problems using a structured checklist. Developers should use it to get consistent, actionable feedback on code quality.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/davila7/claude-code-templates
Alternative: git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/code-review
Copy and paste the command into Claude Code to install the skill.
Skill Documentation
Sentry Code Review
Follow these guidelines when reviewing code for Sentry projects.
Review Checklist
Identifying Problems
Look for these issues in code changes:
- Runtime errors: Potential exceptions, null pointer issues, out-of-bounds access
- Performance: Unbounded O(n²) operations, N+1 queries, unnecessary allocations (see the sketch after this list)
- Side effects: Unintended behavioral changes affecting other components
- Backwards compatibility: Breaking API changes without migration path
- ORM queries: Complex Django ORM queries with unexpected performance characteristics
- Security vulnerabilities: Injection, XSS, access control gaps, secrets exposure
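To illustrate the performance item above: a common quadratic pattern is a nested membership test that rebuilds a list on every iteration. A minimal sketch, using hypothetical users and orders data:

users = [{"id": i} for i in range(3)]              # hypothetical sample data
orders = [{"user_id": i} for i in range(0, 3, 2)]

# Bad: the inner list is rebuilt for every user, so this is O(n * m)
with_orders = [u for u in users if u["id"] in [o["user_id"] for o in orders]]

# Good: build a set once, then each membership check is O(1)
order_user_ids = {o["user_id"] for o in orders}
with_orders = [u for u in users if u["id"] in order_user_ids]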
Design Assessment
- Do component interactions make logical sense?
- Does the change align with existing project architecture?
- Are there conflicts with current requirements or goals?
Test Coverage
Every PR should have appropriate test coverage:
- Functional tests for business logic
- Integration tests for component interactions
- End-to-end tests for critical user paths
Verify tests cover actual requirements and edge cases. Avoid excessive branching or looping in test code.
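As a sketch of that last point, assuming pytest (which this checklist does not mandate), parameterized cases keep each scenario flat and individually reported instead of hiding them inside a loop:

import pytest

def slugify(raw):  # toy function under test, for illustration only
    return "-".join(raw.strip().lower().split())

# Bad: a loop inside the test obscures which case failed
def test_slugify_loop():
    for raw, expected in [("Hello World", "hello-world"), ("  Mixed CASE ", "mixed-case")]:
        assert slugify(raw) == expected

# Good: each case runs and reports as its own test
@pytest.mark.parametrize("raw, expected", [
    ("Hello World", "hello-world"),
    ("  Mixed CASE ", "mixed-case"),
])
def test_slugify(raw, expected):
    assert slugify(raw) == expected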
Long-Term Impact
Flag for senior engineer review when changes involve:
- Database schema modifications
- API contract changes
- New framework or library adoption
- Performance-critical code paths
- Security-sensitive functionality
Feedback Guidelines
Tone
- Be polite and empathetic
- Provide actionable suggestions, not vague criticism
- Phrase as questions when uncertain: "Have you considered...?"
Approval
- Approve when only minor issues remain
- Don't block PRs for stylistic preferences
- Remember: the goal is risk reduction, not perfect code
Common Patterns to Flag
Python/Django
# Bad: N+1 query
for user in users:
    print(user.profile.name)  # Separate query per user

# Good: Prefetch related objects up front
users = User.objects.prefetch_related('profile')
for user in users:
    print(user.profile.name)  # No extra query per user
TypeScript/React
// Bad: Missing dependency in useEffect
useEffect(() => {
  fetchData(userId);
}, []); // userId not in deps

// Good: Include all dependencies
useEffect(() => {
  fetchData(userId);
}, [userId]);
Security
# Bad: SQL injection risk
cursor.execute(f"SELECT * FROM users WHERE id = {user_id}")

# Good: Parameterized query
cursor.execute("SELECT * FROM users WHERE id = %s", [user_id])
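In Django code specifically, the same risk is usually avoided by going through the ORM, or by passing params to raw SQL instead of formatting the string. A minimal sketch, assuming the stock django.contrib.auth User model (table auth_user):

# Bad: string formatting into raw SQL is still injectable
User.objects.raw(f"SELECT * FROM auth_user WHERE id = {user_id}")

# Good: let the ORM or the params argument handle quoting
User.objects.filter(id=user_id)
User.objects.raw("SELECT * FROM auth_user WHERE id = %s", [user_id])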
References
GitHub Repository
