iterate-pr
About
This Claude Skill automates the iterative process of fixing CI failures and addressing review feedback in a pull request. It handles the feedback-fix-push-wait cycle, pushing fixes and re-checking status until all CI checks pass. It requires the GitHub CLI (gh) and prioritizes resolving pending CI checks before acting on other feedback.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/davila7/claude-code-templates
Manual install: git clone https://github.com/davila7/claude-code-templates.git ~/.claude/skills/iterate-pr
Copy and paste the command into Claude Code to install this skill.
Documentation
Iterate on PR Until CI Passes
Continuously iterate on the current branch until all CI checks pass and review feedback is addressed.
Requires: GitHub CLI (gh) authenticated and available.
Process
Step 1: Identify the PR
gh pr view --json number,url,headRefName,baseRefName
If no PR exists for the current branch, stop and inform the user.
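A minimal sketch of this guard, assuming a POSIX shell (gh pr view exits non-zero when the current branch has no associated PR):
# Stop early if the current branch has no pull request
if ! gh pr view --json number,url,headRefName,baseRefName > /dev/null; then
  echo "No pull request found for branch $(git branch --show-current); stopping."
  exit 1
fi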
Step 2: Check CI Status First
Always check CI/GitHub Actions status before looking at review feedback:
gh pr checks --json name,state,bucket,link,workflow
The bucket field categorizes state into: pass, fail, pending, skipping, or cancel.
Important: If any of these checks are still pending, wait before proceeding:
- sentry/sentry-io
- codecov
- cursor/bugbot
- seer
- Any linter or code analysis checks
These bots may post additional feedback comments once their checks complete. Waiting avoids duplicate work.
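As a rough illustration (assuming jq is installed), pending checks can be counted from the same JSON before any feedback is read:
# Count checks whose bucket is still pending
pending=$(gh pr checks --json name,bucket | jq '[.[] | select(.bucket == "pending")] | length')
if [ "${pending:-0}" -gt 0 ]; then
  echo "$pending checks still pending; waiting before gathering feedback."
fi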
Step 3: Gather Review Feedback
Once CI checks have completed (or at least the bot-related checks), gather human and bot feedback:
Review Comments and Status:
gh pr view --json reviews,comments,reviewDecision
Inline Code Review Comments:
gh api repos/{owner}/{repo}/pulls/{pr_number}/comments
PR Conversation Comments (includes bot comments):
gh api repos/{owner}/{repo}/issues/{pr_number}/comments
Look for bot comments from: Sentry, Codecov, Cursor, Bugbot, Seer, and other automated tools.
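One way to surface bot comments, sketched here using gh's built-in --jq filtering and taking the PR number from gh pr view (the 80-character truncation is an arbitrary choice):
# List bot accounts that commented on the PR conversation, with a preview of each comment
pr=$(gh pr view --json number --jq .number)
gh api "repos/{owner}/{repo}/issues/${pr}/comments" \
  --jq '.[] | select(.user.type == "Bot") | "\(.user.login): \(.body | .[0:80])"'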
Step 4: Investigate Failures
For each CI failure, get the actual logs:
# List recent runs for this branch
gh run list --branch $(git branch --show-current) --limit 5 --json databaseId,name,status,conclusion
# View failed logs for a specific run
gh run view <run-id> --log-failed
Do NOT assume what failed based on the check name alone. Always read the actual logs.
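A sketch of pulling logs for the most recent failed run, assuming the failure happened on the current branch:
# Find the newest run on this branch that concluded with a failure, then read its failure logs
run_id=$(gh run list --branch "$(git branch --show-current)" --limit 5 \
  --json databaseId,conclusion --jq '[.[] | select(.conclusion == "failure")][0].databaseId')
if [ -n "$run_id" ] && [ "$run_id" != "null" ]; then
  gh run view "$run_id" --log-failed
fi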
Step 5: Validate Feedback
For each piece of feedback (CI failure or review comment):
- Read the relevant code - Understand the context before making changes
- Verify the issue is real - Not all feedback is correct; reviewers and bots can be wrong
- Check if already addressed - The issue may have been fixed in a subsequent commit
- Skip invalid feedback - If the concern is not legitimate, move on
Step 6: Address Valid Issues
Make minimal, targeted code changes. Only fix what is actually broken.
Step 7: Commit and Push
git add -A
git commit -m "fix: <descriptive message of what was fixed>"
git push
Step 8: Wait for CI
Use the built-in watch functionality:
gh pr checks --watch --interval 30
This waits until all checks complete. Exit code 0 means all passed, exit code 1 means failures.
Alternatively, poll manually if you need more control:
gh pr checks --json name,state,bucket | jq '.[] | select(.bucket != "pass")'
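If manual polling is preferred, a rough loop might look like the following (jq assumed; the 30-second interval is an arbitrary choice):
# Poll until nothing is pending, then report any non-passing checks
while gh pr checks --json bucket | jq -e '[.[] | select(.bucket == "pending")] | length > 0' > /dev/null; do
  sleep 30
done
gh pr checks --json name,bucket,link | jq '.[] | select(.bucket != "pass")'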
Step 9: Repeat
Return to Step 2 if:
- Any CI checks failed
- New review feedback appeared
Continue until all checks pass and no unaddressed feedback remains.
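Put together, the outer loop might look roughly like this sketch; the three-attempt cap mirrors the exit conditions below and is an assumption, not something the skill enforces automatically:
# Repeat the fix/push/wait cycle until checks pass or the attempt budget is spent
attempts=0
until gh pr checks --watch --interval 30; do
  attempts=$((attempts + 1))
  if [ "$attempts" -ge 3 ]; then
    echo "Same failures after $attempts attempts; escalating to the user."
    break
  fi
  # ...Steps 4-7: investigate logs, validate feedback, fix, commit, and push...
done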
Exit Conditions
Success:
- All CI checks are green (bucket: pass)
- No unaddressed human review feedback
Ask for Help:
- Same failure persists after 3 attempts (likely a flaky test or deeper issue)
- Review feedback requires clarification or decision from the user
- CI failure is unrelated to branch changes (infrastructure issue)
Stop Immediately:
- No PR exists for the current branch
- Branch is out of sync and needs rebase (inform user)
Tips
- Use gh pr checks --required to focus only on required checks
- Use gh run view <run-id> --verbose to see all job steps, not just failures
- If a check is from an external service, the link field in the checks JSON provides the URL to investigate