
gemini

iamladi
Updated Yesterday
View on GitHub
Design

About

This Claude Skill runs the Gemini CLI for large-scale code reviews, plan analysis, and processing of very large contexts exceeding 200k tokens. It is designed for comprehensive, multi-file analysis across entire codebases or documentation sets. Developers should use it for tasks requiring complex reasoning and extensive context windows.

Documentation

Gemini Skill Guide

When to Use Gemini

  • Code Review: Comprehensive code reviews across multiple files
  • Plan Review: Analyzing architectural plans, technical specifications, or project roadmaps
  • Big Context Processing: Tasks requiring >200k tokens of context (entire codebases, documentation sets)
  • Multi-file Analysis: Understanding relationships and patterns across many files

Running a Task

  1. Ask the user (via AskUserQuestion), in a single prompt, which model to use. The latest models (all support thinking):

    • gemini-2.5-pro (state-of-the-art, best for complex reasoning in code/math/STEM)
    • gemini-2.5-flash (best price-performance, recommended for most tasks)
    • gemini-2.5-flash-lite (fastest, most cost-efficient, high throughput)
  2. Select the approval mode based on the task:

    • default: Prompt for approval (safest, for read-only analysis)
    • auto_edit: Auto-approve edit tools only (for code reviews with suggestions)
    • yolo: Auto-approve all tools (for trusted automated changes)
  3. Assemble the command with appropriate options (a combined example follows this list):

    • -m, --model <MODEL> - Model selection
    • --approval-mode <default|auto_edit|yolo> - Control tool approval
    • -y, --yolo - Alternative to --approval-mode yolo
    • -i, --prompt-interactive "prompt" - Execute prompt and continue interactively
    • --include-directories <DIR> - Additional directories to include in workspace
    • -s, --sandbox - Run in sandbox mode for isolation
  4. Default to --approval-mode default for read-only analysis tasks unless edits are necessary.

  5. Run the command and capture the output. For non-interactive mode, pass the prompt as a positional argument:

    gemini -m gemini-2.5-flash --approval-mode default "Review this codebase for security issues"
    
  6. For interactive sessions with an initial prompt:

    gemini -m gemini-2.5-flash -i "Review the authentication system" --approval-mode auto_edit
    
  7. After Gemini completes, inform the user: "The Gemini analysis is complete. You can start a new Gemini session for follow-up analysis or continue exploring the findings."
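
As a sketch combining the options from step 3 (the prompt text is illustrative), a sandboxed, read-only run might look like:

# Illustrative command: read-only analysis, isolated in sandbox mode
gemini -m gemini-2.5-pro --approval-mode default -s \
  "Audit the project's dependency manifests for risky or outdated packages"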

Quick Reference

Use case | Approval mode | Key flags
Code review (read-only) | default | -m gemini-2.5-flash --approval-mode default
Code review with suggestions | auto_edit | -m gemini-2.5-flash --approval-mode auto_edit
Big context analysis | default | -m gemini-2.5-pro --approval-mode default
Plan/architecture review | default | -m gemini-2.5-pro --approval-mode default
Automated refactoring | yolo or -y | -m gemini-2.5-flash --approval-mode yolo
High-throughput tasks | default | -m gemini-2.5-flash-lite --approval-mode default
Multi-directory analysis | Depends on task | --include-directories <DIR1> --include-directories <DIR2>
Interactive with initial prompt | Match task needs | -i "prompt" --approval-mode <mode>
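
For instance, the multi-directory row above might be run as follows (the directory paths are placeholders):

# Placeholder directories; adjust to the actual workspace layout
gemini -m gemini-2.5-flash --approval-mode default \
  --include-directories ../shared-lib \
  --include-directories ../infrastructure \
  "Map how this service depends on the code in the included directories"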

Model Selection Guide

Model | Best for | Context window
gemini-2.5-pro | State-of-the-art reasoning, complex code/math/STEM problems | 1M input / 65k output
gemini-2.5-flash | Best price-performance, large-scale processing, agentic tasks | 1M input / 65k output
gemini-2.5-flash-lite | Fastest & most cost-efficient, high throughput | 1M input / 65k output

All Gemini 2.5 models support thinking capabilities for enhanced reasoning and accuracy.

Common Use Cases

Code Review

gemini -m gemini-2.5-flash --approval-mode default \
  "Perform a comprehensive code review focusing on:
   1. Security vulnerabilities
   2. Performance issues
   3. Code quality and maintainability
   4. Best practices violations"

Plan Review

gemini -m gemini-2.5-pro --approval-mode default \
  "Review this architectural plan for:
   1. Scalability concerns
   2. Missing components
   3. Integration challenges
   4. Alternative approaches"

Big Context Analysis

gemini -m gemini-2.5-pro --approval-mode default \
  "Analyze the entire codebase to understand:
   1. Overall architecture
   2. Key patterns and conventions
   3. Potential technical debt
   4. Refactoring opportunities"

Following Up

  • Gemini CLI sessions are typically one-shot or interactive. Unlike Codex, there's no built-in resume functionality.
  • For follow-up analysis, start a new Gemini session with context from previous findings (a minimal sketch follows this list).
  • When proposing follow-up actions, restate the chosen model and approval mode.
  • Use AskUserQuestion after each Gemini command to confirm next steps or gather clarifications.
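
A minimal sketch of that follow-up pattern, assuming the non-interactive response is written to stdout, is to save the first pass to a file and feed it back:

# First pass: capture findings to a file (assumes the response goes to stdout)
gemini -m gemini-2.5-pro --approval-mode default \
  "Review the payment module for security issues" > findings.md

# Follow-up: start a fresh session that carries the saved findings as context
gemini -m gemini-2.5-flash --approval-mode default \
  "Given these earlier findings: $(cat findings.md), propose a prioritized remediation plan"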

Error Handling

  • Stop and report failures whenever gemini --version or a Gemini command exits non-zero (a minimal check is sketched after this list).
  • Request direction before retrying failed commands.
  • Before using high-impact flags (--approval-mode yolo, -y, --sandbox), ask the user for permission using AskUserQuestion unless already granted.
  • When output includes warnings or partial results, summarize them and ask how to adjust using AskUserQuestion.
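
A minimal shell sketch of the first rule, checking the exit status before any analysis runs:

# Stop and report if the CLI itself is unavailable or exits non-zero
if ! gemini --version; then
  echo "gemini --version exited non-zero; stopping and reporting before any analysis" >&2
  exit 1
fi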

Tips for Large Context Processing

  1. Be specific: Provide clear, structured prompts for what to analyze
  2. Use include-directories: Explicitly specify all relevant directories
  3. Choose the right model:
    • Use gemini-2.5-pro for complex reasoning and maximum analysis depth
    • Use gemini-2.5-flash for most code reviews (best price-performance)
    • Use gemini-2.5-flash-lite for high-volume, fast processing
  4. Leverage thinking capabilities: All 2.5 models have built-in thinking for better reasoning
  5. Break down complex tasks: Even with large context, structured analysis is more effective
  6. Save findings: Ask Gemini to output structured reports that can be saved for reference (see the example below)
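
As an example of tip 6 (the filename and section names are illustrative), the report can be redirected to a file for later reference:

# Filename and section names are illustrative, not fixed by the skill
gemini -m gemini-2.5-pro --approval-mode default \
  "Analyze the codebase and produce a Markdown report with sections:
   Summary, Architecture, Key Patterns, Technical Debt, Refactoring Opportunities" > codebase-report.md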

Quick Install

/plugin add https://github.com/iamladi/cautious-computing-machine--sdlc-plugin/tree/main/gemini

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

iamladi/cautious-computing-machine--sdlc-plugin
Path: skills/gemini

Related Skills

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

View skill

Algorithmic Art Generation

Meta

This skill helps developers create algorithmic art using p5.js, focusing on generative art, computational aesthetics, and interactive visualizations. It automatically activates for topics like "generative art" or "p5.js visualization" and guides you through creating unique algorithms with features like seeded randomness, flow fields, and particle systems. Use it when you need to build reproducible, code-driven artistic patterns.

View skill

webapp-testing

Testing

This Claude Skill provides a Playwright-based toolkit for testing local web applications through Python scripts. It enables frontend verification, UI debugging, screenshot capture, and log viewing while managing server lifecycles. Use it for browser automation tasks but run scripts directly rather than reading their source code to avoid context pollution.

View skill

requesting-code-review

Design

This skill dispatches a code-reviewer subagent to analyze code changes against requirements before proceeding. It should be used after completing tasks, implementing major features, or before merging to main. The review helps catch issues early by comparing the current implementation with the original plan.

View skill