
codex

iamladi
Updated Today
Tags: Development, ai, automation

About

This Claude Skill handles Codex CLI operations for code analysis, refactoring, and automated editing. It executes commands using specified models and reasoning effort levels while managing sandbox modes for security. The skill includes session resumption capabilities and requires skipping git repo checks by default.

Documentation

Codex Skill Guide

Running a Task

  1. Ask the user (via AskUserQuestion) which model to run (gpt-5-codex or gpt-5) AND which reasoning effort to use (high, medium, or low) in a single prompt with two questions.
  2. Select the sandbox mode required for the task; default to --sandbox read-only unless edits or network access are necessary.
  3. Assemble the command with the appropriate options:
    • -m, --model <MODEL>
    • --config model_reasoning_effort="<high|medium|low>"
    • --sandbox <read-only|workspace-write|danger-full-access>
    • --full-auto
    • -C, --cd <DIR>
    • --skip-git-repo-check
  4. Always use --skip-git-repo-check.
  5. When continuing a previous session, use codex exec --skip-git-repo-check resume --last with the prompt piped via stdin. When resuming, do not pass any configuration flags unless the user explicitly requests them, e.g. by specifying the model or reasoning effort when asking to resume. Resume syntax: echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null. Any flags must be placed between exec and resume.
  6. IMPORTANT: By default, append 2>/dev/null to all codex exec commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
  7. Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
  8. After Codex completes, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
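The steps above can be sketched as a small shell snippet. The model, effort, and prompt values are placeholders for whatever the user selected via AskUserQuestion; the command string is printed first so it can be confirmed before running:

```shell
# Placeholder choices gathered in steps 1-2.
MODEL="gpt-5-codex"   # or gpt-5
EFFORT="medium"       # high | medium | low

# Assemble the command (steps 3-4): always include --skip-git-repo-check.
CMD="codex exec -m $MODEL --config model_reasoning_effort=\"$EFFORT\" --sandbox read-only --skip-git-repo-check"
echo "$CMD"

# To actually run it (step 6 appends 2>/dev/null to hide thinking tokens):
#   echo "Review src/ for unused exports" | $CMD 2>/dev/null
```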

Quick Reference

Use case, sandbox mode, and key flags:

  • Read-only review or analysis: sandbox read-only; flags --sandbox read-only 2>/dev/null
  • Apply local edits: sandbox workspace-write; flags --sandbox workspace-write --full-auto 2>/dev/null
  • Permit network or broad access: sandbox danger-full-access; flags --sandbox danger-full-access --full-auto 2>/dev/null
  • Resume recent session: sandbox inherited from the original session; command echo "prompt" | codex exec --skip-git-repo-check resume --last 2>/dev/null (no configuration flags unless the user requests them)
  • Run from another directory: sandbox matched to task needs; flags -C <DIR> plus other flags 2>/dev/null
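Resume flags are position-sensitive: anything passed must sit between exec and resume. A minimal sketch, with an illustrative prompt:

```shell
# Resume the most recent session; the prompt arrives via stdin.
RESUME="codex exec --skip-git-repo-check resume --last"
echo "$RESUME"

# Actual invocation (stderr suppressed as usual):
#   echo "Also tighten the error messages" | $RESUME 2>/dev/null
```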

Following Up

  • After every codex command, immediately use AskUserQuestion to confirm next steps, collect clarifications, or decide whether to resume with codex exec resume --last.
  • When resuming, pipe the new prompt via stdin: echo "new prompt" | codex exec resume --last 2>/dev/null. The resumed session automatically uses the same model, reasoning effort, and sandbox mode from the original session.
  • Restate the chosen model, reasoning effort, and sandbox mode when proposing follow-up actions.

Error Handling

  • Stop and report failures whenever codex --version or a codex exec command exits non-zero; request direction before retrying.
  • Before using high-impact flags (--full-auto, --sandbox danger-full-access, --skip-git-repo-check), ask the user for permission via AskUserQuestion unless permission was already granted.
  • When output includes warnings or partial results, summarize them and ask how to adjust using AskUserQuestion.
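The non-zero-exit check in the first bullet can be sketched as follows; run_codex is a hypothetical stand-in for a real codex exec pipeline, and here it simply fails so the error branch is visible:

```shell
# Hypothetical wrapper; in practice this would be something like:
#   echo "$1" | codex exec ... 2>/dev/null
run_codex() {
  false   # simulate a failing codex invocation
}

if run_codex "analyze the repo"; then
  echo "codex succeeded; summarize stdout for the user"
else
  # $? still holds the exit status of the failed command here.
  echo "codex exec failed (exit $?); stop and ask the user before retrying"
fi
```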

Quick Install

/plugin add https://github.com/iamladi/cautious-computing-machine--sdlc-plugin/tree/main/codex

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

iamladi/cautious-computing-machine--sdlc-plugin
Path: skills/codex

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

View skill