configuring
About
This skill provides unified environment variable loading across AI coding platforms like Claude Code, Claude.ai, and Codex. It automatically detects your environment and loads secrets/config from platform-specific sources and .env files. Use it to manage API keys and configuration consistently across different development environments.
Quick Install
Claude Code
Recommended:
/plugin add https://github.com/majiayu000/claude-skill-registry
Or clone into your skills directory:
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/configuring
Copy and paste either command in Claude Code to install this skill.
Documentation
Configuring
Unified configuration management across AI coding environments. Load environment variables, secrets, and other configuration from any AI coding platform.
Quick Start
import sys
sys.path.insert(0, '/path/to/claude-skills') # or wherever skills are installed
from configuring import get_env, detect_environment
# Get a variable (searches all sources automatically)
token = get_env("TURSO_TOKEN", required=True)
# With default
port = get_env("PORT", default="8080")
# What environment are we in?
env = detect_environment() # "claude.ai", "claude-code-desktop", "codex", "jules", etc.
Supported Environments
| Environment | Config Sources |
|---|---|
| Claude.ai Projects | /mnt/project/*.env, /mnt/project/*-token.txt |
| Claude Code | ~/.claude/settings.json (env block), .claude/settings.json |
| OpenAI Codex | ~/.codex/config.toml, setup script → ~/.bashrc, shell_snapshots/*.sh |
| Jules | Environment settings UI, .env in repo |
| Universal | os.environ, .env, .env.local |
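When behavior needs to differ per platform, the detected environment can drive a branch; a minimal sketch, assuming the returned strings match those shown in Quick Start and using a hypothetical DATA_DIR variable:
import sys
sys.path.insert(0, '/path/to/claude-skills')
from configuring import detect_environment, get_env

env = detect_environment()
if env == "claude.ai":
    # Claude.ai Projects mount credentials and data under /mnt/project/
    data_dir = "/mnt/project"
else:
    # Elsewhere, fall back to an explicit variable with a default
    data_dir = get_env("DATA_DIR", default="./data")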
API Reference
# Core
get_env(key, default=None, *, required=False, validator=None) -> str | None
load_env(path) -> dict[str, str] # Load specific file
load_all(force_reload=False) -> dict # Load all sources
# Utilities
detect_environment() -> str # Current platform
mask_secret(value, show_chars=4) -> str # Safe logging
debug_info() -> dict # Troubleshooting
get_loaded_sources() -> list[str] # What was checked
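The validator hook and mask_secret pair well for startup checks; a minimal sketch, assuming the validator is a callable that takes the value and returns True/False:
import sys
sys.path.insert(0, '/path/to/claude-skills')
from configuring import get_env, mask_secret

# Fail fast on missing or obviously malformed keys
api_key = get_env(
    "EMBEDDING_API_KEY",
    required=True,
    validator=lambda v: v.startswith("sk-"),
)
print("Loaded:", mask_secret(api_key))  # logs only the first few characters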
Credential File Formats
.env files (KEY=value):
TURSO_TOKEN=eyJhbGciOiJFZERTQSI...
EMBEDDING_API_KEY=sk-svcacct-...
Single-value files (*-token.txt, *-key.txt):
eyJhbGciOiJFZERTQSI...
Filename becomes key: turso-token.txt → TURSO_TOKEN
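That mapping amounts to dropping the extension, replacing hyphens with underscores, and upper-casing; a minimal sketch of the convention (an assumption about the rule, not the skill's actual code):
from pathlib import Path

def filename_to_key(path: str) -> str:
    # /mnt/project/turso-token.txt -> TURSO_TOKEN
    return Path(path).stem.replace("-", "_").upper()

assert filename_to_key("/mnt/project/turso-token.txt") == "TURSO_TOKEN"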
Claude Code settings.json:
{
"env": {
"TURSO_TOKEN": "eyJhbGciOiJFZERTQSI..."
}
}
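The env block is plain JSON, so it can also be read directly when debugging outside the skill; a minimal sketch, assuming the file sits at the default ~/.claude/settings.json location:
import json
from pathlib import Path

settings_path = Path.home() / ".claude" / "settings.json"
if settings_path.exists():
    env_block = json.loads(settings_path.read_text()).get("env", {})
    token = env_block.get("TURSO_TOKEN")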
Priority Order
Later sources override earlier ones (see the merge sketch after this list):
- OS environment variables
- Platform-specific sources (detected automatically)
- .env files in cwd
- OS environment variables (again - explicit exports always win)
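Conceptually the merge is a series of dictionary updates in that order; a minimal sketch of the idea, with stand-in dictionaries in place of the real loaders:
import os

# Stand-ins for the platform and .env loaders, for illustration only
platform_values = {"TURSO_TOKEN": "from-platform"}
dotenv_values = {"TURSO_TOKEN": "from-dotenv", "PORT": "8080"}

merged: dict[str, str] = {}
for source in (dict(os.environ), platform_values, dotenv_values, dict(os.environ)):
    merged.update(source)  # later sources override earlier keys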
Debugging
import sys
sys.path.insert(0, '/path/to/claude-skills')
from configuring import debug_info
print(debug_info())
# {'environment': 'claude.ai', 'sources': ['os.environ', 'claude.ai:/mnt/project/'], ...}
CLI:
cd /path/to/claude-skills/configuring
python scripts/getting_env.py # Show debug info
python scripts/getting_env.py TURSO_TOKEN # Get specific key
Migration from api-credentials / getting-env
Replace:
# Old (api-credentials)
from credentials import get_anthropic_api_key
key = get_anthropic_api_key()
# Old (getting-env)
from getting_env import get_env
key = get_env("ANTHROPIC_API_KEY")
# New (configuring)
import sys
sys.path.insert(0, '/path/to/claude-skills')
from configuring import get_env
key = get_env("ANTHROPIC_API_KEY", required=True)
GitHub Repository: https://github.com/majiayu000/claude-skill-registry
Related Skills
sglang
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
evaluating-llms-harness
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
langchain
LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
llamaguard
LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.
