
moai-project-language-initializer

modu-ai

About

This skill handles comprehensive project initialization workflows by managing language selection, agent prompt configuration, and user/team setup. It extracts complex batched question patterns into a reusable component that reduces user interactions while maintaining full functionality. Use this skill when you need to streamline project setup processes involving multiple configuration parameters.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/modu-ai/moai-adk

Git Clone (Alternative)
git clone https://github.com/modu-ai/moai-adk.git ~/.claude/skills/moai-project-language-initializer

Copy and paste this command in Claude Code to install this skill

Documentation

MoAI Project Language & User Initializer

This skill manages the comprehensive project initialization workflow that was previously handled in the 0-project.md command. It extracts the complex batched question patterns into a reusable, efficient skill that reduces user interactions while maintaining full functionality.

Core Responsibility

Handle all project setup workflows including:

  • Language selection (Korean, English, Japanese, Chinese)
  • Agent prompt language configuration (English vs Localized)
  • User nickname collection (max 20 chars)
  • Team mode configuration (GitHub settings, Git workflows)
  • Domain selection processes
  • Report generation settings with token cost warnings
  • MCP server configuration (Figma Access Token setup)

Usage Patterns

First-Time Project Initialization

# Complete setup workflow
Skill("moai-project-language-initializer")

# Executes: Basic Batch → Team Mode Batch (if applicable) → Report Generation → Domain Selection → MCP Configuration (if applicable)

Settings Modification

# Update specific settings
Skill("moai-project-language-initializer", mode="settings")

Team Mode Configuration

# Configure team-specific settings
Skill("moai-project-language-initializer", mode="team_setup")

Language Support Matrix

| Language | Code | Conversation Language | Agent Prompt Language  | Documentation Language |
|----------|------|-----------------------|------------------------|------------------------|
| English  | en   | English               | English (recommended)  | English                |
| Korean   | ko   | 한국어                | English/Locale choice  | 한국어                 |
| Japanese | ja   | 日本語                | English/Locale choice  | 日本語                 |
| Chinese  | zh   | 中文                  | English/Locale choice  | 中文                   |
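The matrix above maps directly onto the `language` section of config.json. A minimal sketch of that mapping (the names `LANGUAGE_MATRIX` and `language_section` are illustrative, not part of the skill's actual API):

```python
# Hypothetical mapping from the language support matrix to config values.
LANGUAGE_MATRIX = {
    "en": "English",
    "ko": "한국어",
    "ja": "日本語",
    "zh": "中文",
}

def language_section(code, agent_prompt=None):
    """Build the `language` section of config.json for a selected language code."""
    if code not in LANGUAGE_MATRIX:
        raise ValueError(f"Unsupported language code: {code!r}")
    return {
        "conversation_language": code,
        "conversation_language_name": LANGUAGE_MATRIX[code],
        # English prompts are the default; other languages may choose localized.
        "agent_prompt_language": agent_prompt or ("english" if code == "en" else "localized"),
    }
```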

Team Mode Workflows

Feature Branch + PR Workflow

  • Best for: Team collaboration, code reviews, audit trails
  • Process: feature/SPEC-{ID} branch → PR review → develop merge
  • Settings: spec_git_workflow: "feature_branch"

Direct Commit to Develop Workflow

  • Best for: Prototypes, individual projects, rapid iteration
  • Process: Direct develop commits (no branches)
  • Settings: spec_git_workflow: "develop_direct"

Per-SPEC Decision Workflow

  • Best for: Flexible teams, mixed project types
  • Process: Ask user for each SPEC
  • Settings: spec_git_workflow: "per_spec"
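The three workflows above differ only in how (or whether) a branch is created per SPEC. A hedged sketch of a dispatcher keyed on `spec_git_workflow` (the function name and signature are illustrative; the skill's internals may differ):

```python
from typing import Optional

def branch_for_spec(workflow: str, spec_id: str) -> Optional[str]:
    """Return the branch to create for a SPEC, or None for direct develop commits."""
    if workflow == "feature_branch":
        return f"feature/SPEC-{spec_id}"  # PR review, then merge to develop
    if workflow == "develop_direct":
        return None  # commit straight to develop, no branch
    if workflow == "per_spec":
        # The real workflow asks the user per SPEC; default to a branch here.
        return f"feature/SPEC-{spec_id}"
    raise ValueError(f"Unknown spec_git_workflow: {workflow!r}")
```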

Token Cost Management

Report Generation Costs

| Setting | Tokens/Report | Reports/Command | Total Session Tokens | Cost Impact   |
|---------|---------------|-----------------|----------------------|---------------|
| Enable  | 50-60         | 3-5             | 150-300              | Full cost     |
| Minimal | 20-30         | 1-2             | 20-60                | 80% reduction |
| Disable | 0             | 0               | 0                    | Zero cost     |
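The session totals above are just the per-report range multiplied by the per-command range. A small sketch of that arithmetic (token counts are the table's rough estimates, not measured values):

```python
# Illustrative cost model matching the report generation table above.
REPORT_SETTINGS = {
    "Enable":  {"tokens_per_report": (50, 60), "reports_per_command": (3, 5)},
    "Minimal": {"tokens_per_report": (20, 30), "reports_per_command": (1, 2)},
    "Disable": {"tokens_per_report": (0, 0),   "reports_per_command": (0, 0)},
}

def session_token_range(setting):
    """Total session tokens as a (min, max) pair for a report setting."""
    s = REPORT_SETTINGS[setting]
    low = s["tokens_per_report"][0] * s["reports_per_command"][0]
    high = s["tokens_per_report"][1] * s["reports_per_command"][1]
    return low, high
```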

Agent Prompt Language Costs

| Setting   | Language                | Token Efficiency   | Cost Impact |
|-----------|-------------------------|--------------------|-------------|
| English   | English                 | Baseline           | Standard    |
| Localized | Korean/Japanese/Chinese | 15-20% more tokens | Higher cost |

Configuration Management

The skill automatically manages .moai/config/config.json persistence:

Basic Configuration Structure

{
  "language": {
    "conversation_language": "ko",
    "conversation_language_name": "한국어", 
    "agent_prompt_language": "localized"
  },
  "user": {
    "nickname": "GOOS",
    "selected_at": "2025-11-05T12:00:00Z"
  }
}

Team Mode Additional Configuration

{
  "github": {
    "auto_delete_branches": true,
    "spec_git_workflow": "feature_branch",
    "auto_delete_branches_rationale": "PR 병합 후 원격 브랜치 자동 정리",
    "spec_git_workflow_rationale": "SPEC마다 feature 브랜치 생성으로 팀 리뷰 가능"
  }
}

Report Generation Configuration

{
  "report_generation": {
    "enabled": true,
    "auto_create": false,
    "user_choice": "Minimal",
    "warn_user": true,
    "configured_at": "2025-11-05T12:00:00Z"
  }
}

Domain Selection Configuration

{
  "stack": {
    "selected_domains": ["frontend", "backend"],
    "domain_selection_date": "2025-11-05T12:00:00Z"
  }
}
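Each of the sections above is persisted with a read-merge-write cycle on config.json. A minimal sketch, assuming a hypothetical `update_config_section` helper (not the skill's actual API):

```python
import json
from pathlib import Path

def update_config_section(config_path: Path, section: str, values: dict) -> dict:
    """Merge `values` into one top-level section of config.json and persist it."""
    config = json.loads(config_path.read_text()) if config_path.exists() else {}
    config.setdefault(section, {}).update(values)
    config_path.parent.mkdir(parents=True, exist_ok=True)
    config_path.write_text(json.dumps(config, ensure_ascii=False, indent=2))
    return config
```

Merging rather than overwriting keeps sections written by earlier batches (e.g. `language`) intact when a later batch writes `github` or `report_generation`.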

Error Handling & Validation

Input Validation

  • Nickname: Max 20 characters, special characters allowed
  • Language selection: Must be from supported languages list
  • Domain selection: Multi-select with skip option
  • Team settings: Boolean and enum validation
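The validation rules above can be sketched as small, composable checks (function names are illustrative, not the skill's actual API):

```python
SUPPORTED_LANGUAGES = {"en", "ko", "ja", "zh"}

def validate_nickname(nickname: str) -> str:
    """Trim and validate a nickname: max 20 characters, special characters allowed."""
    nickname = nickname.strip()
    if not nickname:
        raise ValueError("Nickname must not be empty")
    if len(nickname) > 20:
        raise ValueError("Nickname must be at most 20 characters")
    return nickname

def validate_language(code: str) -> str:
    """Accept only codes from the supported languages list."""
    if code not in SUPPORTED_LANGUAGES:
        raise ValueError(f"Unsupported language code: {code!r}")
    return code
```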

Configuration Validation

  • JSON schema validation for config.json
  • Backward compatibility checks
  • Mode detection validation
  • Required field presence checks

Error Recovery

  • Graceful degradation for missing config sections
  • Default value application for invalid inputs
  • Retry mechanisms for failed batch calls
  • Rollback capability for partial configurations
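Graceful degradation for missing config sections can be as simple as layering user values over defaults. A sketch, assuming illustrative defaults (the `DEFAULTS` values are not the skill's shipped defaults):

```python
# Hypothetical defaults; section names follow the config examples above.
DEFAULTS = {
    "language": {"conversation_language": "en", "agent_prompt_language": "english"},
    "report_generation": {"enabled": True, "user_choice": "Minimal"},
}

def with_defaults(config: dict) -> dict:
    """Fill missing sections and keys with defaults without overwriting user values."""
    merged = {}
    for section, defaults in DEFAULTS.items():
        merged[section] = {**defaults, **config.get(section, {})}
    # Preserve any extra sections the defaults don't know about (e.g. "github").
    for section, values in config.items():
        merged.setdefault(section, values)
    return merged
```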

Integration Points

With Alfred Commands

  • /alfred:0-project: Primary integration point
  • /alfred:1-plan: Uses domain selection for expert activation
  • /alfred:2-run: Applies language settings to sub-agent prompts
  • /alfred:3-sync: Respects report generation settings

With Other Skills

  • moai-alfred-ask-user-questions: Uses TUI survey patterns
  • moai-skill-factory: Can be invoked for skill template application
  • moai-alfred-agent-guide: Provides agent lineup based on domains

Configuration Dependencies

  • .moai/config/config.json: Primary configuration store
  • mode: Determines team vs personal workflow
  • github: Team-specific settings
  • language: Conversation and prompt language settings

Best Practices

For Users

  • Choose English for agent prompts to reduce token costs (15-20% savings)
  • Enable Minimal report generation for cost-effective operation
  • Configure team settings upfront for consistent workflow
  • Select relevant domains for expert agent activation

For Developers

  • Use batch patterns to minimize user interactions
  • Provide clear token cost warnings before expensive operations
  • Validate all inputs before persisting configuration
  • Maintain backward compatibility with existing config files

For Team Collaboration

  • Use Feature Branch + PR workflow for code review
  • Enable auto-delete branches for repository hygiene
  • Select appropriate domains for expert agent routing
  • Configure consistent language settings across team
  • Set up MCP servers with proper authentication (Figma tokens)

MCP Server Configuration

Figma Access Token Setup

When Figma MCP is detected in .claude/settings.json, guide users through token configuration:

Detection Logic

# Check whether Figma MCP is configured
import json
from pathlib import Path

settings_path = Path(".claude/settings.json")
figma_configured = False
if settings_path.exists():
    settings = json.loads(settings_path.read_text())
    figma_configured = "figma" in settings.get("mcpServers", {})

Token Setup Workflow

  1. Verify Figma MCP Installation: Check for Figma in mcpServers configuration
  2. Guide Token Creation: Direct user to Figma developer portal
  3. Secure Token Storage: Configure environment variable or .env file
  4. Validation: Test Figma MCP connectivity

User Guidance Messages

🔐 Figma Access Token Setup Required

Your project has Figma MCP configured, but needs an access token:

Steps:
1. Visit: https://www.figma.com/developers/api#access-tokens
2. Create a new access token
3. Choose storage method:
   - Environment variable (recommended): export FIGMA_ACCESS_TOKEN=your_token
   - .env file: Add FIGMA_ACCESS_TOKEN=your_token to .env
   - Shell profile: Add to ~/.zshrc or ~/.bashrc

4. Restart Claude Code to activate token

Token Validation

import os, urllib.request, urllib.error

def validate_figma_token():
    """Test Figma MCP connectivity by calling Figma's /v1/me endpoint."""
    token = os.environ.get("FIGMA_ACCESS_TOKEN", "")
    if not token:
        return False  # token missing: guide the user through setup first
    request = urllib.request.Request(
        "https://api.figma.com/v1/me", headers={"X-Figma-Token": token}
    )
    try:
        return urllib.request.urlopen(request).status == 200
    except urllib.error.URLError:
        return False

MCP Server Status Checking

Provide users with current MCP server status:

import json
from pathlib import Path

def check_mcp_status(settings_path=Path(".claude/settings.json")):
    """Report which known MCP servers appear in .claude/settings.json."""
    configured = set()
    if settings_path.exists():
        configured = set(json.loads(settings_path.read_text()).get("mcpServers", {}))
    return {name: name in configured for name in ("context7", "figma", "playwright")}

Implementation Notes

This skill extracts and consolidates the complex initialization logic from the original 0-project.md command (~800 lines) into a focused, reusable skill (~500 lines) while maintaining:

  • Full functionality: All original features preserved
  • UX improvements: Batch calling patterns maintained
  • Error handling: Comprehensive validation and recovery
  • Integration: Seamless compatibility with existing workflows
  • Performance: Optimized configuration management

The skill serves as a foundation for project initialization and can be extended with additional configuration patterns as needed.

GitHub Repository

modu-ai/moai-adk
Path: .claude/skills/moai-project-language-initializer
Tags: agentic-ai, agentic-coding, agentic-workflow, claude, claudecode, vibe-coding
