
moai-alfred-ask-user-questions

modu-ai

About

This Claude skill orchestrates interactive surveys using the AskUserQuestion tool for enterprise decision automation. It supports multi-select options, conditional branching, and error recovery for production workflows. Use it when you need to clarify requirements, make architectural decisions, handle risky operations, or manage complex multi-step user interactions.

Quick Install

Claude Code

Plugin Command (Recommended):

/plugin add https://github.com/modu-ai/moai-adk

Git Clone (Alternative):

git clone https://github.com/modu-ai/moai-adk.git ~/.claude/skills/moai-alfred-ask-user-questions

Copy and paste one of these commands into Claude Code to install this skill.

Documentation

Enterprise Interactive Survey Orchestrator v4.0.0

Skill Metadata

| Field | Value |
| --- | --- |
| Skill Name | moai-alfred-ask-user-questions |
| Version | 4.0.0 Enterprise (2025-11-12) |
| Core Tool | AskUserQuestion (Claude Code built-in) |
| Auto-load | When Alfred detects ambiguity in requests |
| Tier | Alfred (Workflow Orchestration) |
| Allowed Tools | AskUserQuestion |
| Lines of Content | 850+, with 10+ production examples |
| Progressive Disclosure | 3-level (quick-reference, patterns, advanced) |

πŸš€ What It Does (Enterprise Context)

Purpose: Empower Alfred sub-agents to actively conduct enterprise-grade surveys for requirement clarification, architectural decisions, and complex decision automation.

Leverages Claude Code's native AskUserQuestion tool to collect explicit, structured user input, turning vague requests into precise specifications with consistent UX quality across models.

Enterprise Capabilities:

  • βœ… Single-select & multi-select option types (independent or dependent)
  • βœ… 1-4 questions per survey (cognitive load optimization)
  • βœ… 2-4 options per question with trade-off analysis
  • βœ… Automatic "Other" option for custom input validation
  • βœ… Conditional branching based on user answers
  • βœ… Error recovery and retry logic
  • βœ… Integration across all Alfred commands (Plan/Run/Sync)
  • βœ… Multi-language support (user's configured language)
  • βœ… Accessibility-first TUI design
  • βœ… Reduces ambiguity β†’ single interaction vs 3-5 iterations

🎯 When to Use (Decision Framework)

βœ… MANDATORY ASK when user intent is ambiguous:

  1. Vague noun phrases: "Add dashboard", "Refactor auth", "Improve performance"

    • Missing concrete specification or scope
  2. Missing scope definition: No specification of WHERE, WHO, WHAT, HOW, WHEN

    • Could affect 5+ files or multiple modules
  3. Multiple valid paths: β‰₯2 reasonable implementation approaches

    • Different trade-offs (speed vs quality, simple vs comprehensive)
  4. Trade-off decisions: Performance vs reliability, cost vs features

    • No single objectively best answer
  5. Risky operations: Destructive actions (delete, migrate, reset)

    • Explicit informed consent required
  6. Architectural decisions: Technology selection, API design, database choice

    • Long-term impact requires clarification

❌ DON'T ask when:

  • User explicitly specified exact requirements
  • Decision is automatic (no choices, pure routing)
  • Single obvious path exists (no alternatives)
  • Quick yes/no confirmation only (keep it brief)
  • Information already provided in conversation

🎨 Design Principles (Enterprise Standards)

Core Principle: Certainty Over Guessing

Golden Rule: When in doubt, ask the user instead of assuming.

Why:

  • βœ… User sees exactly what you'll do β†’ no surprises
  • βœ… Single interaction vs 3-5 rounds of back-and-forth
  • βœ… Fast β†’ execute with certainty
  • βœ… Reduces "vibe coding" frustration
  • βœ… Builds trust through transparency

Pattern:

Ambiguous request detected
         ↓
Call AskUserQuestion({questions: [...]})
         ↓
User selects from clear options
         ↓
Proceed with confirmed specifications
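
As a rough sketch, the whole loop can be expressed in code. `isAmbiguous()` and `proceed()` below are hypothetical placeholders standing in for Alfred's own detection and execution logic, not part of this skill's API:

// Hypothetical wrapper around the pattern above; isAmbiguous() and
// proceed() are illustrative placeholders, not part of this skill.
async function clarifyThenExecute(request, proceed) {
  if (!isAmbiguous(request)) {
    return proceed(request);          // single obvious path: don't ask
  }

  const answer = await AskUserQuestion({
    questions: [
      {
        question: `How should "${request.summary}" be scoped?`,
        header: "Scope",              // ≤12 chars
        multiSelect: false,
        options: request.candidateScopes.map((s) => ({
          label: s.label,             // 1-5 words
          description: s.tradeoffs    // why you'd pick it
        }))
      }
    ]
  });

  return proceed({ ...request, scope: answer["Scope"] });
}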

πŸ—οΈ Architecture: 3-Level Progressive Disclosure

Level 1: Quick Start (Minimal Invocation)

Single-Select Pattern:

const answer = await AskUserQuestion({
  questions: [
    {
      question: "How should we implement this?",
      header: "Approach",          // max 12 chars
      multiSelect: false,
      options: [
        {
          label: "Option 1",       // 1-5 words
          description: "What it does and why you'd pick it."
        },
        {
          label: "Option 2",
          description: "Alternative with different trade-offs."
        }
      ]
    }
  ]
});

// Returns: { "Approach": "Option 1" }

Multi-Select Pattern (independent features):

const answer = await AskUserQuestion({
  questions: [
    {
      question: "Which features should we enable?",
      header: "Features",
      multiSelect: true,
      options: [
        { label: "Feature A", description: "..." },
        { label: "Feature B", description: "..." },
        { label: "Feature C", description: "..." }
      ]
    }
  ]
});

// Returns: { "Features": ["Feature A", "Feature C"] }
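
Because multi-select answers come back as an array, downstream code should treat the result as a set. A minimal sketch of consuming it (`enableFeature` is a hypothetical downstream call):

// Multi-select answers arrive as an array of option labels.
const selected = answer["Features"] ?? [];

for (const feature of selected) {
  enableFeature(feature);   // hypothetical downstream call
}

if (selected.length === 0) {
  // Nothing selected: confirm intent rather than silently enabling nothing.
}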

Level 2: Enterprise Patterns

Batch Questions (related decisions):

const answer = await AskUserQuestion({
  questions: [
    {
      question: "Which database technology?",
      header: "Database",
      options: [
        { label: "PostgreSQL", description: "Relational, ACID, mature" },
        { label: "MongoDB", description: "Document, flexible schema" }
      ]
    },
    {
      question: "Cache strategy?",
      header: "Caching",
      options: [
        { label: "Redis", description: "In-memory, fast" },
        { label: "Memcached", description: "Distributed cache" }
      ]
    }
  ]
});

Conditional Flow (dependent decisions):

let questions = [
  {
    question: "What type of deployment?",
    header: "Deployment Type",
    options: [
      { label: "Docker", description: "Containerized" },
      { label: "Kubernetes", description: "Orchestrated" },
      { label: "Serverless", description: "Functions-as-a-Service" }
    ]
  }
];

const initialAnswer = await AskUserQuestion({ questions });

// Follow-up based on initial choice
if (initialAnswer["Deployment"] === "Kubernetes") {
  const kubeAnswer = await AskUserQuestion({
    questions: [
      {
        question: "Which cluster provider?",
        header: "K8s Provider",
        options: [
          { label: "AWS EKS", description: "Amazon Elastic Kubernetes" },
          { label: "GCP GKE", description: "Google Kubernetes Engine" },
          { label: "Self-Managed", description: "On-premises cluster" }
        ]
      }
    ]
  });
}

Level 3: Advanced (Error Handling & Validation)

Custom Input Validation ("Other" option):

const answer = await AskUserQuestion({
  questions: [
    {
      question: "Which framework?",
      header: "Framework",
      options: [
        { label: "React", description: "UI library" },
        { label: "Vue", description: "Progressive framework" },
        { label: "Other", description: "Custom framework or library" }
      ]
    }
  ]
});

// Validate custom input ("Other" itself, or any custom value outside the list)
if (!VALID_FRAMEWORKS.includes(answer["Framework"])) {
  const customAnswer = validateCustomInput(answer["Framework"]);
  if (!customAnswer) {
    // Re-ask with guidance
    return retryWithGuidance();
  }
}
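
The snippet above relies on two helpers the skill leaves undefined. Purely as a sketch of what they might look like (the VALID_FRAMEWORKS list and the validation regex are assumptions):

// Illustrative implementations of the helpers used above.
const VALID_FRAMEWORKS = ["React", "Vue", "Svelte", "Angular"];

function validateCustomInput(input) {
  // Accept short, package-like names; reject empty or noisy input.
  const trimmed = (input ?? "").trim();
  return /^[\w@/.-]{2,40}$/.test(trimmed) ? trimmed : null;
}

async function retryWithGuidance() {
  // Re-ask once, stating the expectation in the question itself.
  return AskUserQuestion({
    questions: [
      {
        question: "That framework was not recognized. Pick one or name an exact package:",
        header: "Framework",
        options: [
          { label: "React", description: "UI library" },
          { label: "Vue", description: "Progressive framework" },
          { label: "Other", description: "Type the exact package name" }
        ]
      }
    ]
  });
}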

Error Recovery:

try {
  const answer = await AskUserQuestion({...});
} catch (error) {
  if (error.type === "UserCancelled") {
    // User pressed ESC - use default or abort
    return fallbackToDefault();
  } else if (error.type === "InvalidInput") {
    // Validate and retry
    return retryWithValidation();
  }
}
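
In practice the try/catch is usually wrapped in a bounded retry loop so a cancelled or invalid survey can never loop forever. A sketch, where the error `type` values simply mirror the example above rather than a documented contract:

// Bounded retry wrapper; error.type values are carried over from the
// example above as assumptions, not a documented contract.
async function askWithRetry(params, { maxRetries = 2, fallback = null } = {}) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return await AskUserQuestion(params);
    } catch (error) {
      if (error.type === "UserCancelled") {
        return fallback;              // respect ESC: never re-prompt
      }
      // InvalidInput or unknown: fall through and retry until budget is spent.
    }
  }
  return fallback;
}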

Risky Operation Confirmation:

const answer = await AskUserQuestion({
  questions: [
    {
      question: "This will DELETE 15 branches and merge to main. Continue?",
      header: "Destructive Op",
      options: [
        {
          label: "Proceed",
          description: "Delete branches (CANNOT BE UNDONE)"
        },
        {
          label: "Dry Run",
          description: "Show what would be deleted"
        },
        {
          label: "Cancel",
          description: "Abort entire process"
        }
      ]
    }
  ]
});

if (answer["Destructive Op"] === "Proceed") {
  // Require additional final confirmation
  const final = await AskUserQuestion({
    questions: [
      {
        question: "Type DELETE to confirm irreversible action:",
        header: "Final Confirmation"
      }
    ]
  });
  
  if (final["Final Confirmation"] === "DELETE") {
    executeDeletion();
  }
}

πŸ“‹ Key Constraints (TUI Optimization)

| Constraint | Reason | Example |
| --- | --- | --- |
| 1-4 questions max | Avoids user fatigue | Use follow-up surveys instead |
| 2-4 options per question | Prevents choice overload | Avoid 5+ options (decision paralysis) |
| Header ≤12 chars | Fits the TUI layout | "DB Choice", not "Which Database Technology" |
| Label 1-5 words | Quick scanning | "PostgreSQL", not "SQL Database by PostgreSQL" |
| Description required | Enables informed choice | Always explain trade-offs |
| Auto "Other" option | Always available | System adds it automatically for custom input |
| No HTML/markdown | Plain-text TUI | Use formatting sparingly |
| Language matching | User experience | Always match the configured conversation_language |
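
These limits are easy to check mechanically before calling the tool. A lint-style pre-flight sketch written against the table above (the function itself is illustrative, not part of the skill):

// Illustrative pre-flight check for the TUI constraints above.
function lintSurvey(questions) {
  const errors = [];
  if (questions.length < 1 || questions.length > 4) {
    errors.push(`question count ${questions.length} is outside 1-4`);
  }
  for (const q of questions) {
    if (q.header.length > 12) {
      errors.push(`header "${q.header}" exceeds 12 chars`);
    }
    if (q.options.length < 2 || q.options.length > 4) {
      errors.push(`"${q.header}": ${q.options.length} options is outside 2-4`);
    }
    for (const opt of q.options) {
      if (opt.label.split(/\s+/).length > 5) {
        errors.push(`label "${opt.label}" exceeds 5 words`);
      }
      if (!opt.description) {
        errors.push(`option "${opt.label}" is missing a description`);
      }
    }
  }
  return errors;   // empty array means the survey fits the TUI
}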

πŸ”„ Integration with Alfred Sub-agents

| Sub-agent | When to Ask | Example Trigger | Questions |
| --- | --- | --- | --- |
| spec-builder (/alfred:1-plan) | SPEC title vague, scope undefined | "Add feature" without specifics | "Feature type?", "Scope?", "Users affected?" |
| tdd-implementer (/alfred:2-run) | Implementation approach unclear | Multiple valid implementation paths | "Architecture?", "Libraries?", "Constraints?" |
| doc-syncer (/alfred:3-sync) | Sync scope unclear | Full vs partial sync decision | "Full sync?", "Which files?", "Auto-commit?" |
| qa-validator | Review depth unclear | Quick vs comprehensive check | "Review level?", "Security focus?", "Performance?" |

πŸŽ“ Top 10 Usage Patterns

Pattern 1: Feature Type Clarification

Trigger: "Add dashboard feature" without specifics
Question: "Which dashboard type: Analytics, Admin, or Profile?"
Outcome: Narrowed scope β†’ faster implementation

Pattern 2: Implementation Approach Selection

Trigger: Multiple valid tech choices
Question: "JWT tokens, session-based, or OAuth?"
Outcome: Locked architecture β†’ deterministic development

Pattern 3: Risky Operation Confirmation

Trigger: Destructive action (delete, migrate, reset)
Question: "This will delete X. Proceed?" with retry confirmation
Outcome: Explicit consent + audit trail

Pattern 4: Multi-Feature Selection

Trigger: "Which features to enable?"
Question: Multi-select of independent features
Outcome: Precise feature set β†’ no scope creep

Pattern 5: Sequential Conditional Decisions

Trigger: Dependent choices (Q2 depends on Q1)
Question: First survey β†’ follow-up based on answer
Outcome: Progressive narrowing β†’ precise specification

Pattern 6: Technology Stack Selection

Trigger: "Build with what stack?"
Question: Database, cache, queue, API type
Outcome: Full stack locked β†’ team alignment

Pattern 7: Performance vs Reliability

Trigger: Trade-off between conflicting goals
Question: "Optimize for speed, reliability, or cost?"
Outcome: Explicit requirements β†’ informed trade-offs

Pattern 8: Custom Input Handling

Trigger: "Other" option selected
Question: "Please describe..." with validation
Outcome: Unexpected inputs handled gracefully

Pattern 9: Experience Level Calibration

Trigger: Unclear target audience
Question: "Beginner, intermediate, or advanced?"
Outcome: Content adapted to expertise level

Pattern 10: Approval Workflow

Trigger: Major decision needs consensus
Question: Multi-team approval with options
Outcome: Documented decision with stakeholder buy-in
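
Pattern 10 combines a multi-select and a single-select in one batch. A sketch of how such a survey might look (team names and decision labels are placeholders):

// Hypothetical approval survey: who signs off, and what the decision is.
const approval = await AskUserQuestion({
  questions: [
    {
      question: "Which teams must sign off on this change?",
      header: "Approvers",
      multiSelect: true,
      options: [
        { label: "Platform", description: "Owns the deployment pipeline" },
        { label: "Security", description: "Reviews data handling" },
        { label: "Data", description: "Owns the affected schemas" }
      ]
    },
    {
      question: "Record the decision:",
      header: "Decision",
      options: [
        { label: "Approve", description: "Proceed as proposed" },
        { label: "Revise", description: "Proceed after requested changes" },
        { label: "Reject", description: "Do not proceed" }
      ]
    }
  ]
});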


βœ… Best Practices Summary

DO's

  • Be specific: "Which database type?" not "What should we use?"
  • Provide context: Include file names, scope, or impact
  • Order logically: General β†’ Specific; safest option first
  • Flag risks: Use "NOT RECOMMENDED" or "CAUTION:" prefixes
  • Explain trade-offs: Mention time, resources, complexity, performance
  • Limit options: 2-4 per question; 5+ invites decision paralysis
  • Validate custom input: Check "Other" responses for validity
  • Batch related Q's: Keep related decisions together (max 4 questions)

DON'Ts

  • Overuse questions: Only ask when genuinely ambiguous
  • Too many options: 5+ options cause decision paralysis
  • Vague labels: "Option A", "Use tokens", "Option 2"
  • Skip descriptions: User needs rationale for informed choice
  • Hide trade-offs: Always mention implications and costs
  • Ask for obvious: Single clear path = no question needed
  • Recursive surveys: Avoid asking the same question twice
  • Ignore language: Always match user's configured conversation_language

πŸ”— Related Skills

  • moai-alfred-personas (Communication styles by user level)
  • moai-alfred-spec-authoring (SPEC clarity & structure)
  • moai-foundation-specs (SPEC format & requirements)
  • moai-alfred-language-detection (Conversation language handling)

πŸ“š Quick Reference Card

| Scenario | Action | Key Points |
| --- | --- | --- |
| Vague request | Ask for clarification | 1-4 questions max |
| Multiple approaches | Let user choose | Show trade-offs clearly |
| Risky operation | Get explicit consent | Require final confirmation |
| Feature selection | Use multi-select | Independent options only |
| Dependent decisions | Use sequential surveys | Ask follow-ups based on answers |
| Custom input | Validate carefully | Re-ask if invalid |
| Accessibility | Plain-text UI | No complex formatting |

Token Budget Optimization

  • Average per survey: 500-800 tokens
  • Typical workflow: 1-2 surveys per task (1,000-1,600 tokens total)
  • Benefit: Eliminates 3-5 clarification rounds (3,000-5,000 tokens saved)
  • ROI: net savings of roughly 1,400-4,000 tokens per interaction

For detailed API specifications: reference.md
For real-world examples: examples.md
Last Updated: 2025-11-12
Status: Production Ready (Enterprise v4.0.0)

GitHub Repository

modu-ai/moai-adk
Path: src/moai_adk/templates/.claude/skills/moai-alfred-ask-user-questions
Tags: agentic-ai, agentic-coding, agentic-workflow, claude, claudecode, vibe-coding
