Create Jira Epic
About
This skill provides implementation guidance for creating well-structured Jira epics that organize related stories and tasks into cohesive bodies of work. It is automatically invoked via the `/jira:create epic` command and ensures epics are created with proper scope definition and parent linking capabilities. The skill requires a configured MCP Jira server and appropriate user permissions to function.
Quick Install
Claude Code
Recommended: /plugin add https://github.com/openshift-eng/ai-helpers
Alternative: git clone https://github.com/openshift-eng/ai-helpers.git ~/.claude/skills/Create Jira Epic
Copy and paste one of these commands in Claude Code to install this skill.
Documentation
Create Jira Epic
This skill provides implementation guidance for creating well-structured Jira epics that organize related stories and tasks into cohesive bodies of work.
When to Use This Skill
This skill is automatically invoked by the /jira:create epic command to guide the epic creation process.
Prerequisites
- MCP Jira server configured and accessible
- User has permissions to create issues in the target project
- Understanding of the epic scope and related work
What is an Epic?
An agile epic is:
- A body of work that can be broken down into specific items (stories/tasks)
- Based on the needs/requests of customers or end-users
- An important practice for agile and DevOps teams
- A tool to manage work at a higher level than individual stories
Epic Characteristics
Epics should:
- Be narrower than a market problem or feature
- Be broader than a user story
- Fit inside a time box (quarter/release)
- Contain stories that each fit inside a sprint
- Include acceptance criteria that define when the epic is done
Epic vs Feature vs Story
| Level | Scope | Duration | Example |
|---|---|---|---|
| Feature | Strategic objective, market problem | Multiple releases | "Advanced cluster observability" |
| Epic | Specific capability, narrower than feature | One quarter/release | "Multi-cluster metrics aggregation" |
| Story | Single user-facing functionality | One sprint | "As an SRE, I want to view metrics from multiple clusters in one dashboard" |
Epic Name Field Requirement
IMPORTANT: Many Jira configurations require the epic name field to be set.
- Epic Name = Summary (should be identical)
- This is a separate required field in addition to the summary field
- If epic name is missing, epic creation will fail
MCP Tool Handling:
- Some projects auto-populate epic name from summary
- Some require explicit epic name field
- Always set epic name = summary to ensure compatibility
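To make this rule concrete, here is a minimal sketch of how the epic name payload could be assembled before creation; the `build_epic_fields` helper and the candidate field IDs are illustrative assumptions, not part of the MCP API:

```python
# Hypothetical helper: always mirror the summary into the epic name field.
# These candidate field IDs are common defaults and may differ per Jira instance.
EPIC_NAME_FIELD_CANDIDATES = ["customfield_epicname", "customfield_10011"]

def build_epic_fields(summary: str, epic_name_field: str = "customfield_epicname") -> dict:
    """Return additional_fields with the epic name set equal to the summary."""
    return {epic_name_field: summary}

fields = build_epic_fields("Multi-cluster metrics aggregation")
# fields == {"customfield_epicname": "Multi-cluster metrics aggregation"}
```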
Epic Description Best Practices
Clear Objective
The epic description should:
- State the overall goal or objective
- Explain the value or benefit
- Identify the target users or stakeholders
- Define the scope (what's included and excluded)
Good example:
Enable administrators to manage multiple hosted control plane clusters from a single observability dashboard.
This epic delivers unified metrics aggregation, alerting, and visualization across clusters, reducing the operational overhead of managing multiple cluster environments.
Target users: SREs, Platform administrators
Acceptance Criteria for Epics
Epic-level acceptance criteria define when the epic is complete:
Format:
h2. Epic Acceptance Criteria
* <High-level outcome 1>
* <High-level outcome 2>
* <High-level outcome 3>
Example:
h2. Epic Acceptance Criteria
* Administrators can view aggregated metrics from all clusters in a single dashboard
* Alert rules can be configured to fire based on cross-cluster conditions
* Historical metrics are retained for 30 days across all clusters
* Documentation is complete for multi-cluster setup and configuration
Characteristics:
- Broader than story AC (focus on overall capability, not implementation details)
- Measurable outcomes
- User-observable (not technical implementation)
- Typically 3-6 criteria (if more, consider splitting epic)
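As a rough illustration of these characteristics, a pre-submission check could count the criteria and flag obvious implementation language; the keyword list below is a hypothetical heuristic, not a rule defined by this skill:

```python
# Hypothetical heuristic check for epic-level acceptance criteria.
IMPLEMENTATION_HINTS = ("postgresql", "grpc", "endpoint", "database", "refactor")

def review_epic_ac(criteria: list[str]) -> list[str]:
    """Return advisory warnings; an empty list means no obvious problems."""
    warnings = []
    if not 3 <= len(criteria) <= 6:
        warnings.append(f"{len(criteria)} criteria; aim for 3-6 or consider splitting the epic")
    for c in criteria:
        if any(hint in c.lower() for hint in IMPLEMENTATION_HINTS):
            warnings.append(f"possible implementation detail: {c!r}")
    return warnings
```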
Timeboxing
Include timeframe information:
- Target quarter or release
- Key milestones or dependencies
- Known constraints
Example:
h2. Timeline
* Target: Q1 2025 / OpenShift 4.21
* Milestone 1: Metrics collection infrastructure (Sprint 1-2)
* Milestone 2: Dashboard and visualization (Sprint 3-4)
* Milestone 3: Alerting and historical data (Sprint 5-6)
Parent Link to Feature
If the epic belongs to a larger feature:
- Link to the parent feature using the `--parent` flag
- Explain how this epic contributes to the feature
- Reference the feature key in the description
Example:
h2. Parent Feature
This epic is part of [PROJ-100] "Advanced cluster observability" and specifically addresses the multi-cluster aggregation capability.
Interactive Epic Collection Workflow
When creating an epic, guide the user through:
1. Epic Objective
Prompt: "What is the main objective or goal of this epic? What capability will it deliver?"
Helpful questions:
- What is the overall goal?
- What high-level capability will be delivered?
- Who will benefit from this epic?
Example response:
Enable SREs to manage and monitor multiple ROSA HCP clusters from a unified observability dashboard, reducing the operational complexity of multi-cluster environments.
2. Epic Scope
Prompt: "What is included in this epic? What is explicitly out of scope?"
Helpful questions:
- What functionality is included?
- What related work is NOT part of this epic?
- Where are the boundaries?
Example response:
In scope:
- Metrics aggregation from multiple clusters
- Unified dashboard for cluster health
- Cross-cluster alerting
- 30-day historical metrics retention
Out of scope:
- Log aggregation (separate epic)
- Cost reporting (different feature)
- Support for non-HCP clusters (future work)
3. Epic Acceptance Criteria
Prompt: "What are the high-level outcomes that define this epic as complete?"
Guidance:
- Focus on capabilities, not implementation
- Should be measurable/demonstrable
- Typically 3-6 criteria
Example response:
- SREs can view aggregated metrics from all managed clusters in one dashboard
- Alert rules can be defined for cross-cluster conditions (e.g., "any cluster CPU >80%")
- Historical metrics available for 30 days
- Configuration documented and tested
4. Timeframe
Prompt: "What is the target timeframe for this epic? (quarter, release, or estimated sprints)"
Example responses:
- "Q1 2025"
- "OpenShift 4.21"
- "Estimate 6 sprints"
- "Must complete by March 2025"
5. Parent Feature (Optional)
Prompt: "Is this epic part of a larger feature? If yes, provide the feature key."
If yes:
- Validate parent exists
- Confirm parent is a Feature (not another Epic)
- Link epic to parent
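A sketch of this validation, assuming the MCP server exposes a `jira_get_issue` lookup alongside `jira_create_issue` (the tool name and response shape here are assumptions; verify them against your server):

```python
# Assumed lookup tool; confirm the exact name and response shape in your MCP server.
def validate_parent_feature(parent_key: str) -> None:
    issue = mcp__atlassian__jira_get_issue(issue_key=parent_key)  # assumed API
    if issue is None:
        raise ValueError(f"Parent {parent_key} does not exist")
    issue_type = issue["fields"]["issuetype"]["name"]  # assumed response shape
    if issue_type != "Feature":
        raise ValueError(f"Parent {parent_key} is a {issue_type}, expected a Feature")
```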
Field Validation
Before submitting the epic, validate:
Required Fields
- ✅ Summary is clear and describes the capability
- ✅ Epic name field is set (same as summary)
- ✅ Description includes objective
- ✅ Epic acceptance criteria present
- ✅ Timeframe specified
- ✅ Component is specified (if required by project)
- ✅ Target version is set (if required by project)
Epic Quality
- ✅ Scope is broader than a story, narrower than a feature
- ✅ Can fit in a quarter/release timeframe
- ✅ Has measurable acceptance criteria
- ✅ Clearly identifies target users/stakeholders
- ✅ Defines what's in scope and out of scope
Parent Validation (if specified)
- ✅ Parent issue exists
- ✅ Parent is a Feature (not Epic or Story)
- ✅ Epic contributes to parent's objective
Security
- ✅ No credentials, API keys, or secrets in any field
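One way to implement this gate is a simple pattern scan over every field before submission; the patterns below are a minimal illustrative set, not an exhaustive secret detector:

```python
import re

# Illustrative patterns only; a real scanner should cover far more secret formats.
SENSITIVE_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "bearer token": re.compile(r"(?i)bearer\s+[a-z0-9._\-]{20,}"),
    "password assignment": re.compile(r"(?i)password\s*[:=]\s*\S+"),
}

def scan_for_secrets(*fields: str) -> list[str]:
    """Return the names of any patterns found; non-empty means STOP submission."""
    hits = []
    for text in fields:
        for name, pattern in SENSITIVE_PATTERNS.items():
            if pattern.search(text):
                hits.append(name)
    return hits
```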
MCP Tool Parameters
Basic Epic Creation
mcp__atlassian__jira_create_issue(
project_key="<PROJECT_KEY>",
summary="<epic summary>",
issue_type="Epic",
description="""
<Epic objective and description>
h2. Epic Acceptance Criteria
* <Outcome 1>
* <Outcome 2>
* <Outcome 3>
h2. Scope
h3. In Scope
* <What's included>
h3. Out of Scope
* <What's not included>
h2. Timeline
Target: <quarter/release>
""",
components="<component name>", # if required
additional_fields={
"customfield_epicname": "<epic name>", # if required, same as summary
# Add other project-specific fields
}
)
With Project-Specific Fields (e.g., CNTRLPLANE)
mcp__atlassian__jira_create_issue(
project_key="CNTRLPLANE",
summary="Multi-cluster metrics aggregation for ROSA HCP",
issue_type="Epic",
description="""
Enable SREs to manage and monitor multiple ROSA HCP clusters from a unified observability dashboard, reducing operational complexity of multi-cluster environments.
h2. Epic Acceptance Criteria
* SREs can view aggregated metrics from all managed clusters in one dashboard
* Alert rules can be defined for cross-cluster conditions (e.g., "any cluster CPU >80%")
* Historical metrics retained for 30 days across all clusters
* Multi-cluster setup documented and tested with production workloads
* Performance acceptable with 100+ clusters
h2. Scope
h3. In Scope
* Metrics aggregation across ROSA HCP clusters
* Unified dashboard for cluster health and performance
* Cross-cluster alerting capabilities
* 30-day historical metrics retention
* Configuration via CLI and API
h3. Out of Scope
* Log aggregation (separate epic CNTRLPLANE-200)
* Cost reporting (different feature)
* Support for standalone OCP clusters (future consideration)
* Integration with external monitoring systems (post-MVP)
h2. Timeline
* Target: Q1 2025 / OpenShift 4.21
* Estimated: 6 sprints
* Key milestone: MVP dashboard by end of Sprint 3
h2. Target Users
* SREs managing multiple ROSA HCP clusters
* Platform administrators
* Operations teams
h2. Dependencies
* Requires centralized metrics storage infrastructure ([CNTRLPLANE-150])
* Depends on cluster registration API ([CNTRLPLANE-175])
""",
components="HyperShift / ROSA",
additional_fields={
"customfield_12319940": "openshift-4.21", # target version
"customfield_epicname": "Multi-cluster metrics aggregation for ROSA HCP", # epic name
"labels": ["ai-generated-jira", "observability"],
"security": {"name": "Red Hat Employee"}
}
)
With Parent Feature
mcp__atlassian__jira_create_issue(
project_key="MYPROJECT",
summary="Multi-cluster monitoring dashboard",
issue_type="Epic",
description="<epic content>",
additional_fields={
"customfield_epicname": "Multi-cluster monitoring dashboard",
"parent": {"key": "MYPROJECT-100"} # link to parent feature
}
)
Jira Description Formatting
Use Jira's native formatting (Wiki markup):
Epic Template Format
<Epic objective - what capability will be delivered and why it matters>
h2. Epic Acceptance Criteria
* <High-level outcome 1>
* <High-level outcome 2>
* <High-level outcome 3>
h2. Scope
h3. In Scope
* <Functionality included in this epic>
* <Capabilities to be delivered>
h3. Out of Scope
* <Related work NOT in this epic>
* <Future considerations>
h2. Timeline
* Target: <quarter or release>
* Estimated: <sprints>
* Key milestones: <major deliverables>
h2. Target Users
* <User group 1>
* <User group 2>
h2. Dependencies (optional)
* [PROJ-XXX] - <dependency description>
h2. Parent Feature (if applicable)
This epic is part of [PROJ-YYY] "<feature name>" and addresses <how this epic contributes>.
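The template above can be filled mechanically from the answers collected during the interactive workflow; a minimal sketch (the parameter names are illustrative):

```python
def build_epic_description(objective, criteria, in_scope, out_scope, timeline):
    """Assemble a Jira wiki-markup description from collected answers."""
    bullets = lambda items: "\n".join(f"* {item}" for item in items)
    return (
        f"{objective}\n"
        "h2. Epic Acceptance Criteria\n" + bullets(criteria) + "\n"
        "h2. Scope\n"
        "h3. In Scope\n" + bullets(in_scope) + "\n"
        "h3. Out of Scope\n" + bullets(out_scope) + "\n"
        "h2. Timeline\n" + bullets(timeline)
    )
```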
Error Handling
Epic Name Field Missing
Scenario: Epic creation fails due to missing epic name field.
Action:
- Check if project requires epic name field
- If required, set `customfield_epicname` = summary
- Retry creation
Note: Field ID may vary by Jira instance:
- `customfield_epicname` (common)
- `customfield_10011` (numbered field)
- Check project configuration if standard field names don't work
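Because the field ID varies, the retry can walk a candidate list until creation succeeds; this sketch assumes the MCP tool raises an exception when the required field is missing:

```python
# Assumption: a failed create raises an exception naming the missing field.
def create_epic_with_name_fallback(project_key, summary, description, candidates):
    last_error = None
    for field_id in candidates:  # e.g. ["customfield_epicname", "customfield_10011"]
        try:
            return mcp__atlassian__jira_create_issue(
                project_key=project_key,
                summary=summary,
                issue_type="Epic",
                description=description,
                additional_fields={field_id: summary},
            )
        except Exception as err:  # assumed error surface; inspect before retrying
            last_error = err
    raise last_error
```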
Epic Too Large
Scenario: Epic seems too large (would take >1 quarter).
Action:
- Suggest splitting into multiple epics
- Identify natural split points
- Consider if this should be a Feature instead
Example:
This epic seems quite large (estimated 12+ sprints). Consider:
Option 1: Split into multiple epics
- Epic 1: Core metrics aggregation (sprints 1-6)
- Epic 2: Advanced dashboards and alerting (sprints 7-12)
Option 2: Create as Feature instead
- This might be better as a Feature with multiple child Epics
Which would you prefer?
Epic Too Small
Scenario: Epic could be completed in one sprint.
Action:
- Suggest creating as a Story instead
- Explain epic should be multi-sprint effort
Example:
This epic seems small enough to be a single Story (completable in one sprint).
Epics should typically:
- Span multiple sprints (2-8 sprints)
- Contain multiple stories
- Deliver a cohesive capability
Would you like to create this as a Story instead? (yes/no)
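Both sizing checks reduce to a simple threshold rule; a sketch (the 2-8 sprint band comes from the guidance above, while the function itself is illustrative):

```python
def assess_epic_size(estimated_sprints: int) -> str:
    """Advise on issue type based on the 2-8 sprint guidance for epics."""
    if estimated_sprints < 2:
        return "Too small for an epic; consider creating a Story instead."
    if estimated_sprints > 8:
        return "Too large for one epic; split it or create a Feature with child Epics."
    return "Size looks appropriate for an epic."

assess_epic_size(12)
# "Too large for one epic; split it or create a Feature with child Epics."
```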
Parent Not a Feature
Scenario: User specifies parent, but it's not a Feature.
Action:
- Check parent issue type
- If parent is Epic or Story, inform user
- Suggest correction
Example:
Parent issue PROJ-100 is an Epic, but epics should typically link to Features (not other Epics).
Options:
1. Link to the parent Feature instead (if PROJ-100 has a parent)
2. Proceed without parent link
3. Create a Feature first, then link this Epic to it
What would you like to do?
Missing Acceptance Criteria
Scenario: User doesn't provide epic acceptance criteria.
Action:
- Explain importance of epic AC
- Ask probing questions
- Help construct AC
Example:
Epic acceptance criteria help define when this epic is complete. Let's add some.
What are the key outcomes that must be achieved?
- What capabilities will exist when this epic is done?
- How will you demonstrate the epic is complete?
- What must work end-to-end?
Example: "Administrators can view aggregated metrics from all clusters"
Security Validation Failure
Scenario: Sensitive data detected in epic content.
Action:
- STOP submission
- Inform user what type of data was detected
- Ask for redaction
MCP Tool Error
Scenario: MCP tool returns an error when creating the epic.
Action:
- Parse error message
- Provide user-friendly explanation
- Suggest corrective action
Common errors:
- "Field 'epic name' is required" → Set epic name = summary
- "Invalid parent" → Verify parent is Feature type
- "Issue type 'Epic' not available" → Check if project supports Epics
Examples
Example 1: Epic with Full Details
Input:
/jira:create epic CNTRLPLANE "Multi-cluster metrics aggregation"
Interactive prompts:
What is the main objective of this epic?
> Enable SREs to monitor multiple ROSA HCP clusters from one dashboard
What is included in scope?
> Metrics aggregation, unified dashboard, cross-cluster alerting, 30-day retention
What is out of scope?
> Log aggregation, cost reporting, non-HCP cluster support
Epic acceptance criteria?
> - View aggregated metrics from all clusters
> - Configure cross-cluster alerts
> - 30-day historical retention
> - Complete documentation
Timeframe?
> Q1 2025, estimate 6 sprints
Parent feature? (optional)
> CNTRLPLANE-100
Result:
- Epic created with complete description
- Linked to parent feature
- All CNTRLPLANE conventions applied
Example 2: Epic with Auto-Detection
Input:
/jira:create epic CNTRLPLANE "Advanced node pool autoscaling for ARO HCP"
Auto-applied (via cntrlplane skill):
- Component: HyperShift / ARO (detected from "ARO HCP")
- Target Version: openshift-4.21
- Epic Name: Same as summary
- Labels: ai-generated-jira
- Security: Red Hat Employee
Interactive prompts:
- Epic objective and scope
- Acceptance criteria
- Timeframe
Result:
- Full epic with ARO component
Example 3: Standalone Epic (No Parent)
Input:
/jira:create epic MYPROJECT "Improve test coverage for API endpoints"
Result:
- Epic created without parent
- Standard epic fields applied
- Ready for stories to be linked
Best Practices Summary
- Clear objective: State what capability will be delivered
- Define scope: Explicitly state what's in and out of scope
- Epic AC: High-level outcomes that define "done"
- Right size: 2-8 sprints, fits in a quarter
- Timebox: Specify target quarter/release
- Link to feature: If part of larger initiative
- Target users: Identify who benefits
- Epic name field: Always set (same as summary)
Anti-Patterns to Avoid
❌ Epic is actually a story
"As a user, I want to view a dashboard"
✅ Too small, create as Story instead
❌ Epic is actually a feature
"Complete observability platform redesign" (12 months, 50+ stories)
✅ Too large, create as Feature with child Epics
❌ Vague acceptance criteria
- Epic is done when everything works
✅ Be specific: "SREs can view metrics from 100+ clusters with <1s load time"
❌ Implementation details in AC
- Backend uses PostgreSQL for metrics storage
- API implements gRPC endpoints
✅ Focus on outcomes, not implementation
❌ No scope definition
Description: "Improve monitoring"
✅ Define what's included and what's not
Workflow Summary
- ✅ Parse command arguments (project, summary, flags)
- 🔍 Auto-detect component from summary keywords
- ⚙️ Apply project-specific defaults
- 💬 Interactively collect epic objective and scope
- 💬 Interactively collect epic acceptance criteria
- 💬 Collect timeframe and parent link (if applicable)
- 🔒 Scan for sensitive data
- ✅ Validate epic size and quality
- ✅ Set epic name field = summary
- 📝 Format description with Jira markup
- ✅ Create epic via MCP tool
- 📤 Return issue key and URL
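Putting the steps together, the workflow reads as one linear function; every helper below refers to the sketches earlier in this document and is illustrative rather than prescriptive:

```python
def create_epic_workflow(project_key, summary, answers, parent_key=None):
    """Illustrative end-to-end flow mirroring the checklist above."""
    description = build_epic_description(**answers)        # format with Jira markup
    if hits := scan_for_secrets(summary, description):     # security gate
        raise ValueError(f"Sensitive data detected: {hits}")
    additional_fields = {"customfield_epicname": summary}  # epic name = summary
    if parent_key:
        validate_parent_feature(parent_key)                # must be a Feature
        additional_fields["parent"] = {"key": parent_key}
    return mcp__atlassian__jira_create_issue(
        project_key=project_key,
        summary=summary,
        issue_type="Epic",
        description=description,
        additional_fields=additional_fields,
    )
```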
See Also
- `/jira:create` - Main command that invokes this skill
- `create-feature` skill - For larger strategic initiatives
- `create-story` skill - For stories within epics
- `cntrlplane` skill - CNTRLPLANE-specific conventions
- Agile epic management best practices