hypothesis-generation
About
This skill generates testable hypotheses from observations or preliminary data, enabling developers to formulate evidence-based explanations and design experiments. It supports exploring competing explanations, developing predictions, and proposing mechanisms during scientific inquiry, applying systematic hypothesis generation to research planning across scientific domains.
Quick Install
Claude Code
Recommended (plugin):

```
/plugin add https://github.com/K-Dense-AI/claude-scientific-writer
```

Or install manually with git:

```
git clone https://github.com/K-Dense-AI/claude-scientific-writer.git ~/.claude/skills/hypothesis-generation
```

Copy and paste one of these commands in Claude Code to install this skill.
Documentation
Scientific Hypothesis Generation
Overview
Hypothesis generation is a systematic process for developing testable explanations. Formulate evidence-based hypotheses from observations, design experiments, explore competing explanations, and develop predictions. Apply this skill for scientific inquiry across domains.
When to Use This Skill
This skill should be used when:
- Developing hypotheses from observations or preliminary data
- Designing experiments to test scientific questions
- Exploring competing explanations for phenomena
- Formulating testable predictions for research
- Conducting literature-based hypothesis generation
- Planning mechanistic studies across scientific domains
Workflow
Follow this systematic process to generate robust scientific hypotheses:
1. Understand the Phenomenon
Start by clarifying the observation, question, or phenomenon that requires explanation:
- Identify the core observation or pattern that needs explanation
- Define the scope and boundaries of the phenomenon
- Note any constraints or specific contexts
- Clarify what is already known vs. what is uncertain
- Identify the relevant scientific domain(s)
2. Conduct Comprehensive Literature Search
Search existing scientific literature to ground hypotheses in current evidence. Use both PubMed (for biomedical topics) and general web search (for broader scientific domains):
For biomedical topics:
- Use WebFetch with PubMed URLs to access relevant literature
- Search for recent reviews, meta-analyses, and primary research
- Look for similar phenomena, related mechanisms, or analogous systems
For all scientific domains:
- Use WebSearch to find recent papers, preprints, and reviews
- Search for established theories, mechanisms, or frameworks
- Identify gaps in current understanding
Search strategy:
- Begin with broad searches to understand the landscape
- Narrow to specific mechanisms, pathways, or theories
- Look for contradictory findings or unresolved debates
- Consult references/literature_search_strategies.md for detailed search techniques
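As a concrete illustration of the broad-then-narrow strategy, the sketch below queries PubMed directly through NCBI's public E-utilities API. It is a minimal example, not part of the skill itself: the query strings are invented placeholders, and in practice these searches would go through WebFetch or WebSearch as described above.

```python
# Minimal sketch: broad-then-narrow PubMed searches via NCBI E-utilities.
# The query terms below are illustrative placeholders, not part of this skill.
import json
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils"

def pubmed_search(term: str, retmax: int = 5) -> list[str]:
    """Return up to `retmax` PubMed IDs matching `term`."""
    params = urllib.parse.urlencode(
        {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    )
    with urllib.request.urlopen(f"{EUTILS}/esearch.fcgi?{params}") as resp:
        data = json.load(resp)
    return data["esearchresult"]["idlist"]

# Broad search to map the landscape, then narrow to a specific mechanism.
broad = pubmed_search("gut microbiome depression")
narrow = pubmed_search("gut microbiome depression vagus nerve review[pt]")
print("broad hits:", broad)
print("narrow hits:", narrow)
```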
3. Synthesize Existing Evidence
Analyze and integrate findings from literature search:
- Summarize current understanding of the phenomenon
- Identify established mechanisms or theories that may apply
- Note conflicting evidence or alternative viewpoints
- Recognize gaps, limitations, or unanswered questions
- Identify analogies from related systems or domains
4. Generate Competing Hypotheses
Develop 3-5 distinct hypotheses that could explain the phenomenon. Each hypothesis should:
- Provide a mechanistic explanation (not just description)
- Be distinguishable from other hypotheses
- Draw on evidence from the literature synthesis
- Consider different levels of explanation (molecular, cellular, systemic, population, etc.)
Strategies for generating hypotheses:
- Apply known mechanisms from analogous systems
- Consider multiple causative pathways
- Explore different scales of explanation
- Question assumptions in existing explanations
- Combine mechanisms in novel ways
5. Evaluate Hypothesis Quality
Assess each hypothesis against established quality criteria from references/hypothesis_quality_criteria.md:
- Testability: Can the hypothesis be empirically tested?
- Falsifiability: What observations would disprove it?
- Parsimony: Is it the simplest explanation that fits the evidence?
- Explanatory Power: How much of the phenomenon does it explain?
- Scope: What range of observations does it cover?
- Consistency: Does it align with established principles?
- Novelty: Does it offer new insights beyond existing explanations?
Explicitly note the strengths and weaknesses of each hypothesis.
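One lightweight way to make this evaluation explicit is a scoring rubric. The sketch below is a hypothetical illustration rather than a format defined in references/hypothesis_quality_criteria.md; the criterion names follow the list above, and the 1-5 scale and example hypotheses are assumptions.

```python
# Hypothetical rubric for scoring hypotheses on the criteria above.
# The 1-5 scale and example hypotheses are assumptions, not prescribed
# by this skill.
from dataclasses import dataclass, field

CRITERIA = ("testability", "falsifiability", "parsimony",
            "explanatory_power", "scope", "consistency", "novelty")

@dataclass
class HypothesisScore:
    name: str
    scores: dict[str, int] = field(default_factory=dict)  # criterion -> 1..5

    def total(self) -> int:
        return sum(self.scores.get(c, 0) for c in CRITERIA)

h1 = HypothesisScore("H1: vagal signaling",
                     {c: 4 for c in CRITERIA} | {"novelty": 2})
h2 = HypothesisScore("H2: immune-mediated pathway",
                     {c: 3 for c in CRITERIA} | {"testability": 5})

# Rank hypotheses by total score, highest first.
for h in sorted([h1, h2], key=HypothesisScore.total, reverse=True):
    print(h.name, h.total())
```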
6. Design Experimental Tests
For each viable hypothesis, propose specific experiments or studies to test it. Consult references/experimental_design_patterns.md for common approaches:
Experimental design elements:
- What would be measured or observed?
- What comparisons or controls are needed?
- What methods or techniques would be used?
- What sample sizes or statistical approaches are appropriate? (a power-analysis sketch follows at the end of this step)
- What are potential confounds and how to address them?
Consider multiple approaches:
- Laboratory experiments (in vitro, in vivo, computational)
- Observational studies (cross-sectional, longitudinal, case-control)
- Clinical trials (if applicable)
- Natural experiments or quasi-experimental designs
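To make the sample-size question concrete, the sketch below runs a standard a priori power analysis for a two-group comparison using statsmodels. The effect size, alpha, and power values are illustrative assumptions; substitute estimates appropriate to the hypothesis under test.

```python
# A priori power analysis for a two-sample t-test (sketch; values are
# illustrative assumptions, not recommendations).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # assumed Cohen's d for the predicted effect
    alpha=0.05,        # two-sided significance threshold
    power=0.80,        # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"~{n_per_group:.0f} participants per group needed")
```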
7. Formulate Testable Predictions
For each hypothesis, generate specific, quantitative predictions:
- State what should be observed if the hypothesis is correct
- Specify expected direction and magnitude of effects when possible
- Identify conditions under which predictions should hold
- Distinguish predictions between competing hypotheses
- Note predictions that would falsify the hypothesis
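A structured record helps keep predictions specific and comparable across hypotheses. The sketch below is one possible shape for such a record, not a format the skill defines; the fields mirror the checklist above, and the example values are hypothetical.

```python
# Hypothetical structure for a testable prediction; field names mirror the
# checklist above, and the example values are invented for illustration.
from dataclasses import dataclass

@dataclass
class Prediction:
    hypothesis: str     # which hypothesis this prediction tests
    observation: str    # what should be observed if the hypothesis holds
    direction: str      # expected direction of the effect
    magnitude: str      # expected magnitude, quantified when possible
    conditions: str     # conditions under which the prediction should hold
    falsified_if: str   # observation that would falsify the hypothesis

p = Prediction(
    hypothesis="H1: vagal signaling",
    observation="Reduced depressive-like behavior after probiotic treatment",
    direction="decrease",
    magnitude=">20% reduction in immobility time",
    conditions="intact vagus nerve",
    falsified_if="treatment effect persists unchanged after vagotomy",
)
print(p)
```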
8. Present Structured Output
Use the template in assets/hypothesis_output_template.md to present hypotheses in a clear, consistent format:
Standard structure:
- Background & Context - Phenomenon and literature summary
- Competing Hypotheses - Enumerated hypotheses with mechanistic explanations
- Quality Assessment - Evaluation of each hypothesis
- Experimental Designs - Proposed tests for each hypothesis
- Testable Predictions - Specific, measurable predictions
- Critical Comparisons - How to distinguish between hypotheses
Quality Standards
Ensure all generated hypotheses meet these standards:
- Evidence-based: Grounded in existing literature with citations
- Testable: Include specific, measurable predictions
- Mechanistic: Explain how/why, not just what
- Comprehensive: Consider alternative explanations
- Rigorous: Include experimental designs to test predictions
Resources
references/
- hypothesis_quality_criteria.md - Framework for evaluating hypothesis quality (testability, falsifiability, parsimony, explanatory power, scope, consistency)
- experimental_design_patterns.md - Common experimental approaches across domains (RCTs, observational studies, lab experiments, computational models)
- literature_search_strategies.md - Effective search techniques for PubMed and general scientific sources
assets/
- hypothesis_output_template.md - Structured format for presenting hypotheses consistently with all required sections
Related Skills
sglang
SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
evaluating-llms-harness
This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
content-collections
This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.
llamaguard
LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.
