# implementation

## About

This Claude Skill implements features by generating code, tests, and documentation, following approved designs. It is used after the design phase to build features while adhering to TDD and project coding standards. The skill uses tools like Read, Write, Edit, and Bash to systematically develop high-quality software.

## Skill Documentation
# Feature Implementation Skill

## Purpose
This skill provides systematic guidance for implementing features with high-quality code, comprehensive tests, and proper documentation, following project standards and best practices.
## When to Use
- After design phase is complete and approved
- Need to implement code for a feature
- Writing unit and integration tests
- Creating technical documentation
- Following TDD (Test-Driven Development) workflow
## Implementation Workflow

### 1. Setup and Preparation

**Review Design Document:**
- Read architecture design from previous phase
- Understand component structure
- Review API contracts and data models
- Note security and performance requirements
**Set Up Development Environment:**

```bash
# Activate virtual environment
source venv/bin/activate  # or: uv venv && source .venv/bin/activate

# Install dependencies
pip install -r requirements.txt
# or: uv pip install -r requirements.txt

# Install dev dependencies
pip install -e ".[dev]"
```

**Create Feature Branch:**

```bash
git checkout -b feature/feature-name
```

**Deliverable:** Development environment ready
### 2. Test-Driven Development (TDD)

**TDD Cycle:** Red → Green → Refactor

**Step 1: Write Failing Test (Red)**
```python
# tests/test_feature.py
import pytest

from feature import process_data


def test_process_data_success():
    """Test successful data processing."""
    # Arrange
    input_data = {"name": "test", "value": 123}

    # Act
    result = process_data(input_data)

    # Assert
    assert result.name == "test"
    assert result.value == 123
```
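To confirm the red step, run the test and watch it fail before any implementation exists (assumes pytest is installed as configured above):

```bash
# Expect a failure here: process_data does not exist yet
pytest tests/test_feature.py -v
```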
**Step 2: Write Minimal Code (Green)**
```python
# src/tools/feature/core.py
def process_data(input_data: dict):
    """Process input data."""
    # Minimal implementation to pass the test
    return type('Result', (), input_data)()
```
**Step 3: Improve the Code (Refactor)**
```python
# src/tools/feature/core.py
from .models import InputModel, ResultModel


def process_data(input_data: dict) -> ResultModel:
    """
    Process input data and return the result.

    Args:
        input_data: Input data dictionary

    Returns:
        ResultModel with processed data

    Raises:
        ValidationError: If input is invalid
    """
    # Proper implementation with validation
    validated = InputModel(**input_data)
    return ResultModel(
        name=validated.name,
        value=validated.value,
    )
```
**Repeat:** Write the next test, implement, refactor.

**Deliverable:** Tested, working code

### 3. Code Implementation

**Follow Project Structure:**
```
src/tools/feature_name/
├── __init__.py      # Public exports
├── models.py        # Pydantic models (data)
├── interfaces.py    # Abstract interfaces
├── core.py          # Core business logic
├── repository.py    # Data access layer
├── validators.py    # Input validation
├── utils.py         # Helper functions
├── config.py        # Configuration
├── exceptions.py    # Custom exceptions
└── main.py          # CLI entry point (if applicable)
```
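As a sketch of the `__init__.py` role above, the public exports might look like this (names taken from the examples in this guide; adjust to your feature):

```python
# src/tools/feature_name/__init__.py
# Re-export the public surface; internal modules stay private.
from .core import FeatureService
from .exceptions import FeatureError
from .models import FeatureInput, FeatureOutput

__all__ = ["FeatureService", "FeatureError", "FeatureInput", "FeatureOutput"]
```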
**Coding Standards:**

Refer to `code-style-guide.md` for:
- PEP 8 style guide
- Type hints for all functions
- Google-style docstrings
- 500-line file limit
- Single responsibility principle
**Example Implementation:**
```python
# src/tools/feature/models.py
from datetime import datetime
from typing import Optional

from pydantic import BaseModel, Field


class FeatureInput(BaseModel):
    """Input model for feature."""

    name: str = Field(..., min_length=1, max_length=100)
    value: int = Field(..., ge=0)

    class Config:
        validate_assignment = True


class FeatureOutput(BaseModel):
    """Output model for feature."""

    id: Optional[int] = None
    name: str
    value: int
    created_at: datetime = Field(default_factory=datetime.utcnow)
```
```python
# src/tools/feature/core.py
from .models import FeatureInput, FeatureOutput
from .repository import FeatureRepository
from .validators import FeatureValidator


class FeatureService:
    """Feature service with business logic."""

    def __init__(
        self,
        repository: FeatureRepository,
        validator: FeatureValidator,
    ):
        """
        Initialize service with dependencies.

        Args:
            repository: Repository for data access
            validator: Validator for input validation
        """
        self.repository = repository
        self.validator = validator

    def create(self, input_data: FeatureInput) -> FeatureOutput:
        """
        Create a new feature resource.

        Args:
            input_data: Validated input data

        Returns:
            FeatureOutput with the created resource

        Raises:
            ValidationError: If validation fails
            RepositoryError: If save fails
        """
        # Validate
        self.validator.validate_create(input_data)

        # Create
        output = FeatureOutput(
            name=input_data.name,
            value=input_data.value,
        )

        # Persist
        saved = self.repository.save(output)
        return saved
```
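`validators.py` is referenced above but not shown; a minimal sketch of what `FeatureValidator` might look like (the reserved-name rule is purely illustrative, and `ValidationError` is assumed to live in the module's `exceptions.py`):

```python
# src/tools/feature/validators.py (illustrative sketch)
from .exceptions import ValidationError
from .models import FeatureInput


class FeatureValidator:
    """Business-rule validation beyond Pydantic's field constraints."""

    RESERVED_NAMES = {"admin", "system"}  # example rule, not from the design

    def validate_create(self, input_data: FeatureInput) -> None:
        """Raise ValidationError if a business rule is violated."""
        if input_data.name.lower() in self.RESERVED_NAMES:
            raise ValidationError(f"Name '{input_data.name}' is reserved")
```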
**Deliverable:** Implemented core functionality

### 4. Testing Implementation

**Testing Checklist:** Refer to `testing-checklist.md` for comprehensive coverage.

**Unit Tests (80%+ Coverage):**
```python
# tests/test_core.py
from unittest.mock import Mock

import pytest

from feature.core import FeatureService
from feature.exceptions import ValidationError
from feature.models import FeatureInput, FeatureOutput


@pytest.fixture
def mock_repository():
    """Mock repository for testing."""
    repo = Mock()
    repo.save.return_value = FeatureOutput(
        id=1,
        name="test",
        value=123,
    )
    return repo


@pytest.fixture
def mock_validator():
    """Mock validator for testing."""
    validator = Mock()
    validator.validate_create.return_value = None
    return validator


@pytest.fixture
def service(mock_repository, mock_validator):
    """Service fixture with mocked dependencies."""
    return FeatureService(
        repository=mock_repository,
        validator=mock_validator,
    )


def test_create_success(service, mock_repository):
    """Test successful creation."""
    # Arrange
    input_data = FeatureInput(name="test", value=123)

    # Act
    result = service.create(input_data)

    # Assert
    assert result.name == "test"
    assert result.value == 123
    mock_repository.save.assert_called_once()


def test_create_validation_error(service, mock_validator):
    """Test validation error handling."""
    # Arrange
    input_data = FeatureInput(name="test", value=123)
    mock_validator.validate_create.side_effect = ValidationError("Invalid")

    # Act & Assert
    with pytest.raises(ValidationError):
        service.create(input_data)
```
**Integration Tests:**
```python
# tests/integration/test_feature_integration.py
import pytest

from feature import FeatureService, FileSystemRepository
from feature.models import FeatureInput
from feature.validators import FeatureValidator


@pytest.fixture
def temp_data_dir(tmp_path):
    """Temporary directory for test data."""
    return tmp_path / "data"


def test_create_and_retrieve(temp_data_dir):
    """Test end-to-end create and retrieve."""
    # Arrange
    repo = FileSystemRepository(temp_data_dir)
    service = FeatureService(repository=repo, validator=FeatureValidator())

    # Act: Create
    created = service.create(FeatureInput(name="test", value=123))

    # Act: Retrieve
    retrieved = service.get(created.id)

    # Assert
    assert retrieved.name == "test"
    assert retrieved.value == 123
```
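`FileSystemRepository` is used above but never defined; a minimal JSON-file-per-record sketch under that assumption (serialization uses Pydantic v2's `model_dump_json`; on v1, use `.json()` instead):

```python
# src/tools/feature/repository.py (illustrative sketch)
import json
from pathlib import Path

from .models import FeatureOutput


class FileSystemRepository:
    """Stores each record as a JSON file under a data directory."""

    def __init__(self, data_dir: Path):
        self.data_dir = Path(data_dir)
        self.data_dir.mkdir(parents=True, exist_ok=True)

    def save(self, output: FeatureOutput) -> FeatureOutput:
        # Assign a simple incrementing id when the record has none
        if output.id is None:
            output.id = len(list(self.data_dir.glob("*.json"))) + 1
        path = self.data_dir / f"{output.id}.json"
        path.write_text(output.model_dump_json())
        return output

    def get(self, record_id: int) -> FeatureOutput:
        path = self.data_dir / f"{record_id}.json"
        return FeatureOutput(**json.loads(path.read_text()))
```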
**Run Tests:**
```bash
# Run all tests with coverage
pytest --cov=src --cov-report=html --cov-report=term

# Run a specific test file
pytest tests/test_core.py -v

# Run with markers
pytest -m "not slow" -v
```
**Deliverable:** Comprehensive test suite (80%+ coverage)

### 5. Code Quality Checks

**Run Formatters and Linters:**
```bash
# Format code with Black
black src/ tests/

# Type check with mypy
mypy src/

# Lint with flake8 (if configured)
flake8 src/ tests/

# Run all checks
make lint  # If Makefile configured
```
**Pre-commit Hooks (If Configured):**
```bash
# Run pre-commit checks
pre-commit run --all-files
```
**Code Review Checklist:**

- [ ] All functions have type hints
- [ ] All functions have docstrings
- [ ] No files exceed 500 lines
- [ ] Tests achieve 80%+ coverage
- [ ] No lint errors or warnings
- [ ] Error handling implemented
- [ ] Logging added where appropriate
- [ ] Security best practices followed

**Deliverable:** Quality-checked code
### 6. Documentation

**Code Documentation:**
- Docstrings for all public functions/classes
- Inline comments for complex logic
- Type hints for clarity
**Technical Documentation:**

````markdown
# Feature Implementation

## Overview
[What was implemented]

## Architecture
[Actual structure (may differ from design)]

## Usage Examples
```python
from feature import FeatureService

service = FeatureService()
result = service.create(name="example")
```

## Configuration
Required environment variables:
- `FEATURE_API_KEY`: API key for service
- `FEATURE_TIMEOUT`: Timeout in seconds (default: 30)

## Testing
```bash
pytest tests/test_feature.py
```

## Known Issues
[Issue 1]

## Future Enhancements
- [Enhancement 1]
````
**User Documentation (If Applicable):**
- Usage guide in `docs/guides/`
- CLI help text
- Example configurations
**Deliverable:** Complete documentation
---
### 7. Integration and Verification
**Verify Against Requirements:**
- [ ] All acceptance criteria met
- [ ] Security checklist items addressed
- [ ] Performance requirements met
- [ ] Edge cases handled
- [ ] Error scenarios tested
**Manual Testing:**
```bash
# Test CLI (if applicable)
python -m src.tools.feature.main create --name test

# Test with real data
python -m src.tools.feature.main --input sample.json

# Test error cases
python -m src.tools.feature.main --invalid-input
```

**Integration with Existing Code:**
- Imports work correctly
- No circular dependencies
- Backward compatibility maintained (if applicable)
- No breaking changes to public APIs
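A quick smoke test for the import checks above (module path assumed from the manual-testing examples):

```bash
# Fails fast on broken imports or circular dependencies
python -c "from src.tools.feature import FeatureService; print('imports OK')"
```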
**Deliverable:** Verified, working feature

## Code Style Guidelines

### Python Style (PEP 8)

**Imports:**
```python
# Standard library
import os
import sys
from pathlib import Path

# Third-party
import click
from pydantic import BaseModel

# Local
from .models import FeatureModel
from .exceptions import FeatureError
```
**Naming:**
```python
# Classes: PascalCase
class FeatureService:
    pass


# Functions/methods: snake_case
def process_data():
    pass


# Constants: UPPER_SNAKE_CASE
MAX_RETRIES = 3


# Private: leading underscore
def _internal_helper():
    pass
```
**Type Hints:**
```python
from typing import Any, Dict, List, Optional


def function(
    required: str,
    optional: Optional[int] = None,
    items: Optional[List[str]] = None,
) -> Dict[str, Any]:
    pass
```
**Docstrings (Google Style):**
```python
def function(param1: str, param2: int) -> bool:
    """
    Short description.

    Longer description if needed.

    Args:
        param1: Description of param1
        param2: Description of param2

    Returns:
        Description of return value

    Raises:
        ValueError: When this happens
    """
    pass
```
## Testing Best Practices

### Pytest Conventions

**Test File Naming:**
- `test_*.py` or `*_test.py`
- Mirror source structure: `src/core.py` → `tests/test_core.py`
**Test Function Naming:**
```python
def test_function_name_condition_expected_result():
    """Test description."""
    pass


# Examples:
def test_create_feature_valid_input_returns_feature():
    pass


def test_validate_input_missing_name_raises_error():
    pass
```
**Test Structure (Arrange-Act-Assert):**
```python
from unittest.mock import Mock


def test_example():
    """Test example."""
    # Arrange: Set up test data and mocks
    input_data = {"name": "test"}
    mock_service = Mock()

    # Act: Execute the code being tested
    result = function_under_test(input_data, mock_service)

    # Assert: Verify expected outcomes
    assert result == expected
    mock_service.method.assert_called_once()
```
**Fixtures:**
```python
# tests/conftest.py (shared fixtures)
import pytest


@pytest.fixture
def sample_data():
    """Sample data for tests."""
    return {"name": "test", "value": 123}


@pytest.fixture
def temp_directory(tmp_path):
    """Temporary directory for test files."""
    test_dir = tmp_path / "test_data"
    test_dir.mkdir()
    yield test_dir
    # Cleanup happens automatically
```
**Parametrize for Multiple Cases:**
```python
@pytest.mark.parametrize("input_value,expected", [
    ("valid", True),
    ("invalid", False),
    ("", False),
])
def test_validation(input_value, expected):
    """Test validation with multiple inputs."""
    result = validate(input_value)
    assert result == expected
```
## Common Patterns

### Error Handling Pattern
```python
import logging

logger = logging.getLogger(__name__)


def process_data(data: dict) -> Result:
    """Process data with proper error handling."""
    try:
        # Validate
        validated = validate_data(data)

        # Process
        result = perform_processing(validated)
        return result

    except ValidationError as e:
        logger.warning(f"Validation failed: {e}")
        raise

    except ProcessingError as e:
        logger.error(f"Processing failed: {e}", exc_info=True)
        raise

    except Exception as e:
        logger.exception(f"Unexpected error: {e}")
        raise ProcessingError("Unexpected error occurred") from e
```
### Dependency Injection Pattern
```python
from abc import ABC, abstractmethod


# Interface
class Repository(ABC):
    @abstractmethod
    def save(self, data) -> None:
        pass


# Implementation
class FileRepository(Repository):
    def save(self, data) -> None:
        # File-based implementation
        pass


# Service with dependency injection
class Service:
    def __init__(self, repository: Repository):
        self.repository = repository  # Injected dependency

    def create(self, data):
        # Use injected repository
        self.repository.save(data)


# Usage
repo = FileRepository()
service = Service(repository=repo)  # Inject dependency
```
### Configuration Pattern
```python
from pydantic_settings import BaseSettings, SettingsConfigDict


class Config(BaseSettings):
    """Application configuration."""

    model_config = SettingsConfigDict(env_prefix="FEATURE_", env_file=".env")

    api_key: str
    timeout: int = 30
    debug: bool = False


# Usage
config = Config()  # Loads from environment/.env file
service = Service(api_key=config.api_key, timeout=config.timeout)
```
## Supporting Resources

- `code-style-guide.md`: Detailed Python style guidelines
- `testing-checklist.md`: Comprehensive testing requirements
- `scripts/generate_tests.py`: Test scaffolding automation
## Integration with Feature Implementation Flow

**Input:** Approved architecture design
**Process:** TDD implementation with quality checks
**Output:** Tested, documented code
**Next Step:** Validation skill for quality assurance
## Implementation Checklist

Before marking the feature complete:

- [ ] All code implemented per design
- [ ] Unit tests written (80%+ coverage)
- [ ] Integration tests written
- [ ] All tests passing
- [ ] Code formatted (Black)
- [ ] Type checking passing (mypy)
- [ ] No lint errors
- [ ] Docstrings complete
- [ ] Technical documentation written
- [ ] User documentation written (if applicable)
- [ ] Manual testing completed
- [ ] Security considerations addressed
- [ ] Performance requirements met
- [ ] Code reviewed (if applicable)
- [ ] Ready for validation phase
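One way to run the whole gate in a single command, assuming the tools configured above (`--cov-fail-under` makes pytest-cov enforce the 80% bar):

```bash
black --check src/ tests/ && mypy src/ && pytest --cov=src --cov-fail-under=80
```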
## Quick Install

```
/plugin add https://github.com/matteocervelli/llms/tree/main/implementation
```

Copy and paste this command in Claude Code to install the skill.
## GitHub Repository

https://github.com/matteocervelli/llms
## Related Skills

**langchain**
LangChain is a framework for building LLM applications, with support for agents, chains, and RAG development. Its core features include multi-provider model support, 500+ tool integrations, memory management, and vector retrieval. Developers can use it to quickly build chatbots, Q&A systems, and autonomous agents, from prototyping through production deployment.

**evaluating-llms-harness**
This skill evaluates LLM quality across 60+ academic benchmarks (such as MMLU and GSM8K), making it suitable for model comparison, academic research, and tracking training progress. It supports HuggingFace, vLLM, and API backends, and is widely used by leading organizations such as EleutherAI. Developers can batch-evaluate models across many tasks from a simple command line.

**go-test**
The go-test skill gives Go developers comprehensive testing guidance, covering best practices for unit tests, benchmarks, and integration tests. It helps you correctly implement table-driven tests, subtest organization, mock interfaces, and race detection, and guides coverage analysis and benchmarking. Whenever you write _test.go files, design test cases, or refine a testing strategy, this skill keeps you aligned with Go's standard testing conventions.

**project-structure**
This skill provides comprehensive guidance and best practices for designing project directory structures. It covers standard layouts for many project types, including monorepos, frontend and backend frameworks, libraries, and extensions. It helps teams build scalable, maintainable code architectures, and is especially useful for new project design, legacy migration, and establishing team conventions.
