# testing-infrastructure

## About
This skill provides complete testing infrastructure for a monorepo using Vitest as the primary test runner. It covers unit testing, Storybook integration via addon-vitest, React Testing Library patterns, and mock data setup. Use it when writing tests, configuring test environments, troubleshooting failures, or working with CI pipelines.
## Documentation
### Test Runners

#### Vitest

The primary test runner for:

- Unit tests (`.test.ts`, `.test.tsx`)
- Storybook story tests (via `addon-vitest`)
### Running Tests

```bash
# Unit tests only
pnpm test

# Specific package
pnpm test --filter @lsst-sqre/squared

# Storybook tests
pnpm test-storybook
pnpm test-storybook:watch
pnpm test-storybook --filter @lsst-sqre/squared

# Comprehensive CI pipeline
pnpm run localci  # Or use the test-suite-runner agent
```
### Vitest Configuration

#### Squared Package

Two test projects are defined in `vitest.config.ts` (sketched below).

Unit tests:

- Files: `src/**/*.test.{ts,tsx}`
- Environment: jsdom
- Setup: `src/test-setup.ts`
- CSS Modules: non-scoped strategy

Storybook tests:

- Browser: Playwright (chromium)
- Environment: browser + jsdom
- Setup: `.storybook/vitest.setup.ts`
- Plugin: `@storybook/addon-vitest`
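For reference, a minimal sketch of what that two-project layout can look like, assuming Vitest 3's `projects` field and the `storybookTest` plugin from `@storybook/addon-vitest/vitest-plugin`; option details are assumptions, so check the package's actual `vitest.config.ts`:

```ts
// vitest.config.ts - illustrative sketch, not the package's actual config
import { defineConfig } from 'vitest/config';
import { storybookTest } from '@storybook/addon-vitest/vitest-plugin';

export default defineConfig({
  test: {
    projects: [
      {
        test: {
          name: 'unit',
          include: ['src/**/*.test.{ts,tsx}'],
          environment: 'jsdom',
          setupFiles: ['./src/test-setup.ts'],
          // Keep CSS Module class names readable in assertions
          css: { modules: { classNameStrategy: 'non-scoped' } },
        },
      },
      {
        // Plugin options omitted here; the real config may pass a configDir
        plugins: [storybookTest()],
        test: {
          name: 'storybook',
          browser: {
            enabled: true,
            provider: 'playwright',
            instances: [{ browser: 'chromium' }],
          },
          setupFiles: ['./.storybook/vitest.setup.ts'],
        },
      },
    ],
  },
});
```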
### Writing Unit Tests

#### Basic Pattern

```tsx
import { describe, it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';
import MyComponent from './MyComponent';

describe('MyComponent', () => {
  it('renders correctly', () => {
    render(<MyComponent title="Test" />);
    expect(screen.getByText('Test')).toBeInTheDocument();
  });

  it('handles user interaction', async () => {
    const handleClick = vi.fn();
    render(<MyComponent onClick={handleClick} />);
    await userEvent.click(screen.getByRole('button'));
    expect(handleClick).toHaveBeenCalled();
  });
});
```
### React Testing Library

Query priority (prefer the highest option that applies):

1. `getByRole` - Most accessible
2. `getByLabelText` - For forms
3. `getByPlaceholderText` - For inputs
4. `getByText` - For content
5. `getByTestId` - Last resort

```tsx
// Good - semantic queries
screen.getByRole('button', { name: 'Submit' });
screen.getByLabelText('Email address');

// Avoid - implementation details
container.querySelector('.button');
```
### User Interactions

```tsx
import { it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import userEvent from '@testing-library/user-event';

it('handles form submission', async () => {
  const user = userEvent.setup();
  const handleSubmit = vi.fn();
  render(<MyForm onSubmit={handleSubmit} />);

  await user.type(screen.getByLabelText('Email'), 'test@example.com');
  await user.click(screen.getByRole('button', { name: 'Submit' }));

  expect(handleSubmit).toHaveBeenCalledWith({
    email: 'test@example.com',
  });
});
```
### Storybook Tests

#### In Stories

```tsx
import { expect, within, userEvent } from '@storybook/test';

export const WithInteraction: Story = {
  tags: ['test'],
  play: async ({ canvasElement }) => {
    const canvas = within(canvasElement);

    // Assert initial state
    await expect(canvas.getByText('Click me')).toBeInTheDocument();

    // Interact
    await userEvent.click(canvas.getByRole('button'));

    // Assert result
    await expect(canvas.getByText('Clicked!')).toBeInTheDocument();
  },
};
```
#### Running Story Tests

```bash
# Run all story tests
pnpm test-storybook --filter @lsst-sqre/squared

# Watch mode
pnpm test-storybook:watch --filter @lsst-sqre/squared

# Specific story
pnpm test-storybook --filter @lsst-sqre/squared -- --grep "MyComponent"
```
### Mocking

#### Mock Functions

```ts
import { vi } from 'vitest';

const mockFn = vi.fn();
const mockFnWithReturn = vi.fn(() => 'result');

expect(mockFn).toHaveBeenCalledWith('arg');
expect(mockFn).toHaveBeenCalledTimes(2);
```
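A few other commonly useful variants from the standard Vitest API:

```ts
import { vi } from 'vitest';

// Async return values
const fetchMock = vi.fn().mockResolvedValue({ ok: true });
const failMock = vi.fn().mockRejectedValue(new Error('network down'));

// Spy on an existing method, silence it, then restore the original
const warnSpy = vi.spyOn(console, 'warn').mockImplementation(() => {});
warnSpy.mockRestore();
```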
#### Mock Modules

```ts
vi.mock('../api/client', () => ({
  fetchData: vi.fn(() => Promise.resolve({ data: 'mock' })),
}));
```
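When only part of a module should be mocked, Vitest's `importOriginal` helper keeps the remaining exports intact; a sketch using the same hypothetical module path as above:

```ts
vi.mock('../api/client', async (importOriginal) => {
  const actual = await importOriginal<typeof import('../api/client')>();
  return {
    ...actual,
    // Override just fetchData; every other export stays real
    fetchData: vi.fn(() => Promise.resolve({ data: 'mock' })),
  };
});
```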
#### Mock SWR

```tsx
import { it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import useSWR from 'swr';
import MyComponent from './MyComponent';

vi.mock('swr');

it('renders with data', () => {
  vi.mocked(useSWR).mockReturnValue({
    data: { name: 'Test' },
    error: undefined,
    isLoading: false,
    isValidating: false,
    mutate: vi.fn(),
  });

  render(<MyComponent />);
  expect(screen.getByText('Test')).toBeInTheDocument();
});
```
### Mock Data

#### Mock Data Patterns

```ts
// src/lib/mocks/userData.ts
export const mockUserData = {
  username: 'testuser',
  email: 'test@example.com',
  groups: ['admin'],
};

export const mockUserDataLoading = {
  data: undefined,
  error: undefined,
  isLoading: true,
};

export const mockUserDataError = {
  data: undefined,
  error: new Error('Failed to fetch'),
  isLoading: false,
};
```
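These fixtures plug straight into the SWR mock shown earlier; a sketch assuming the same hypothetical component and the import path implied by `src/lib/mocks/userData.ts`:

```tsx
import { it, expect, vi } from 'vitest';
import { render, screen } from '@testing-library/react';
import useSWR from 'swr';
import { mockUserDataLoading } from '../lib/mocks/userData';
import MyComponent from './MyComponent';

vi.mock('swr');

it('shows a loading state', () => {
  vi.mocked(useSWR).mockReturnValue({
    ...mockUserDataLoading,
    isValidating: false,
    mutate: vi.fn(),
  });
  render(<MyComponent />);
  expect(screen.getByText('Loading...')).toBeInTheDocument();
});
```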
#### MSW (Mock Service Worker)

For API mocking in Storybook:

```tsx
// .storybook/preview.tsx
import { initialize, mswLoader } from 'msw-storybook-addon';
import { handlers } from '../src/mocks/handlers';

initialize({ onUnhandledRequest: 'bypass' });

export const loaders = [mswLoader];
export const parameters = {
  msw: {
    handlers,
  },
};
```

```ts
// src/mocks/handlers.ts
import { http, HttpResponse } from 'msw';

export const handlers = [
  http.get('/api/user-info', () => {
    return HttpResponse.json({
      username: 'testuser',
      email: 'test@example.com',
    });
  }),
];
```
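Individual stories can override the global handlers through the same `msw` parameter; a sketch with an assumed error case for the endpoint above:

```tsx
// MyComponent.stories.tsx - hypothetical story exercising an error response
import { http, HttpResponse } from 'msw';

export const ErrorState: Story = {
  parameters: {
    msw: {
      handlers: [
        http.get('/api/user-info', () =>
          HttpResponse.json({ detail: 'Unauthorized' }, { status: 401 })
        ),
      ],
    },
  },
};
```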
### Testing Configuration Components

```tsx
import { render, screen } from '@testing-library/react';
import { AppConfigProvider } from '../contexts/AppConfigContext';

const mockConfig = {
  siteName: 'Test Site',
  baseUrl: 'http://localhost:3000',
  // ... other required config
};

function renderWithConfig(ui: React.ReactElement) {
  return render(
    <AppConfigProvider config={mockConfig}>{ui}</AppConfigProvider>
  );
}

it('uses config', () => {
  renderWithConfig(<MyComponent />);
  expect(screen.getByText('Test Site')).toBeInTheDocument();
});
```
### Async Testing

#### Waiting for Elements

```ts
import {
  screen,
  waitFor,
  waitForElementToBeRemoved,
} from '@testing-library/react';

// Wait for an element to appear
await screen.findByText('Loaded data');

// Wait for an element to disappear
await waitForElementToBeRemoved(() => screen.queryByText('Loading...'));

// Custom wait
await waitFor(() => {
  expect(screen.getByText('Done')).toBeInTheDocument();
});
```
#### Testing Async Functions

```ts
import { renderHook, waitFor } from '@testing-library/react';

it('fetches data', async () => {
  const { result } = renderHook(() => useMyData());

  // Wait for loading to complete
  await waitFor(() => {
    expect(result.current.isLoading).toBe(false);
  });

  expect(result.current.data).toEqual(expectedData);
});
```
### Coverage

```bash
# Run tests with coverage
pnpm test -- --coverage
```

Coverage thresholds live in `vitest.config.ts`:

```ts
test: {
  coverage: {
    reporter: ['text', 'json', 'html'],
    thresholds: {
      lines: 80,
      functions: 80,
      branches: 80,
      statements: 80,
    },
  },
}
```
### Debugging Tests

```bash
# Run a single test file
pnpm test --filter @lsst-sqre/squared -- MyComponent.test.tsx

# Run in watch mode
pnpm test --filter @lsst-sqre/squared -- --watch

# Debug in the browser (for Storybook tests)
pnpm test-storybook:watch --filter @lsst-sqre/squared
```
#### Debug Queries

```ts
import { screen } from '@testing-library/react';

// Print the DOM tree
screen.debug();

// Print a specific element
screen.debug(screen.getByRole('button'));

// Print a Testing Playground URL for the current DOM
screen.logTestingPlaygroundURL();
```
### CI Pipeline

#### test-suite-runner Agent

Use the test-suite-runner agent (via the Task tool) for comprehensive testing. The agent runs `pnpm run localci`, which includes:

- `pnpm format:check`
- `pnpm lint`
- `pnpm type-check`
- `pnpm test`
- `pnpm build`

The agent:

- Runs the full CI pipeline
- Analyzes failures across all stages
- Provides detailed failure reports
- Suggests fixes
### Best Practices

- Test behavior, not implementation
- Use semantic queries (`getByRole`, etc.)
- Mock external dependencies (APIs, modules)
- Test user interactions with `userEvent`
- Use descriptive test names
- Keep tests focused and small
- Avoid testing internal state
- Test accessibility with role queries
- Use story tests for visual testing
- Run tests before committing
### Quick Install

Copy and paste this command in Claude Code to install this skill:

```
/plugin add https://github.com/lsst-sqre/squareone/tree/main/testing-infrastructure
```