
performing-visual-regression-testing

jeremylongshore
Updated Today
Tags: meta, ai, testing, design

About

This skill enables Claude to execute visual regression tests using tools like Percy and Chromatic to capture screenshots and compare them against baselines. It automatically identifies unintended UI changes by analyzing visual differences in web applications or components. Use it when you need to verify UI changes, perform regression testing, or check for visual inconsistencies.

Documentation

Overview

This skill empowers Claude to automatically detect unintended UI changes by performing visual regression tests. It integrates with popular visual testing tools to streamline the process of capturing screenshots, comparing them against baselines, and identifying visual differences.

How It Works

  1. Capture Screenshots: Captures screenshots of specified components or pages using the configured visual testing tool.
  2. Compare Against Baselines: Compares the captured screenshots against established baseline images.
  3. Analyze Visual Diffs: Identifies and analyzes visual differences between the current screenshots and the baselines.
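
The comparison at the heart of steps 2 and 3 can be sketched in plain Python. This is a minimal illustration only — tools like Percy and Chromatic perform smarter, perceptual diffing service-side — and it assumes screenshots have already been decoded into rows of (R, G, B) pixel tuples:

```python
def diff_ratio(baseline, current, tolerance=0):
    """Return the fraction of pixels differing beyond `tolerance` per channel."""
    if len(baseline) != len(current) or len(baseline[0]) != len(current[0]):
        raise ValueError("screenshot dimensions do not match the baseline")
    total = len(baseline) * len(baseline[0])
    changed = sum(
        1
        for row_b, row_c in zip(baseline, current)
        for px_b, px_c in zip(row_b, row_c)
        # A pixel counts as changed if any channel deviates past tolerance.
        if any(abs(b - c) > tolerance for b, c in zip(px_b, px_c))
    )
    return changed / total

# 4x4 all-white baseline; the "current" capture has one regressed pixel.
baseline = [[(255, 255, 255)] * 4 for _ in range(4)]
current = [row[:] for row in baseline]
current[0][0] = (200, 0, 0)

print(diff_ratio(baseline, current))   # 1 of 16 pixels changed -> 0.0625
```

A small per-channel `tolerance` is a common way to absorb harmless anti-aliasing noise without masking real regressions.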

When to Use This Skill

This skill activates when you need to:

  • Detect unintended UI changes introduced by recent code modifications.
  • Verify the visual consistency of a web application across different browsers or environments.
  • Automate visual regression testing as part of a CI/CD pipeline.

Examples

Example 1: Verifying UI Changes After a Feature Update

User request: "Run a visual test on the homepage to check for any UI regressions after the latest feature update."

The skill will:

  1. Capture a screenshot of the homepage.
  2. Compare the screenshot against the baseline image of the homepage.
  3. Report any visual differences detected, highlighting potential UI regressions.
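
The reporting step can be sketched as turning a measured diff ratio into a verdict. The 1% threshold and message format below are illustrative assumptions, not tool defaults:

```python
def summarize(page, diff_ratio, threshold=0.01):
    """Format a pass/fail verdict for a page's visual diff."""
    status = "PASS" if diff_ratio <= threshold else "FAIL"
    return (f"{status} {page}: {diff_ratio:.2%} of pixels changed "
            f"(threshold {threshold:.0%})")

print(summarize("homepage", 0.0))     # unchanged page passes
print(summarize("homepage", 0.034))   # 3.4% drift flags a regression
```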

Example 2: Checking Visual Consistency Across Browsers

User request: "Perform a visual regression test on the product details page to ensure it renders correctly in Chrome and Firefox."

The skill will:

  1. Capture screenshots of the product details page in both Chrome and Firefox.
  2. Compare the screenshots against the respective baseline images for each browser.
  3. Identify and report any visual inconsistencies detected between the browsers.
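
Cross-browser runs keep one baseline per browser. The directory layout and browser names below are hypothetical, chosen only to illustrate the bookkeeping; real tools manage baselines for you:

```python
# Hypothetical baseline layout: one image per (page, browser) pair.
BASELINES = {
    ("product-details", "chrome"): "baselines/chrome/product-details.png",
    ("product-details", "firefox"): "baselines/firefox/product-details.png",
}

def plan_comparisons(page, browsers):
    """Pair each requested browser with the baseline image to compare against."""
    missing = [b for b in browsers if (page, b) not in BASELINES]
    if missing:
        raise KeyError(f"no baseline for {page} in: {', '.join(missing)}")
    return [(b, BASELINES[(page, b)]) for b in browsers]

for browser, baseline_path in plan_comparisons("product-details",
                                               ["chrome", "firefox"]):
    print(browser, "->", baseline_path)
```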

Best Practices

  • Configuration: Ensure the visual testing tool is properly configured with the correct API keys and project settings.
  • Baselines: Maintain accurate and up-to-date baseline images to avoid false positives.
  • Viewport Sizes: Define appropriate viewport sizes to cover different screen resolutions and devices.
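
With Percy, for instance, viewport widths can be pinned in a `.percy.yml` at the project root. A sketch following Percy's v2 config schema (the widths themselves are placeholders — match them to your breakpoints):

```yaml
version: 2
snapshot:
  widths: [375, 768, 1280]   # mobile, tablet, desktop breakpoints (examples)
  min-height: 1024
```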

Integration

This skill can be integrated with other Claude Code plugins to automate end-to-end testing workflows. For example, it can be combined with a testing plugin to run visual tests after functional tests have passed.
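
As a sketch of that ordering in CI, a GitHub Actions job might gate the visual run on the functional tests. Step commands and script names here are assumptions about your project layout; `PERCY_TOKEN` must exist in the repository secrets:

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: npm ci
      - run: npm test    # functional tests must pass first
      - run: npx percy exec -- npm run test:visual
        env:
          PERCY_TOKEN: ${{ secrets.PERCY_TOKEN }}
```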

Quick Install

/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/visual-regression-tester

Copy and paste this command into Claude Code to install this skill.

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: backups/skills-migration-20251108-070147/plugins/testing/visual-regression-tester/skills/visual-regression-tester
Tags: ai, automation, claude-code, devops, marketplace, mcp

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
