MCP Hub

csv-data-wrangler

majiayu000
Updated 2 days ago

About

This Claude skill specializes in high-performance CSV processing and data cleaning using Python, DuckDB, and command-line tools. It excels at processing large files efficiently, resolving encoding issues, and transforming datasets. Use it for tasks such as cleaning data, merging files, and querying CSV with SQL.

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/majiayu000/claude-skill-registry
Git clone (alternative)
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/csv-data-wrangler

Copy and paste the command into Claude Code to install the skill.

Documentation

CSV Data Wrangler

Purpose

Provides expertise in efficient CSV file processing, data cleaning, and transformation. Handles large files, encoding issues, malformed data, and performance optimization for tabular data workflows.

When to Use

  • Processing large CSV files efficiently
  • Cleaning and validating CSV data
  • Transforming and reshaping datasets
  • Handling encoding and delimiter issues
  • Merging or splitting CSV files
  • Converting between tabular formats
  • Querying CSV with SQL (DuckDB)

Quick Start

Invoke this skill for the use cases listed under When to Use above.

Do NOT invoke when:

  • Building Excel files with formatting (use xlsx-skill)
  • Statistical analysis of data (use data-analyst)
  • Building data pipelines (use data-engineer)
  • Database operations (use sql-pro)

Decision Framework

Tool Selection by File Size:
├── < 100MB → pandas
├── 100MB - 1GB → pandas with chunking or polars
├── 1GB - 10GB → DuckDB or polars
├── > 10GB → DuckDB, Spark, or streaming
└── Quick exploration → csvkit or xsv CLI

Processing Type:
├── SQL-like queries → DuckDB
├── Complex transforms → pandas/polars
├── Simple filtering → csvkit/xsv
└── Streaming → Python csv module
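
A minimal profiling sketch to inform this choice, assuming a placeholder path data.csv (chardet is a third-party package, pip install chardet):

import csv
import os

import chardet

def profile_csv(path, sample_bytes=64 * 1024):
    """Return file size, a best-guess encoding, and the sniffed delimiter."""
    size_mb = os.path.getsize(path) / 1_000_000

    with open(path, "rb") as f:
        raw = f.read(sample_bytes)
    encoding = chardet.detect(raw)["encoding"] or "utf-8"

    # Sniff the delimiter from a decoded sample.
    sample = raw.decode(encoding, errors="replace")
    dialect = csv.Sniffer().sniff(sample)

    return {"size_mb": size_mb, "encoding": encoding, "delimiter": dialect.delimiter}

print(profile_csv("data.csv"))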

Core Workflows

1. Large CSV Processing

  1. Profile file (size, encoding, delimiter)
  2. Choose appropriate tool for scale
  3. Process in chunks if memory-constrained
  4. Handle encoding issues (UTF-8, Latin-1)
  5. Validate data types per column
  6. Write output with proper quoting
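
A minimal sketch of this workflow with pandas chunking; the file transactions.csv and its amount column are hypothetical:

import pandas as pd

# Read in 100k-row chunks with an explicit encoding.
reader = pd.read_csv(
    "transactions.csv",
    encoding="utf-8",       # always state the encoding explicitly
    chunksize=100_000,      # tune to available memory
)

total = 0
with open("transactions_clean.csv", "w", newline="", encoding="utf-8") as out:
    for i, chunk in enumerate(reader):
        chunk = chunk.dropna(subset=["amount"])          # hypothetical column
        chunk.to_csv(out, index=False, header=(i == 0))  # write header once
        total += len(chunk)

print(f"wrote {total} rows")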

2. Data Cleaning Pipeline

  1. Load sample to understand structure
  2. Identify missing and malformed values
  3. Define cleaning rules per column
  4. Apply transformations
  5. Validate output quality
  6. Log cleaning statistics
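
A minimal cleaning sketch with pandas; the columns email and age and the rules applied to them are illustrative, not a fixed recipe:

import pandas as pd

df = pd.read_csv("users.csv", encoding="utf-8")  # hypothetical input
before = len(df)

# Per-column cleaning rules.
df["email"] = df["email"].str.strip().str.lower()
df["age"] = pd.to_numeric(df["age"], errors="coerce")  # malformed -> NaN

# Validate, write to a NEW file, and log what was dropped.
cleaned = df.dropna(subset=["email", "age"])
cleaned.to_csv("users_clean.csv", index=False)
print(f"kept {len(cleaned)}/{before} rows; dropped {before - len(cleaned)}")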

3. CSV Query with DuckDB

  1. Point DuckDB at CSV file(s)
  2. Let DuckDB infer schema
  3. Write SQL queries directly
  4. Export results to new CSV
  5. Optionally persist as Parquet
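
A minimal sketch with the duckdb Python package; sales.csv and its columns are hypothetical:

import duckdb

con = duckdb.connect()  # in-memory database

# DuckDB infers the schema; read_csv_auto also detects the delimiter.
summary = con.execute("""
    SELECT category, COUNT(*) AS n, AVG(amount) AS avg_amount
    FROM read_csv_auto('sales.csv')
    GROUP BY category
    ORDER BY n DESC
""").fetchdf()

summary.to_csv("summary.csv", index=False)  # export results to a new CSV

# Optionally persist the raw data as Parquet for faster re-reads.
con.execute(
    "COPY (SELECT * FROM read_csv_auto('sales.csv')) "
    "TO 'sales.parquet' (FORMAT PARQUET)"
)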

Best Practices

  • Always specify encoding explicitly
  • Use chunked reading for large files
  • Profile before choosing tools
  • Preserve original files, write to new
  • Validate row counts before/after
  • Handle quoted fields and escapes properly
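
For example, a sketch of the before/after row-count check; the paths are placeholders, and the csv module is used so quoted embedded newlines count as one row:

import csv

def count_rows(path):
    with open(path, newline="", encoding="utf-8") as f:
        reader = csv.reader(f)
        next(reader, None)  # skip header
        return sum(1 for _ in reader)

rows_in = count_rows("input.csv")
rows_out = count_rows("output.csv")
print(f"in={rows_in}, out={rows_out}, dropped={rows_in - rows_out}")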

Anti-Patterns

Anti-Pattern             | Problem                 | Correct Approach
Loading all to memory    | OOM on large files      | Use chunking or streaming
Guessing encoding        | Corrupted characters    | Detect with chardet first
Ignoring quoting         | Broken field parsing    | Use proper CSV parser
No validation            | Silent data corruption  | Validate row/column counts
Manual string splitting  | Breaks on edge cases    | Use csv module or pandas
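
A tiny self-contained comparison showing why manual splitting breaks (the one-row example is hypothetical):

import csv
import io

data = 'id,note\n1,"hello, world"\n'

# Naive split breaks on the quoted comma:
print(data.splitlines()[1].split(","))         # ['1', '"hello', ' world"']

# A real CSV parser handles the quoting correctly:
print(list(csv.reader(io.StringIO(data)))[1])  # ['1', 'hello, world']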

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/csv-data-wrangler-skill

Related Skills

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.

View skill

polymarket

Meta

This skill enables developers to build applications with the Polymarket prediction markets platform, including API integration for trading and market data. It also provides real-time data streaming via WebSocket to monitor live trades and market activity. Use it for implementing trading strategies or creating tools that process live market updates.

View skill

hybrid-cloud-networking

Meta

This skill configures secure hybrid cloud networking between on-premises infrastructure and cloud platforms like AWS, Azure, and GCP. Use it when connecting data centers to the cloud, building hybrid architectures, or implementing secure cross-premises connectivity. It supports key capabilities such as VPNs and dedicated connections like AWS Direct Connect for high-performance, reliable setups.

View skill

llamaindex

Meta

LlamaIndex is a data framework for building RAG-powered LLM applications, specializing in document ingestion, indexing, and querying. It provides key features like vector indices, query engines, and agents, and supports over 300 data connectors. Use it for document Q&A, chatbots, and knowledge retrieval when building data-centric applications.

View skill