MCP Hub

sympy

K-Dense-AI
Updated: Today
Meta

About

Use this skill when working with symbolic mathematics in Python. It applies to symbolic computation tasks, including solving equations algebraically, calculus operations (differentiation, integration, limits), manipulating algebraic expressions, symbolic matrix operations, physics calculations, number theory problems, geometry calculations, and generating executable code from mathematical expressions. Apply this skill when the user needs exact symbolic results rather than numerical approximations, or when working with expressions that contain variables or parameters.
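A minimal sketch of the kind of exact symbolic work described above, using the standard `sympy` API (equation solving, calculus, and code generation; the specific expressions are illustrative, not part of the skill itself):

```python
import sympy as sp

x = sp.symbols('x')

# Exact roots of x^2 - 4 = 0 (symbolic integers, not floating-point approximations)
roots = sp.solve(sp.Eq(x**2 - 4, 0), x)          # [-2, 2]

# Calculus: derivative, antiderivative, and a limit
deriv = sp.diff(sp.sin(x) * sp.exp(x), x)        # exp(x)*sin(x) + exp(x)*cos(x)
antideriv = sp.integrate(sp.cos(x), x)           # sin(x)
lim = sp.limit(sp.sin(x) / x, x, 0)              # 1

# Generate an executable numeric function from a symbolic expression
f = sp.lambdify(x, x**2 + 1)
print(f(3))                                      # 10
```

Every result above is exact: `roots` holds the integers -2 and 2, and `lim` is the symbolic value 1, with no floating-point rounding involved.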

Quick Install

Claude Code

Plugin command (recommended)
/plugin add https://github.com/K-Dense-AI/claude-scientific-skills
Git clone (alternative)
git clone https://github.com/K-Dense-AI/claude-scientific-skills.git ~/.claude/skills/sympy

Copy and paste this command into Claude Code to install the skill.

GitHub Repository

K-Dense-AI/claude-scientific-skills
Path: scientific-packages/sympy
ai-scientist, bioinformatics, chemoinformatics, claude, claude-skills, claudecode

Related Skills

evaluating-llms-harness

Test

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill

cloudflare-turnstile

Meta

This skill provides comprehensive guidance for implementing Cloudflare Turnstile as a CAPTCHA-alternative bot protection system. It covers integration for forms, login pages, API endpoints, and frameworks like React/Next.js/Hono, while handling invisible challenges that maintain user experience. Use it when migrating from reCAPTCHA, debugging error codes, or implementing token validation and E2E tests.

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

View skill