MCP Hub

cursor-ai-chat

majiayu000
Updated 2 days ago
View on GitHub
Tags: development, ai

About

This skill is triggered by phrases such as "cursor chat" or "ask cursor" and helps developers learn how to use the Cursor AI chat interface for code assistance. It covers crafting effective prompts, managing context with @-mentions, and techniques for getting the best AI responses. Use it when you want to improve code-related questions and conversations while working in Cursor.

Quick Install

Claude Code

Recommended
Plugin command (recommended)
/plugin add https://github.com/majiayu000/claude-skill-registry
Git clone (alternative)
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/cursor-ai-chat

Copy and paste this command into Claude Code to install the skill.

Documentation

Cursor AI Chat

Overview

This skill helps you master the Cursor AI chat interface for code assistance. It covers effective prompting patterns, context management with @-mentions, model selection, and techniques for getting the best responses from AI.
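
For example, a focused prompt pairs an @-mention for file context with one specific question (the file path below is hypothetical, and the exact mention picker may vary by Cursor version):

@src/auth/session.ts Why does this middleware reject requests that still carry a valid session cookie? Suggest a minimal fix and explain the change.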

Prerequisites

  • Cursor IDE installed and authenticated
  • Project workspace with code files
  • Understanding of @-mention syntax
  • Basic familiarity with AI prompting

Instructions

  1. Open the AI Chat panel (Cmd+L on macOS, Ctrl+L on Windows/Linux)
  2. Select relevant code before asking questions
  3. Use @-mentions to add file context
  4. Ask specific, clear questions
  5. Review and apply suggested code
  6. Use multi-turn conversations for iterative work

Output

  • Code explanations and documentation
  • Generated code snippets
  • Debugging assistance
  • Refactoring suggestions
  • Code review feedback

Error Handling

See {baseDir}/references/errors.md for comprehensive error handling.

Examples

See {baseDir}/references/examples.md for detailed examples.

Resources

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/cursor-ai-chat

Related Skills

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.
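
As a rough sketch of how the harness is typically driven from Python (the checkpoint name, task list, and keyword arguments below are assumptions and may differ across harness versions):

import lm_eval

# Evaluate a HuggingFace model on two benchmark tasks from the harness registry.
# "pretrained=..." picks the checkpoint; the returned dict holds per-task metrics.
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=EleutherAI/pythia-160m",
    tasks=["gsm8k", "mmlu"],
    num_fewshot=5,
    batch_size=8,
)
print(results["results"])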

View skill

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.
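
For illustration, a minimal sketch of SGLang's Python frontend, assuming a server is already running locally on port 30000 (the prompt, variable names, and port are placeholders):

import sglang as sgl

# Point the frontend at a running SGLang server (launched separately).
sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))

@sgl.function
def answer(s, question):
    # Multi-turn style program; repeated prefixes benefit from RadixAttention caching.
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("reply", max_tokens=128))

state = answer.run(question="Summarize what prefix caching does.")
print(state["reply"])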

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
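
As a small sketch of the kind of chain LangChain composes (assumes the langchain-core and langchain-openai packages plus an API key; the model name is a placeholder):

from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI

# A prompt template piped into a chat model forms a minimal runnable chain.
prompt = ChatPromptTemplate.from_template("Answer in one sentence: {question}")
llm = ChatOpenAI(model="gpt-4o-mini")
chain = prompt | llm

result = chain.invoke({"question": "What is retrieval-augmented generation?"})
print(result.content)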

View skill

cloudflare-turnstile

Meta

This skill provides comprehensive guidance for implementing Cloudflare Turnstile as a CAPTCHA-alternative bot protection system. It covers integration for forms, login pages, API endpoints, and frameworks like React/Next.js/Hono, while handling invisible challenges that maintain user experience. Use it when migrating from reCAPTCHA, debugging error codes, or implementing token validation and E2E tests.
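
As an illustrative sketch of the server-side half, token validation is a POST to Cloudflare's siteverify endpoint (the helper name and environment-variable handling here are assumptions):

import os
import requests

def verify_turnstile_token(token: str, remote_ip: str | None = None) -> bool:
    # Cloudflare responds with {"success": true/false, ...} for the token the widget issued.
    payload = {"secret": os.environ["TURNSTILE_SECRET_KEY"], "response": token}
    if remote_ip:
        payload["remoteip"] = remote_ip
    resp = requests.post(
        "https://challenges.cloudflare.com/turnstile/v0/siteverify",
        data=payload,
        timeout=5,
    )
    return resp.json().get("success", False)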

View skill