continuous-learning

majiayu000
Updated Today
Meta · ai · automation

About

The continuous-learning skill automatically analyzes Claude Code sessions to identify reusable patterns and save them as learned skills for future use. It runs as a Stop hook after each session, extracting useful techniques such as error resolutions and debugging methods. Enable this skill to build a personalized knowledge base from your coding interactions over time.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/majiayu000/claude-skill-registry
Git Clone (Alternative)
git clone https://github.com/majiayu000/claude-skill-registry.git ~/.claude/skills/continuous-learning

Copy and paste one of the commands above into Claude Code to install this skill

Documentation

Continuous Learning Skill

Automatically evaluates the content of each Claude Code session when it ends, extracting reusable patterns and saving them as learned skills.

How It Works

This skill runs as a Stop hook at the end of each session (a hypothetical sketch of such a script follows this list):

  1. Session evaluation: check whether the session has enough messages (default: 10+)
  2. Pattern detection: identify extractable patterns from the session
  3. Skill extraction: save useful patterns to ~/.claude/skills/learned/
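The repository ships an evaluate-session.sh for this; its internals are not reproduced on this page, but the sketch below shows roughly what such a Stop-hook evaluator could look like. It assumes the hook receives a JSON payload on stdin containing a transcript_path field pointing at a JSONL transcript (one message per line), and that jq is installed; treat those field names and formats as assumptions to verify against your Claude Code version.

#!/usr/bin/env bash
# Hypothetical sketch only -- not the published evaluate-session.sh.
set -euo pipefail

CONFIG="$HOME/.claude/skills/continuous-learning/config.json"
LEARNED_DIR="$HOME/.claude/skills/learned"

# Read the Stop-hook payload and locate the session transcript (assumed field name).
payload="$(cat)"
transcript="$(printf '%s' "$payload" | jq -r '.transcript_path // empty')"
[ -f "$transcript" ] || exit 0

# 1. Session evaluation: skip sessions below the configured message threshold.
min_len="$(jq -r '.min_session_length // 10' "$CONFIG" 2>/dev/null || echo 10)"
msg_count="$(wc -l < "$transcript" | tr -d '[:space:]')"
[ "$msg_count" -ge "$min_len" ] || exit 0

# 2./3. Pattern detection and skill extraction would go here, e.g. prompting a
# model over the transcript and writing any extracted skills into $LEARNED_DIR.
mkdir -p "$LEARNED_DIR"
echo "continuous-learning: $msg_count messages, queuing pattern extraction" >&2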

Configuration

Edit config.json to customize:

{
  "min_session_length": 10,
  "extraction_threshold": "medium",
  "auto_approve": false,
  "learned_skills_path": "~/.claude/skills/learned/",
  "patterns_to_detect": [
    "error_resolution",
    "user_corrections",
    "workarounds",
    "debugging_techniques",
    "project_specific"
  ],
  "ignore_patterns": [
    "simple_typos",
    "one_time_fixes",
    "external_api_issues"
  ]
}
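Since the config is plain JSON, a quick sanity check from the shell (assuming jq is installed and the install path from the clone step above) might look like this minimal sketch:

CONFIG="$HOME/.claude/skills/continuous-learning/config.json"

jq . "$CONFIG"                           # validate the JSON after editing
jq -r '.min_session_length' "$CONFIG"    # current message threshold (10)
jq -r '.patterns_to_detect[]' "$CONFIG"  # one pattern type per line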

Pattern Types

Pattern                 Description
error_resolution        How a specific error was resolved
user_corrections        Patterns drawn from user corrections
workarounds             Workarounds for framework/library quirks
debugging_techniques    Debugging approaches that proved effective
project_specific        Project-specific conventions

Hook Setup

Add this to your ~/.claude/settings.json:

{
  "hooks": {
    "Stop": [{
      "matcher": "*",
      "hooks": [{
        "type": "command",
        "command": "~/.claude/skills/continuous-learning/evaluate-session.sh"
      }]
    }]
  }
}
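Before depending on the hook, it can help to smoke-test the script by hand. The payload below (a JSON object with a transcript_path field) is an assumed shape for the Stop hook's stdin input, not something documented on this page; adjust it to match what your Claude Code version actually passes.

# Make the script executable and build a fake 12-message JSONL transcript.
chmod +x ~/.claude/skills/continuous-learning/evaluate-session.sh
seq 12 | sed 's/.*/{"role":"user","content":"msg &"}/' > /tmp/fake-transcript.jsonl

# Invoke the hook the way Claude Code would: JSON payload on stdin.
echo '{"transcript_path": "/tmp/fake-transcript.jsonl"}' \
  | ~/.claude/skills/continuous-learning/evaluate-session.sh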

Why a Stop Hook?

  • Lightweight: runs only once, when the session ends
  • Non-blocking: adds no per-message latency
  • Full context: has access to the complete session transcript

Related

  • Longform Guide - Continuous Learning chapter
  • /learn command - manually extract patterns mid-session

Comparison Notes (research: January 2025)

vs Homunculus (github.com/humanplane/homunculus)

Homunculus v2 takes a more elaborate approach:

Feature        Our approach               Homunculus v2
Observation    Stop hook (session end)    PreToolUse/PostToolUse hooks (100% reliable)
Analysis       Main context               Background agent (Haiku)
Granularity    Full skills                Atomic "instincts"
Confidence                                0.3-0.9 weighted
Evolution      Straight to skills         Instincts → clustering → skills/commands/agents
Sharing        Export/import              Instincts

Key insight from homunculus:

"v1 relied on skills for observation. Skills are probabilistic: they trigger roughly 50-80% of the time. v2 uses hooks for observation (100% reliable), with instincts as the atomic unit of learned behavior."

Potential v2 Enhancements

  1. Instinct-based learning - smaller atomic behaviors with confidence scores
  2. Background observer - a parallel Haiku agent that analyzes sessions
  3. Confidence decay - an instinct loses confidence when it is contradicted (see the sketch after this list)
  4. Domain tagging - code-style, testing, git, debugging, and so on
  5. Evolution path - cluster related instincts into skills/commands
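As a rough illustration of item 3, confidence decay could be as simple as scaling a stored score down whenever an instinct is contradicted. The instincts.json layout assumed below (an array of objects with id and confidence fields) is purely hypothetical; the current skill stores whole skills, not instincts.

# Hypothetical confidence decay for a contradicted instinct.
INSTINCTS="$HOME/.claude/skills/learned/instincts.json"
CONTRADICTED_ID="prefer-early-returns"   # example instinct id
DECAY=0.8                                # keep 80% of the prior confidence

jq --arg id "$CONTRADICTED_ID" --argjson decay "$DECAY" \
   'map(if .id == $id then .confidence *= $decay else . end)' \
   "$INSTINCTS" > "$INSTINCTS.tmp" && mv "$INSTINCTS.tmp" "$INSTINCTS"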

See /Users/affoon/Documents/tasks/12-continuous-learning-v2.md for the full specification.

GitHub Repository

majiayu000/claude-skill-registry
Path: skills/continuous-learning

Related Skills

content-collections

Meta

This skill provides a production-tested setup for Content Collections, a TypeScript-first tool that transforms Markdown/MDX files into type-safe data collections with Zod validation. Use it when building blogs, documentation sites, or content-heavy Vite + React applications to ensure type safety and automatic content validation. It covers everything from Vite plugin configuration and MDX compilation to deployment optimization and schema validation.


sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
