always-init

danielmiessler
Categories: Other, AI, Automation

About

The always-init skill automatically loads complete PAI context for every user request, ensuring contacts, preferences, and security protocols are available before processing tasks. It must be invoked first for all user messages as a P0 priority requirement. This universal initializer guarantees full context availability across all subsequent operations.

Quick Install

Claude Code

Plugin Command (Recommended):
/plugin add https://github.com/danielmiessler/PAIPlugin

Git Clone (Alternative):
git clone https://github.com/danielmiessler/PAIPlugin.git ~/.claude/skills/always-init

Copy and paste one of these commands in Claude Code to install this skill.

Documentation

ALWAYS-INIT - Universal Task Initializer


⚠️ CRITICAL SYSTEM REQUIREMENT ⚠️

AUTOMATIC INVOCATION REQUIRED: This skill MUST be automatically invoked as the FIRST action for EVERY user request without exception.

Trigger Pattern: ALL user messages/requests
Priority Level: P0 (Highest - blocks all other operations)
Automation Rule: if (userRequest) → invoke("always-init") → then processRequest()


🎯 Purpose

Universal context loader providing complete PAI (Personal AI Infrastructure):

  • 👥 Contacts: Alice, Bob, Carol, Dave, team members
  • 🛠️ Stack Preferences: TypeScript>Python, bun>npm, uv>pip
  • 🔒 Security Protocols: Repository safety, credential handling
  • 📝 Response Format: Structured output standards
  • 🎙️ Voice Routing: ElevenLabs voice IDs for agents
  • ⚙️ Operating Instructions: Personal preferences and workflows
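The categories above can be pictured as a single context object handed to every downstream operation. The sketch below is illustrative only: the field names and shape are assumptions, not the actual PAI schema (which lives in ~/.claude/skills/PAI/SKILL.md).

```javascript
// Hypothetical PAI context shape — field names are illustrative,
// not the real schema defined in SKILL.md.
const paiContext = {
  contacts: ["Alice", "Bob", "Carol", "Dave"],
  stackPreferences: {
    language: "TypeScript",  // preferred over Python
    packageManager: "bun",   // preferred over npm
    pythonTooling: "uv",     // preferred over pip
  },
  security: { repositorySafety: true, credentialHandling: "never-log" },
  responseFormat: "structured",
  voiceRouting: {},          // ElevenLabs voice IDs keyed by agent
};

// Example consumer: pick a tool according to loaded stack preferences
function preferredTool(context, key, preferred, fallback) {
  return context.stackPreferences[key] === preferred ? preferred : fallback;
}

console.log(preferredTool(paiContext, "packageManager", "bun", "npm")); // "bun"
```

Without the loaded context, a consumer like `preferredTool` would have to fall back to defaults — which is exactly the "wrong technical stacks" failure mode described below.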

❌ Consequences of Skipping

Without ALWAYS-INIT context loading:

  • Responses lack relationship/contact awareness
  • Wrong technical stacks used (npm instead of bun, pip instead of uv)
  • Security protocols violated
  • Incorrect response formatting
  • Missing personalization
  • Context-dependent decisions fail
  • Agent routing fails

🔄 Implementation Protocol

For AI Assistant (Claude):

1. Receive user request
2. ⚡ IMMEDIATELY: Skill("always-init")  ← THIS STEP
3. Wait for PAI context to load
4. THEN proceed with request processing

For Skill System (Automation):

// Automatic prepend to request pipeline
async function handleUserRequest(request) {
  await invokeSkill("always-init");  // MANDATORY FIRST STEP
  const paiContext = loadContext();  // PAI context is now available
  return processWithContext(request, paiContext);
}

🎯 Success Criteria

This skill is working correctly when:

  • ✅ Invoked before ANY tool use
  • ✅ Invoked before ANY other skill
  • ✅ Invoked before ANY response formulation
  • ✅ Invoked for 100% of user requests
  • ✅ PAI context available in all subsequent operations

📍 Context Loading

Execute: read ~/.claude/skills/PAI/SKILL.md

This provides the complete Personal AI Infrastructure context that all skills and responses depend on.

Once PAI context is loaded, proceed immediately with the user's actual request.

🏗️ Design Philosophy

This skill implements a "context-first" architecture where PAI loads universally rather than being distributed across individual skills. It acts as a bootstrap loader that:

  1. Triggers on every user interaction
  2. Loads PAI context once
  3. Gets out of the way
  4. Allows the actual task to proceed

This eliminates the need for individual skills to manually load PAI context and ensures consistent, complete context availability across all operations.
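The "load once, then get out of the way" behavior is essentially memoization. A minimal sketch, assuming a hypothetical `loadContext()` standing in for the expensive SKILL.md read (names are illustrative):

```javascript
let cachedContext = null;

// Hypothetical expensive loader; in practice this reads SKILL.md
function loadContext() {
  return { source: "PAI", loadedAt: Date.now() };
}

// Bootstrap wrapper: loads PAI context on the first call,
// then returns the cached object and gets out of the way
function ensurePaiContext() {
  if (cachedContext === null) {
    cachedContext = loadContext(); // runs once per session
  }
  return cachedContext;
}

const first = ensurePaiContext();
const second = ensurePaiContext();
console.log(first === second); // true: context loaded exactly once
```

Individual skills can then call `ensurePaiContext()` freely without each paying the loading cost, which is the consistency guarantee this design aims for.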


💡 Implementation Note:

Ideally, this skill should be hardcoded into the request handler rather than relying on manual invocation. The skill system should automatically prepend this to every request pipeline.

Alternative Approach: Add to system prompt: "Before responding to ANY user request, you MUST first invoke the always-init skill to load PAI context."

GitHub Repository

danielmiessler/PAIPlugin
Path: skills/always-init

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
