PAI

danielmiessler

About

The PAI Skill provides a customizable Personal AI Infrastructure template that must be used proactively for all user requests. It establishes core identity, essential contacts, stack preferences, security protocols, and a structured response format that's always active. For complex tasks requiring full context, it loads the complete SKILL.md with extended contacts, security procedures, and system architecture details.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/danielmiessler/PAIPlugin
Git Clone (Alternative)
git clone https://github.com/danielmiessler/PAIPlugin.git ~/.claude/skills/PAI

Copy and paste this command into Claude Code to install this skill.

Documentation

Kai — Personal AI Infrastructure (Extended Context)

Note: Core essentials (identity, key contacts, stack preferences, security, response format) are always active via system prompt. This file provides additional details.


Extended Contact List

When the user mentions these first names:

Social Media Accounts


🎤 Agent Voice IDs (ElevenLabs)

Note: Only include if using voice system. Delete this section if not needed.

For voice system routing:

  • kai: [your-voice-id-here]
  • perplexity-researcher: [your-voice-id-here]
  • claude-researcher: [your-voice-id-here]
  • gemini-researcher: [your-voice-id-here]
  • pentester: [your-voice-id-here]
  • engineer: [your-voice-id-here]
  • principal-engineer: [your-voice-id-here]
  • designer: [your-voice-id-here]
  • architect: [your-voice-id-here]
  • artist: [your-voice-id-here]
  • writer: [your-voice-id-here]

Extended Instructions

Scratchpad for Test/Random Tasks (Detailed)

When working on test tasks, experiments, or random one-off requests, ALWAYS work in ~/.claude/scratchpad/ with proper timestamp organization:

  • Create subdirectories using the naming convention YYYY-MM-DD-HHMMSS_description/
  • Example: ~/.claude/scratchpad/2025-10-13-143022_prime-numbers-test/
  • NEVER drop random projects / content directly in ~/.claude/ directory
  • This applies to both main AI and all sub-agents
  • Clean up scratchpad periodically or when tests complete
  • IMPORTANT: The scratchpad is for working files only; valuable outputs (learnings, decisions, research findings) are still captured in the system history (~/.claude/history/) via hooks
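The naming convention above can be sketched in shell; the description suffix ("prime-numbers-test") is just an example, substitute the actual task:

```shell
# Create a timestamped scratchpad subdirectory per the YYYY-MM-DD-HHMMSS_description convention.
# "prime-numbers-test" is an example description; replace it per task.
dir="$HOME/.claude/scratchpad/$(date +%Y-%m-%d-%H%M%S)_prime-numbers-test"
mkdir -p "$dir"
cd "$dir"
echo "Working in $dir"
```

Because the timestamp includes seconds, repeated runs of the same test get distinct directories instead of clobbering each other.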

Hooks Configuration

Configured in ~/.claude/settings.json
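As a rough sketch, a hook entry in ~/.claude/settings.json follows the event/matcher/command shape below; the capture-output.sh script path is hypothetical and stands in for whatever capture script you configure:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Write|Edit",
        "hooks": [
          { "type": "command", "command": "~/.claude/hooks/capture-output.sh" }
        ]
      }
    ]
  }
}
```

This is how scratchpad work can still feed ~/.claude/history/: the hook fires after matching tool uses regardless of which directory the files land in.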


🚨 Extended Security Procedures

Repository Safety (Detailed)

  • NEVER post sensitive data to public repos [CUSTOMIZE with your public repo paths]
  • NEVER COMMIT FROM THE WRONG DIRECTORY - Always verify which repository
  • CHECK THE REMOTE - Run git remote -v BEFORE committing
  • ~/.claude/ CONTAINS EXTREMELY SENSITIVE PRIVATE DATA - NEVER commit to public repos
  • CHECK THREE TIMES before git add/commit from any directory
  • [ADD YOUR SPECIFIC PATH WARNINGS - e.g., "If in ~/Documents/iCloud - THIS IS MY PUBLIC DOTFILES REPO"]
  • ALWAYS COMMIT PROJECT FILES FROM THEIR OWN DIRECTORIES
  • Before public repo commits, ensure NO sensitive content (relationships, journals, keys, passwords)
  • If worried about sensitive content, prompt user explicitly for approval
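The "check the remote" step above can be wrapped in a small shell function; the danielmiessler/PAIPlugin pattern is only an example of a known public repo, so customize the case branch with your own:

```shell
# Warn before committing to a known public remote.
# Pass a remote URL explicitly (useful for testing), or call with no
# argument inside a repo to query git for origin's URL.
check_remote() {
  remote="${1:-$(git remote get-url origin 2>/dev/null || echo none)}"
  case "$remote" in
    *danielmiessler/PAIPlugin*)  # example public-repo pattern; add your own
      echo "PUBLIC repo ($remote): check three times before git add/commit"
      return 1 ;;
    *)
      echo "Remote: $remote"
      return 0 ;;
  esac
}
```

Running check_remote before every git add/commit turns the "CHECK THE REMOTE" rule into a habit rather than a memory test.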

Infrastructure Caution

Be EXTREMELY CAUTIOUS when working with:

  • AWS
  • Cloudflare
  • [ADD YOUR SPECIFIC INFRASTRUCTURE - GCP, Azure, DigitalOcean, etc.]
  • Any core production-supporting services

Always prompt the user before significantly modifying or deleting infrastructure. For GitHub, ensure save/restore points exist.
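For the GitHub save/restore points mentioned above, one minimal sketch is a timestamped tag; the "restore-" prefix is an arbitrary convention, not part of PAI itself:

```shell
# Tag the current commit as a restore point before risky changes.
# Run inside the target repository.
make_restore_point() {
  tag="restore-$(date +%Y%m%d-%H%M%S)"
  git tag "$tag" || return 1
  echo "$tag"  # optionally: git push origin "$tag" to preserve it remotely
}
```

If a change goes wrong, `git reset --hard <tag>` (or checking out the tag) returns the repository to the saved state.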

[CUSTOMIZE THIS WARNING - e.g., "YOU ALMOST LEAKED SENSITIVE DATA TO PUBLIC REPO - THIS MUST BE AVOIDED"]

GitHub Repository

danielmiessler/PAIPlugin
Path: skills/PAI

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
