clawdis-nodes
About
This skill enables developers to discover and target specific Clawdis-paired devices (nodes) via CLI commands. It helps agents list available nodes, inspect their capabilities/permissions, and select the best target machine for actions. Use it when you need to reason about device availability and choose an appropriate node for canvas, camera, or system operations.
Documentation
Clawdis Nodes
Use the node system to target specific devices (macOS node mode, iOS, Android) for canvas/camera/screen/system actions. Use presence to infer which user machine is active, then pick the matching node.
Quick start
List known nodes and whether they are paired/connected:
clawdis nodes status
Inspect a specific node (commands, caps, permissions):
clawdis nodes describe --node <idOrNameOrIp>
Node discovery workflow (agent)
- List nodes with clawdis nodes status.
- Choose a target:
  - Prefer connected nodes with the capabilities you need.
  - Use perms (permissions map) to avoid asking for actions that will fail.
- Confirm commands with clawdis nodes describe --node ….
- Invoke actions via clawdis nodes … (camera, canvas, screen, system), as sketched below.
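A minimal end-to-end sketch of that workflow, using only commands documented on this page; the node name my-macbook and the notification text are hypothetical placeholders:
# 1. List nodes and note which are connected and what they expose
clawdis nodes status
# 2. Confirm commands, caps, and perms on the candidate (my-macbook is a made-up name)
clawdis nodes describe --node my-macbook
# 3. Invoke an action on the chosen node
clawdis nodes notify --node my-macbook --title "Ping" --body "Gateway ready"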
If no nodes are connected:
- Check pairing: clawdis nodes pending / clawdis nodes list.
- Ask the user to open/foreground the node app if the action requires it (canvas/camera/screen on iOS/Android).
Presence vs nodes (don’t confuse them)
Presence shows Gateway + connected clients (mac app, WebChat, CLI).
Nodes are paired devices that expose commands.
Use presence to infer where the user is active, then map that to a node:
clawdis gateway call system-presence
Heuristics:
- Pick the presence entry with the smallest lastInputSeconds (most active; sketched below).
- Match presence host/deviceFamily to a node displayName/deviceFamily.
- If multiple matches, ask for clarification or use nodes describe to choose.
Note: CLI connections (client.mode=cli) do not show up in presence.
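To automate the first heuristic, one possible sketch, assuming jq is installed and that system-presence returns a JSON array whose entries carry lastInputSeconds and host fields (the exact output shape is an assumption):
# Assumption: presence output is a JSON array with lastInputSeconds/host per entry
clawdis gateway call system-presence | jq -r 'sort_by(.lastInputSeconds) | .[0].host'
# Match the printed host against displayName/deviceFamily in:
clawdis nodes status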
Tailnet / Tailscale (optional context)
Node discovery is Gateway‑owned; Tailnet details only matter for reaching the Gateway:
- On LAN, the Gateway advertises a Bridge via Bonjour.
- Cross‑network, prefer Tailnet MagicDNS or Tailnet IP to reach the Gateway.
- Once connected, always target nodes by id/name/IP via the Gateway (not direct).
Pairing & approvals
List pairing requests:
clawdis nodes pending
Approve/reject:
clawdis nodes approve <requestId>
clawdis nodes reject <requestId>
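A typical pairing pass, assuming the request id is read manually from the pending output (its exact format is not specified here):
clawdis nodes pending              # note the <requestId> of the device you expect
clawdis nodes approve <requestId>  # or: clawdis nodes reject <requestId>
clawdis nodes status               # the device should now show up as paired/connected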
Typical agent usages
Send a notification to a specific Mac node:
clawdis nodes notify --node <idOrNameOrIp> --title "Ping" --body "Gateway ready"
Capture a node canvas snapshot:
clawdis nodes canvas snapshot --node <idOrNameOrIp> --format png
Troubleshooting
- NODE_BACKGROUND_UNAVAILABLE: the node app must be foregrounded (iOS/Android).
- Missing permissions in nodes status: ask the user to grant permissions in the node app.
- No connected nodes: ensure the Gateway is reachable; check tailnet/SSH config if remote (see the triage sketch below).
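One possible triage order, reusing only commands shown above; <idOrNameOrIp> is the placeholder used throughout this page:
clawdis nodes status                           # any nodes paired and connected at all?
clawdis nodes pending                          # is the device waiting for pairing approval?
clawdis nodes describe --node <idOrNameOrIp>   # are the needed caps/perms actually granted?
clawdis gateway call system-presence           # is the Gateway reachable and is anyone active?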
Quick Install
Copy and paste this command in Claude Code to install this skill:
/plugin add https://github.com/steipete/clawdis/tree/main/clawdis-nodes
