
web-cli-teleport

DNYoussef
Updated 3 days ago

About

This skill enables seamless switching between web-browser and command-line tasks while keeping state and credentials synchronized. It executes commands safely, with proper routing, safety constraints, and audit trails for reproducible workflows. Use it when you need to mirror actions between browser and terminal environments or fetch web artifacts for CLI processing.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/DNYoussef/context-cascade

Git Clone (Alternative)
git clone https://github.com/DNYoussef/context-cascade.git ~/.claude/skills/web-cli-teleport

Copy and paste one of these commands into Claude Code to install this skill.

Documentation

L1 Improvement

  • Reframed the teleport skill with Prompt Architect clarity and Skill Forge guardrails.
  • Added explicit routing, safety constraints, and memory tagging.
  • Clarified output expectations and confidence ceilings.

STANDARD OPERATING PROCEDURE

Purpose

Bridge web and CLI tasks safely—execute commands, capture outputs, and synchronize state while respecting permissions and auditability.

Trigger Conditions

  • Positive: need to mirror actions between browser and terminal, fetch artifacts, or reproduce web steps in CLI.
  • Negative: high-risk admin operations without approvals; route to platform specialists.

Guardrails

  • Structure-first docs maintained (SKILL, README, process diagram).
  • Respect credential boundaries; never store secrets in outputs.
  • Enforce safety prompts for destructive commands; prefer dry-runs first.
  • Confidence ceilings on inferred states; cite observed outputs.
  • Memory tagging for session actions.
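The dry-run guardrail above can be sketched as a simple command gate. This is a minimal sketch only: `DESTRUCTIVE_PREFIXES` and `gate_command` are illustrative names, not part of the skill's actual interface.

```python
import shlex

# Illustrative safety gate: the prefix list and function name are
# assumptions for this sketch, not the skill's real API.
DESTRUCTIVE_PREFIXES = {"rm", "dd", "mkfs", "truncate"}

def gate_command(cmd: str, approved: bool = False) -> str:
    """Return a safe command: risky commands are echoed as a dry-run
    until an explicit approval is recorded."""
    tokens = shlex.split(cmd)
    if tokens and tokens[0] in DESTRUCTIVE_PREFIXES and not approved:
        return f"echo DRY-RUN: {cmd}"
    return cmd

print(gate_command("rm -rf build/"))  # gated until approved
print(gate_command("ls -la"))         # read-only, passes through
```

The gate enforces "prefer dry-runs first": a destructive command is only executed verbatim once an approval flag is set.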

Execution Phases

  1. Intent & Scope – Define goal, environments, and constraints (read-only vs write, network limits).
  2. Context Sync – Capture current web state (URL, form data) and CLI state (cwd, env); note assumptions.
  3. Plan – Map steps across web/CLI; identify risky actions and mitigations.
  4. Execute – Perform actions with logging; use dry-run or safe flags; verify after each step.
  5. Validate – Confirm state convergence (files, configs, outputs); capture evidence.
  6. Deliver – Summarize actions, artifacts, and confidence line; store session memory.
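The phases above can be sketched as a minimal session record. The field names and the example action are hypothetical, chosen only to show how context capture and evidence logging fit together.

```python
import json
import os
import time

def snapshot_cli_state() -> dict:
    """Context Sync: capture minimal CLI state; never record secret values."""
    return {
        "cwd": os.getcwd(),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
    }

session = {
    "goal": "mirror a web download in the terminal",  # Intent & Scope
    "context": snapshot_cli_state(),                  # Context Sync
    "actions": [],                                    # Execute / Validate
}

def record(action: str, evidence: str) -> None:
    """Log each step together with the observed output as evidence."""
    session["actions"].append({"action": action, "evidence": evidence})

record("curl -O https://example.com/artifact.tgz", "HTTP 200, saved artifact.tgz")
print(json.dumps(session, indent=2))  # Deliver: summary of the session
```

Each `record` call pairs an action with its observed evidence, which is what the Validate and Deliver phases consume.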

Output Format

  • Goal, environments, actions taken (web + CLI) with evidence and timestamps.
  • Risks handled, remaining gaps, and next steps.
  • Memory namespace and confidence: X.XX (ceiling: TYPE Y.YY).
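As a sketch, the confidence line can be produced by clamping the reported score to its ceiling; the function name here is an assumption, not part of the skill.

```python
def confidence_line(score: float, ceiling_type: str, ceiling: float) -> str:
    """Format 'Confidence: X.XX (ceiling: TYPE Y.YY)', never exceeding the ceiling."""
    score = min(score, ceiling)
    return f"Confidence: {score:.2f} (ceiling: {ceiling_type} {ceiling:.2f})"

# An inferred state can never be reported above its ceiling:
print(confidence_line(0.85, "inference", 0.70))
```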

Validation Checklist

  • Permissions/credentials confirmed; secrets not logged.
  • Risky commands gated or dry-run first.
  • Web and CLI states reconciled; evidence captured.
  • Memory tagged; confidence ceiling declared.

Integration

  • Process: see web-cli-teleport-process.dot for flow.
  • Memory MCP: skills/tooling/web-cli-teleport/{project}/{timestamp} for session logs.
  • Hooks: follow Skill Forge latency bounds; abort on safety violations.
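The memory namespace above can be expanded per session as sketched below; the exact timestamp layout is an assumption, since the source only gives the `{project}/{timestamp}` placeholders.

```python
from datetime import datetime, timezone

def memory_namespace(project: str, when: datetime) -> str:
    """Build the Memory MCP namespace for a session log."""
    stamp = when.strftime("%Y%m%dT%H%M%SZ")  # timestamp layout is an assumption
    return f"skills/tooling/web-cli-teleport/{project}/{stamp}"

print(memory_namespace("demo", datetime(2025, 1, 2, 3, 4, 5, tzinfo=timezone.utc)))
```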

Confidence: 0.70 (ceiling: inference 0.70) – SOP aligned to Prompt Architect clarity and Skill Forge safeguards.

GitHub Repository

DNYoussef/context-cascade
Path: skills/tooling/web-cli-teleport

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.


evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.


llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.


langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.
