adk-deployment-specialist

jeremylongshore

About

This skill manages A2A (Agent-to-Agent) communication for orchestrating Vertex AI ADK agents, handling AgentCard discovery, task submission, and status polling. It enables developers to build and deploy multi-agent systems on Google's Agent Engine with capabilities for agent orchestration and session management. Key features include an observability dashboard, Cloud Trace integration, and BigQuery analytics export for production monitoring.

Documentation

What This Skill Does

Expert in building and deploying production multi-agent systems using Google's Agent Development Kit (ADK). Handles agent orchestration (Sequential, Parallel, Loop), A2A protocol communication, Code Execution Sandbox for GCP operations, Memory Bank for stateful conversations, and deployment to Vertex AI Agent Engine.

Core Capabilities

  1. ADK Agent Creation: Build agents in Python (stable), Java (0.3.0), or Go (Nov 2025)
  2. Multi-Agent Orchestration: Sequential/Parallel/Loop agent patterns
  3. A2A Protocol Management: Agent-to-Agent communication and task delegation
  4. Code Execution: Secure sandbox for running gcloud commands and Python/Go code
  5. Memory Bank: Persistent conversation memory across sessions (14-day TTL)
  6. Production Deployment: One-command deployment with adk deploy
  7. Observability: Agent Engine UI dashboards, token tracking, error monitoring

When This Skill Activates

Trigger Phrases

  • "Deploy ADK agent to Agent Engine"
  • "Create multi-agent system with ADK"
  • "Implement A2A protocol"
  • "Use Code Execution Sandbox"
  • "Set up Memory Bank for agent"
  • "Orchestrate multiple agents"
  • "Build ADK agent in Python/Java/Go"
  • "Deploy to Vertex AI Agent Engine"

Use Case Patterns

  • Building GCP deployment automation agents
  • Creating RAG agents with LangChain integration
  • Orchestrating Genkit flows with ADK supervisors
  • Implementing stateful conversational agents
  • Deploying secure code execution environments

How It Works

Phase 1: Agent Architecture Design

User Request → Analyze:
- Single agent vs multi-agent system?
- Tools needed (Code Exec, Memory Bank, custom tools)?
- Orchestration pattern (Sequential, Parallel, Loop)?
- Integration with LangChain/Genkit?
- Deployment target (local, Agent Engine, Cloud Run)?
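One way to record the answers to this checklist is a small configuration object (a sketch; the field names are illustrative and not part of the ADK API):

```python
from dataclasses import dataclass, field
from enum import Enum


class Orchestration(Enum):
    SEQUENTIAL = "sequential"
    PARALLEL = "parallel"
    LOOP = "loop"


@dataclass
class AgentDesign:
    """Captures the Phase 1 architecture decisions for an agent build."""
    multi_agent: bool = False
    tools: list[str] = field(default_factory=list)          # e.g. ["code_execution", "memory_bank"]
    orchestration: Orchestration = Orchestration.SEQUENTIAL
    integrations: list[str] = field(default_factory=list)   # e.g. ["langchain", "genkit"]
    target: str = "agent_engine"                            # "local" | "agent_engine" | "cloud_run"


design = AgentDesign(
    multi_agent=True,
    tools=["code_execution"],
    orchestration=Orchestration.SEQUENTIAL,
    target="agent_engine",
)
```

Writing the decisions down this way makes it easy to review them before Phase 2 and to reuse them when generating deployment commands.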

Phase 2: ADK Agent Implementation

Simple Agent (Python):

from google import adk

# Define agent with tools
agent = adk.Agent(
    model="gemini-2.5-flash",
    tools=[
        adk.tools.CodeExecution(),  # Secure sandbox
        adk.tools.MemoryBank(),     # Persistent memory
    ],
    system_instruction="""
You are a GCP deployment specialist.
Help users deploy resources securely using gcloud commands.
    """
)

# Run agent
response = agent.run("Deploy a GKE cluster named prod in us-central1")
print(response)

Multi-Agent Orchestrator (Python):

from google import adk

# Define specialized sub-agents
validator_agent = adk.Agent(
    model="gemini-2.5-flash",
    system_instruction="Validate GCP configurations"
)

deployer_agent = adk.Agent(
    model="gemini-2.5-flash",
    tools=[adk.tools.CodeExecution()],
    system_instruction="Deploy validated GCP resources"
)

monitor_agent = adk.Agent(
    model="gemini-2.5-flash",
    system_instruction="Monitor deployment status"
)

# Orchestrate with Sequential pattern
orchestrator = adk.SequentialAgent(
    agents=[validator_agent, deployer_agent, monitor_agent],
    system_instruction="Coordinate validation → deployment → monitoring"
)

result = orchestrator.run("Deploy a production GKE cluster")

Phase 3: Code Execution Integration

The Code Execution Sandbox provides:

  • Security: Isolated environment, no access to your system
  • State Persistence: 14-day memory, configurable TTL
  • Stateful Sessions: Builds on previous executions

# Agent with Code Execution
agent = adk.Agent(
    model="gemini-2.5-flash",
    tools=[adk.tools.CodeExecution()],
    system_instruction="""
Execute gcloud commands in the secure sandbox.
Remember previous operations in this session.
    """
)

# Turn 1: Create cluster
agent.run("Create GKE cluster named dev-cluster with 3 nodes")
# Sandbox executes: gcloud container clusters create dev-cluster --num-nodes=3

# Turn 2: Deploy to cluster (remembers cluster from Turn 1)
agent.run("Deploy my-app:latest to that cluster")
# Sandbox remembers dev-cluster, executes kubectl commands

Phase 4: Memory Bank Integration

Persistent conversation memory across sessions:

agent = adk.Agent(
    model="gemini-2.5-flash",
    tools=[adk.tools.MemoryBank()],
    system_instruction="Remember user preferences and project context"
)

# Session 1 (Monday)
agent.run("I prefer deploying to us-central1 region", session_id="user-123")

# Session 2 (Wednesday) - same session_id
agent.run("Deploy a Cloud Run service", session_id="user-123")
# Agent remembers: uses us-central1 automatically

Phase 5: A2A Protocol Deployment

Deploy agent to Agent Engine with A2A endpoint:

# Install ADK
pip install google-adk

# Deploy with one command
adk deploy \
  --agent-file agent.py \
  --project-id my-project \
  --region us-central1 \
  --service-name gcp-deployer-agent

Agent Engine creates:

  • A2A Endpoint: https://gcp-deployer-agent-{hash}.run.app
  • AgentCard: /.well-known/agent-card metadata
  • Task API: /v1/tasks:send for task submission
  • Status API: /v1/tasks/{task_id} for polling
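AgentCard discovery can be sketched as follows (the URL path matches the list above; the required card fields shown here are assumptions, so check the card your deployment actually serves):

```python
# Assumed minimal AgentCard fields; verify against the served card.
REQUIRED_FIELDS = ("name", "description", "url")


def agent_card_url(base_url: str) -> str:
    """Build the well-known AgentCard discovery URL for a deployed agent."""
    return base_url.rstrip("/") + "/.well-known/agent-card"


def parse_agent_card(card: dict) -> dict:
    """Validate that a fetched AgentCard carries the fields a caller needs."""
    missing = [f for f in REQUIRED_FIELDS if f not in card]
    if missing:
        raise ValueError(f"AgentCard missing fields: {missing}")
    return card
```

In practice you would fetch the card with an HTTP GET (e.g. `requests.get(agent_card_url(base_url)).json()`) and pass the resulting dict to `parse_agent_card` before submitting tasks.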

Phase 6: Calling from Claude

Once deployed, Claude can invoke via A2A protocol:

# In Claude Code plugin / external script
import requests

def invoke_adk_agent(message, session_id=None):
    """
    Call deployed ADK agent via A2A protocol.
    """
    response = requests.post(
        "https://gcp-deployer-agent-xyz.run.app/v1/tasks:send",
        json={
            "message": message,
            "session_id": session_id or "claude-session-123",
            "config": {
                "enable_code_execution": True,
                "enable_memory_bank": True,
            }
        },
        headers={"Authorization": f"Bearer {get_token()}"}
    )

    return response.json()

# Use from Claude
result = invoke_adk_agent("Deploy GKE cluster named prod-api")

Workflow Examples

Example 1: GCP Deployment Agent

User: "Create an ADK agent that deploys GCP resources"

Implementation:

from google import adk

deployment_agent = adk.Agent(
    model="gemini-2.5-flash",
    tools=[
        adk.tools.CodeExecution(),
        adk.tools.MemoryBank(),
    ],
    system_instruction="""
You are a GCP deployment specialist.

CAPABILITIES:
- Deploy GKE clusters
- Deploy Cloud Run services
- Deploy Vertex AI Pipelines
- Manage IAM permissions
- Monitor deployments

SECURITY:
- Validate all configurations before deployment
- Use least-privilege IAM
- Log all operations
- Never expose credentials
    """
)

# Deploy to Agent Engine
# $ adk deploy --agent-file deployment_agent.py --service-name gcp-deployer

Example 2: Multi-Agent RAG System

User: "Build a RAG system with ADK orchestrating a LangChain retriever"

Implementation:

from google import adk
from langchain.retrievers import VertexAISearchRetriever

# Sub-Agent 1: LangChain RAG
class RAGAgent(adk.Agent):
    def __init__(self):
        super().__init__(model="gemini-2.5-flash")  # initialize base agent first
        self.retriever = VertexAISearchRetriever(...)

    def retrieve_docs(self, query):
        return self.retriever.get_relevant_documents(query)

# Sub-Agent 2: ADK Answer Generator
answer_agent = adk.Agent(
    model="gemini-2.5-pro",  # More powerful for final answer
    system_instruction="Generate comprehensive answers from retrieved docs"
)

# Orchestrator
orchestrator = adk.SequentialAgent(
    agents=[RAGAgent(), answer_agent],
    system_instruction="First retrieve docs, then generate answer"
)

Example 3: Async Deployment with Status Polling

User: "Deploy a GKE cluster and monitor progress"

Implementation:

# Submit async task
task_response = invoke_adk_agent(
    "Deploy GKE cluster named prod-api with 5 nodes in us-central1"
)

task_id = task_response["task_id"]
print(f"✅ Task submitted: {task_id}")

# Poll for status
import time
while True:
    status = requests.get(
        f"https://gcp-deployer-agent-xyz.run.app/v1/tasks/{task_id}",
        headers={"Authorization": f"Bearer {get_token()}"}
    ).json()

    if status["status"] == "SUCCESS":
        print(f"✅ Cluster deployed!")
        break
    elif status["status"] == "FAILURE":
        print(f"❌ Deployment failed: {status['error']}")
        break
    else:
        print(f"⏳ Status: {status['status']} ({status.get('progress', 0)*100}%)")
        time.sleep(10)
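The loop above polls forever if a task hangs. A bounded variant can be sketched like this (the SUCCESS/FAILURE status values mirror the example above; the status fetcher is injected so it can wrap any HTTP call):

```python
import time


def wait_for_task(fetch_status, poll_interval=10, max_wait=600,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll fetch_status() until a terminal state or until max_wait seconds pass.

    fetch_status: zero-arg callable returning a status dict shaped like the
    /v1/tasks/{task_id} responses above.
    """
    deadline = clock() + max_wait
    while True:
        status = fetch_status()
        if status["status"] in ("SUCCESS", "FAILURE"):
            return status
        if clock() >= deadline:
            raise TimeoutError(f"task still {status['status']} after {max_wait}s")
        sleep(poll_interval)
```

For the deployed agent, `fetch_status` would be a lambda wrapping the `requests.get(...)` call from the example, so timeout policy stays in one place.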

Production Best Practices

  1. Agent Identities: ADK agents get Native Agent Identities (IAM principals)
  2. Least Privilege: Grant minimum required permissions
  3. VPC Service Controls: Enable for enterprise security
  4. Model Armor: Protects against prompt injection
  5. Session Management: Use consistent session_ids for Memory Bank
  6. Error Handling: Implement retries with exponential backoff
  7. Observability: Monitor via Agent Engine UI dashboard
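Point 6 can be sketched as a small retry wrapper around an invocation (illustrative only; tune the retryable exception types, attempt count, and delays for your client):

```python
import random
import time


def with_retries(call, max_attempts=4, base_delay=1.0, max_delay=30.0,
                 retryable=(ConnectionError, TimeoutError), sleep=time.sleep):
    """Run call(); on a retryable error, back off exponentially with jitter."""
    for attempt in range(1, max_attempts + 1):
        try:
            return call()
        except retryable:
            if attempt == max_attempts:
                raise
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            sleep(delay * random.uniform(0.5, 1.5))  # jitter avoids thundering herd


# Example: retry the A2A invocation from Phase 6
# result = with_retries(lambda: invoke_adk_agent("Deploy GKE cluster named prod-api"))
```

Jittered exponential backoff keeps many concurrent callers from retrying in lockstep against the same Agent Engine endpoint.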

Tool Permissions

  • Read: Analyze existing agent code
  • Write: Create new agent files
  • Edit: Modify agent configurations
  • Grep: Find integration points
  • Glob: Locate related files
  • Bash: Install ADK, deploy agents, run tests

Integration Patterns

ADK + Genkit

# Use Genkit for flows, ADK for orchestration
genkit_flow_agent = create_genkit_flow()
orchestrator = adk.SequentialAgent(
    agents=[validator, genkit_flow_agent, monitor]
)

ADK + LangChain

# LangChain for RAG, ADK for multi-agent coordination
langchain_rag = create_langchain_retriever()
orchestrator = adk.ParallelAgent(
    agents=[langchain_rag, fact_checker, answer_generator]
)

Deployment Commands

# Install ADK
pip install google-adk  # Python
go get google.golang.org/adk  # Go

# Deploy to Agent Engine
adk deploy \
  --agent-file my_agent.py \
  --project-id my-project \
  --region us-central1 \
  --service-name my-agent

# Deploy to Cloud Run (custom)
gcloud run deploy my-agent \
  --source . \
  --region us-central1

# Deploy locally for testing
adk run --agent-file my_agent.py

Version History

  • 1.0.0 (2025): ADK Preview with Python/Java/Go support, Agent Engine GA, Code Execution Sandbox, Memory Bank

References

Quick Install

/plugin add https://github.com/jeremylongshore/claude-code-plugins-plus/tree/main/adk-deployment-specialist

Copy and paste this command in Claude Code to install this skill

GitHub Repository

jeremylongshore/claude-code-plugins-plus
Path: plugins/ai-ml/jeremy-adk-orchestrator/skills/adk-deployment-specialist
Tags: ai, automation, claude-code, devops, marketplace, mcp
