Back to Skills

google-gemini-embeddings

jezweb
Updated Today
25 views
33
4
33
View on GitHub
Metawordaitestingapidesign

About

This skill provides complete coverage of the Google Gemini embeddings API for building RAG systems, semantic search, and similarity matching. It includes SDK usage, batch processing, 8 task types, and Cloudflare Vectorize integration while preventing common embedding errors. Use it when implementing vector search or retrieval-augmented generation with Google's embedding models.

Quick Install

Claude Code

Recommended
Plugin CommandRecommended
/plugin add https://github.com/jezweb/claude-skills
Git CloneAlternative
git clone https://github.com/jezweb/claude-skills.git ~/.claude/skills/google-gemini-embeddings

Copy and paste this command in Claude Code to install this skill

Documentation

Google Gemini Embeddings

Complete production-ready guide for Google Gemini embeddings API

This skill provides comprehensive coverage of the gemini-embedding-001 model for generating text embeddings, including SDK usage, REST API patterns, batch processing, RAG integration with Cloudflare Vectorize, and advanced use cases like semantic search and document clustering.


Table of Contents

  1. Quick Start
  2. gemini-embedding-001 Model
  3. Basic Embeddings
  4. Batch Embeddings
  5. Task Types
  6. RAG Patterns
  7. Semantic Search
  8. Document Clustering
  9. Error Handling
  10. Best Practices

1. Quick Start

Installation

Install the Google Generative AI SDK:

npm install @google/genai@^1.27.0

For TypeScript projects:

npm install -D typescript@^5.0.0

Environment Setup

Set your Gemini API key as an environment variable:

export GEMINI_API_KEY="your-api-key-here"

Get your API key from: https://aistudio.google.com/apikey

First Embedding Example

import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: 'What is the meaning of life?',
  config: {
    taskType: 'RETRIEVAL_QUERY',
    outputDimensionality: 768
  }
});

console.log(response.embedding.values); // [0.012, -0.034, ...]
console.log(response.embedding.values.length); // 768

Result: A 768-dimension embedding vector representing the semantic meaning of the text.


2. gemini-embedding-001 Model

Model Specifications

Current Model: gemini-embedding-001 (stable, production-ready)

  • Status: Stable
  • Experimental: gemini-embedding-exp-03-07 (deprecated October 2025, do not use)

Dimensions

The model supports flexible output dimensionality using Matryoshka Representation Learning:

DimensionUse CaseStoragePerformance
768Recommended for most use casesLowFast
1536Balance between accuracy and efficiencyMediumMedium
3072Maximum accuracy (default)HighSlower
128-3071Custom (any value in range)VariableVariable

Default: 3072 dimensions Recommended: 768, 1536, or 3072 for optimal performance

Context Window

  • Input Limit: 2,048 tokens per text
  • Input Type: Text only (no images, audio, or video)

Rate Limits

TierRPMTPMRPDRequirements
Free10030,0001,000No billing account
Tier 13,0001,000,000-Billing account linked
Tier 25,0005,000,000-$250+ spending, 30-day wait
Tier 310,00010,000,000-$1,000+ spending, 30-day wait

RPM = Requests Per Minute TPM = Tokens Per Minute RPD = Requests Per Day

Output Format

{
  embedding: {
    values: number[] // Array of floating-point numbers
  }
}

3. Basic Embeddings

SDK Approach (Node.js)

Single text embedding:

import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: 'The quick brown fox jumps over the lazy dog',
  config: {
    taskType: 'SEMANTIC_SIMILARITY',
    outputDimensionality: 768
  }
});

console.log(response.embedding.values);
// [0.00388, -0.00762, 0.01543, ...]

Fetch Approach (Cloudflare Workers)

For Workers/edge environments without SDK support:

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const apiKey = env.GEMINI_API_KEY;
    const text = "What is the meaning of life?";

    const response = await fetch(
      'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:embedContent',
      {
        method: 'POST',
        headers: {
          'x-goog-api-key': apiKey,
          'Content-Type': 'application/json'
        },
        body: JSON.stringify({
          content: {
            parts: [{ text }]
          },
          taskType: 'RETRIEVAL_QUERY',
          outputDimensionality: 768
        })
      }
    );

    const data = await response.json();

    // Response format:
    // {
    //   embedding: {
    //     values: [0.012, -0.034, ...]
    //   }
    // }

    return new Response(JSON.stringify(data), {
      headers: { 'Content-Type': 'application/json' }
    });
  }
};

Response Parsing

interface EmbeddingResponse {
  embedding: {
    values: number[];
  };
}

const response: EmbeddingResponse = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: 'Sample text',
  config: { taskType: 'SEMANTIC_SIMILARITY' }
});

const embedding: number[] = response.embedding.values;
const dimensions: number = embedding.length; // 3072 by default

4. Batch Embeddings

Multiple Texts in One Request (SDK)

Generate embeddings for multiple texts simultaneously:

import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

const texts = [
  "What is the meaning of life?",
  "How does photosynthesis work?",
  "Tell me about the history of the internet."
];

const response = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: texts, // Array of strings
  config: {
    taskType: 'RETRIEVAL_DOCUMENT',
    outputDimensionality: 768
  }
});

// Process each embedding
response.embeddings.forEach((embedding, index) => {
  console.log(`Text ${index}: ${texts[index]}`);
  console.log(`Embedding: ${embedding.values.slice(0, 5)}...`);
  console.log(`Dimensions: ${embedding.values.length}`);
});

Batch REST API (fetch)

Use the batchEmbedContents endpoint:

const response = await fetch(
  'https://generativelanguage.googleapis.com/v1beta/models/gemini-embedding-001:batchEmbedContents',
  {
    method: 'POST',
    headers: {
      'x-goog-api-key': apiKey,
      'Content-Type': 'application/json'
    },
    body: JSON.stringify({
      requests: texts.map(text => ({
        model: 'models/gemini-embedding-001',
        content: {
          parts: [{ text }]
        },
        taskType: 'RETRIEVAL_DOCUMENT'
      }))
    })
  }
);

const data = await response.json();
// data.embeddings: Array of {values: number[]}

Chunking for Rate Limits

When processing large datasets, chunk requests to stay within rate limits:

async function batchEmbedWithRateLimit(
  texts: string[],
  batchSize: number = 100, // Free tier: 100 RPM
  delayMs: number = 60000 // 1 minute delay between batches
): Promise<number[][]> {
  const allEmbeddings: number[][] = [];

  for (let i = 0; i < texts.length; i += batchSize) {
    const batch = texts.slice(i, i + batchSize);

    console.log(`Processing batch ${i / batchSize + 1} (${batch.length} texts)`);

    const response = await ai.models.embedContent({
      model: 'gemini-embedding-001',
      contents: batch,
      config: {
        taskType: 'RETRIEVAL_DOCUMENT',
        outputDimensionality: 768
      }
    });

    allEmbeddings.push(...response.embeddings.map(e => e.values));

    // Wait before next batch (except last batch)
    if (i + batchSize < texts.length) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }

  return allEmbeddings;
}

// Usage
const embeddings = await batchEmbedWithRateLimit(documents, 100);

Performance Optimization

Tips:

  1. Use batch API when embedding multiple texts (single request vs multiple requests)
  2. Choose lower dimensions (768) for faster processing and less storage
  3. Implement exponential backoff for rate limit errors
  4. Cache embeddings to avoid redundant API calls

5. Task Types

The taskType parameter optimizes embeddings for specific use cases. Always specify a task type for best results.

Available Task Types (8 total)

Task TypeUse CaseExample
RETRIEVAL_QUERYUser search queries"How do I fix a flat tire?"
RETRIEVAL_DOCUMENTDocuments to be indexed/searchedProduct descriptions, articles
SEMANTIC_SIMILARITYComparing text similarityDuplicate detection, clustering
CLASSIFICATIONCategorizing textsSpam detection, sentiment analysis
CLUSTERINGGrouping similar textsTopic modeling, content organization
CODE_RETRIEVAL_QUERYCode search queries"function to sort array"
QUESTION_ANSWERINGQuestions seeking answersFAQ matching
FACT_VERIFICATIONVerifying claims with evidenceFact-checking systems

When to Use Which

RAG Systems (Retrieval Augmented Generation):

// When embedding user queries
const queryEmbedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: userQuery,
  config: { taskType: 'RETRIEVAL_QUERY' } // ← Use RETRIEVAL_QUERY
});

// When embedding documents for indexing
const docEmbedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: documentText,
  config: { taskType: 'RETRIEVAL_DOCUMENT' } // ← Use RETRIEVAL_DOCUMENT
});

Semantic Search:

const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text,
  config: { taskType: 'SEMANTIC_SIMILARITY' }
});

Document Clustering:

const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text,
  config: { taskType: 'CLUSTERING' }
});

Impact on Quality

Using the correct task type significantly improves retrieval quality:

// ❌ BAD: No task type specified
const embedding1 = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: userQuery
});

// ✅ GOOD: Task type specified
const embedding2 = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: userQuery,
  config: { taskType: 'RETRIEVAL_QUERY' }
});

Result: Using the right task type can improve search relevance by 10-30%.


6. RAG Patterns

RAG (Retrieval Augmented Generation) combines vector search with LLM generation to create AI systems that answer questions using custom knowledge bases.

Complete RAG Workflow

1. Document Ingestion
   ├── Chunk documents into smaller pieces
   ├── Generate embeddings (RETRIEVAL_DOCUMENT)
   └── Store in Vectorize

2. Query Processing
   ├── User submits query
   ├── Generate query embedding (RETRIEVAL_QUERY)
   └── Search Vectorize for similar documents

3. Response Generation
   ├── Retrieve top-k similar documents
   ├── Pass documents as context to LLM
   └── Stream response to user

Document Ingestion Pipeline

import { GoogleGenAI } from "@google/genai";

const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

// 1. Chunk document into smaller pieces
function chunkDocument(text: string, chunkSize: number = 500): string[] {
  const words = text.split(' ');
  const chunks: string[] = [];

  for (let i = 0; i < words.length; i += chunkSize) {
    chunks.push(words.slice(i, i + chunkSize).join(' '));
  }

  return chunks;
}

// 2. Generate embeddings for chunks
async function embedChunks(chunks: string[]): Promise<number[][]> {
  const response = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    contents: chunks,
    config: {
      taskType: 'RETRIEVAL_DOCUMENT', // ← Documents for indexing
      outputDimensionality: 768 // ← Match Vectorize index dimensions
    }
  });

  return response.embeddings.map(e => e.values);
}

// 3. Store in Cloudflare Vectorize
async function storeInVectorize(
  env: Env,
  chunks: string[],
  embeddings: number[][]
) {
  const vectors = chunks.map((chunk, i) => ({
    id: `doc-${Date.now()}-${i}`,
    values: embeddings[i],
    metadata: { text: chunk }
  }));

  await env.VECTORIZE.insert(vectors);
}

// Complete pipeline
async function ingestDocument(env: Env, documentText: string) {
  const chunks = chunkDocument(documentText, 500);
  const embeddings = await embedChunks(chunks);
  await storeInVectorize(env, chunks, embeddings);

  console.log(`Ingested ${chunks.length} chunks`);
}

Query Flow (Retrieve + Generate)

async function ragQuery(env: Env, userQuery: string): Promise<string> {
  // 1. Embed user query
  const queryResponse = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    content: userQuery,
    config: {
      taskType: 'RETRIEVAL_QUERY', // ← Query, not document
      outputDimensionality: 768
    }
  });

  const queryEmbedding = queryResponse.embedding.values;

  // 2. Search Vectorize for similar documents
  const results = await env.VECTORIZE.query(queryEmbedding, {
    topK: 5,
    returnMetadata: true
  });

  // 3. Extract context from top results
  const context = results.matches
    .map(match => match.metadata.text)
    .join('\n\n');

  // 4. Generate response with context
  const response = await ai.models.generateContent({
    model: 'gemini-2.5-flash',
    contents: `Context:\n${context}\n\nQuestion: ${userQuery}\n\nAnswer based on the context above:`
  });

  return response.text;
}

Integration with Cloudflare Vectorize

Create Vectorize Index (768 dimensions for Gemini):

npx wrangler vectorize create gemini-embeddings --dimensions 768 --metric cosine

Bind in wrangler.jsonc:

{
  "name": "my-rag-app",
  "main": "src/index.ts",
  "compatibility_date": "2025-10-25",
  "vectorize": {
    "bindings": [
      {
        "binding": "VECTORIZE",
        "index_name": "gemini-embeddings"
      }
    ]
  }
}

Complete RAG Worker:

See templates/rag-with-vectorize.ts for full implementation.


7. Semantic Search

Semantic search finds content based on meaning, not just keyword matching.

Cosine Similarity

Cosine similarity measures how similar two embeddings are (range: -1 to 1, where 1 = identical):

function cosineSimilarity(a: number[], b: number[]): number {
  if (a.length !== b.length) {
    throw new Error('Vectors must have same length');
  }

  let dotProduct = 0;
  let magnitudeA = 0;
  let magnitudeB = 0;

  for (let i = 0; i < a.length; i++) {
    dotProduct += a[i] * b[i];
    magnitudeA += a[i] * a[i];
    magnitudeB += b[i] * b[i];
  }

  if (magnitudeA === 0 || magnitudeB === 0) {
    return 0;
  }

  return dotProduct / (Math.sqrt(magnitudeA) * Math.sqrt(magnitudeB));
}

Vector Normalization

Normalize vectors to unit length for faster similarity calculations:

function normalizeVector(vector: number[]): number[] {
  const magnitude = Math.sqrt(vector.reduce((sum, val) => sum + val * val, 0));

  if (magnitude === 0) {
    return vector;
  }

  return vector.map(val => val / magnitude);
}

// Normalized vectors allow dot product instead of cosine similarity
function dotProduct(a: number[], b: number[]): number {
  return a.reduce((sum, val, i) => sum + val * b[i], 0);
}

Top-K Search

Find the most similar documents to a query:

interface Document {
  id: string;
  text: string;
  embedding: number[];
}

function topKSimilar(
  queryEmbedding: number[],
  documents: Document[],
  k: number = 5
): Array<{ document: Document; similarity: number }> {
  // Calculate similarity for each document
  const similarities = documents.map(doc => ({
    document: doc,
    similarity: cosineSimilarity(queryEmbedding, doc.embedding)
  }));

  // Sort by similarity (descending) and return top K
  return similarities
    .sort((a, b) => b.similarity - a.similarity)
    .slice(0, k);
}

// Usage
const results = topKSimilar(queryEmbedding, documents, 5);

results.forEach(result => {
  console.log(`Similarity: ${result.similarity.toFixed(4)}`);
  console.log(`Text: ${result.document.text}\n`);
});

Complete Semantic Search Example

See templates/semantic-search.ts for full implementation with Gemini API.


8. Document Clustering

Clustering groups similar documents together automatically.

K-Means Clustering

interface Cluster {
  centroid: number[];
  documents: number[][];
}

function kMeansClustering(
  embeddings: number[][],
  k: number = 3,
  maxIterations: number = 100
): Cluster[] {
  // 1. Initialize centroids randomly
  const centroids: number[][] = [];
  for (let i = 0; i < k; i++) {
    centroids.push(embeddings[Math.floor(Math.random() * embeddings.length)]);
  }

  // 2. Iterate until convergence
  for (let iter = 0; iter < maxIterations; iter++) {
    // Assign each embedding to nearest centroid
    const clusters: number[][][] = Array(k).fill(null).map(() => []);

    embeddings.forEach(embedding => {
      let minDistance = Infinity;
      let closestCluster = 0;

      centroids.forEach((centroid, i) => {
        const distance = 1 - cosineSimilarity(embedding, centroid);
        if (distance < minDistance) {
          minDistance = distance;
          closestCluster = i;
        }
      });

      clusters[closestCluster].push(embedding);
    });

    // Update centroids
    let changed = false;
    clusters.forEach((cluster, i) => {
      if (cluster.length > 0) {
        const newCentroid = cluster[0].map((_, dim) =>
          cluster.reduce((sum, emb) => sum + emb[dim], 0) / cluster.length
        );

        if (cosineSimilarity(centroids[i], newCentroid) < 0.9999) {
          changed = true;
        }

        centroids[i] = newCentroid;
      }
    });

    if (!changed) break;
  }

  // Build final clusters
  const finalClusters: Cluster[] = centroids.map((centroid, i) => ({
    centroid,
    documents: embeddings.filter(emb =>
      cosineSimilarity(emb, centroid) === Math.max(
        ...centroids.map(c => cosineSimilarity(emb, c))
      )
    )
  }));

  return finalClusters;
}

See templates/clustering.ts for complete implementation with examples.

Use Cases

  1. Content Organization: Automatically group similar articles, products, or documents
  2. Duplicate Detection: Find near-duplicate content
  3. Topic Modeling: Discover themes in large text corpora
  4. Customer Feedback Analysis: Group similar customer complaints or feature requests

9. Error Handling

Common Errors

1. API Key Missing or Invalid

// ❌ Error: API key not set
const ai = new GoogleGenAI({});

// ✅ Correct
const ai = new GoogleGenAI({ apiKey: process.env.GEMINI_API_KEY });

if (!process.env.GEMINI_API_KEY) {
  throw new Error('GEMINI_API_KEY environment variable not set');
}

2. Dimension Mismatch

// ❌ Error: Embedding has 3072 dims, Vectorize expects 768
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text
  // No outputDimensionality specified → defaults to 3072
});

await env.VECTORIZE.insert([{
  id: '1',
  values: embedding.embedding.values // 3072 dims, but index is 768!
}]);

// ✅ Correct: Match dimensions
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text,
  config: { outputDimensionality: 768 } // ← Match index dimensions
});

3. Rate Limiting

// ❌ Error: 429 Too Many Requests
for (let i = 0; i < 1000; i++) {
  await ai.models.embedContent({ /* ... */ }); // Exceeds 100 RPM on free tier
}

// ✅ Correct: Implement rate limiting
async function embedWithRetry(text: string, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await ai.models.embedContent({
        model: 'gemini-embedding-001',
        content: text,
        config: { taskType: 'SEMANTIC_SIMILARITY' }
      });
    } catch (error: any) {
      if (error.status === 429 && attempt < maxRetries - 1) {
        const delay = Math.pow(2, attempt) * 1000; // Exponential backoff
        await new Promise(resolve => setTimeout(resolve, delay));
        continue;
      }
      throw error;
    }
  }
}

See references/top-errors.md for all 8 documented errors with detailed solutions.


10. Best Practices

Always Do

Specify Task Type

// Task type optimizes embeddings for your use case
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text,
  config: { taskType: 'RETRIEVAL_QUERY' } // ← Always specify
});

Match Dimensions with Vectorize

// Ensure embeddings match your Vectorize index dimensions
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text,
  config: { outputDimensionality: 768 } // ← Match index
});

Implement Rate Limiting

// Use exponential backoff for 429 errors
async function embedWithBackoff(text: string) {
  // Implementation from Error Handling section
}

Cache Embeddings

// Cache embeddings to avoid redundant API calls
const cache = new Map<string, number[]>();

async function getCachedEmbedding(text: string): Promise<number[]> {
  if (cache.has(text)) {
    return cache.get(text)!;
  }

  const response = await ai.models.embedContent({
    model: 'gemini-embedding-001',
    content: text,
    config: { taskType: 'SEMANTIC_SIMILARITY' }
  });

  const embedding = response.embedding.values;
  cache.set(text, embedding);
  return embedding;
}

Use Batch API for Multiple Texts

// Single batch request vs multiple individual requests
const embeddings = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  contents: texts, // Array of texts
  config: { taskType: 'RETRIEVAL_DOCUMENT' }
});

Never Do

Don't Skip Task Type

// Reduces quality by 10-30%
const embedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text
  // Missing taskType!
});

Don't Mix Different Dimensions

// Can't compare embeddings with different dimensions
const emb1 = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text1,
  config: { outputDimensionality: 768 }
});

const emb2 = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: text2,
  config: { outputDimensionality: 1536 } // Different dimensions!
});

// ❌ Can't calculate similarity between different dimensions
const similarity = cosineSimilarity(emb1.embedding.values, emb2.embedding.values);

Don't Use Wrong Task Type for RAG

// Reduces search quality
const queryEmbedding = await ai.models.embedContent({
  model: 'gemini-embedding-001',
  content: query,
  config: { taskType: 'RETRIEVAL_DOCUMENT' } // Wrong! Should be RETRIEVAL_QUERY
});

Using Bundled Resources

Templates (templates/)

  • package.json - Package configuration with verified versions
  • basic-embeddings.ts - Single text embedding with SDK
  • embeddings-fetch.ts - Fetch-based for Cloudflare Workers
  • batch-embeddings.ts - Batch processing with rate limiting
  • rag-with-vectorize.ts - Complete RAG implementation with Vectorize
  • semantic-search.ts - Cosine similarity and top-K search
  • clustering.ts - K-means clustering implementation

References (references/)

  • model-comparison.md - Compare Gemini vs OpenAI vs Workers AI embeddings
  • vectorize-integration.md - Cloudflare Vectorize setup and patterns
  • rag-patterns.md - Complete RAG implementation strategies
  • dimension-guide.md - Choosing the right dimensions (768 vs 1536 vs 3072)
  • top-errors.md - 8 common errors and detailed solutions

Scripts (scripts/)

  • check-versions.sh - Verify @google/genai package version is current

Official Documentation


Related Skills

  • google-gemini-api - Main Gemini API for text/image generation
  • cloudflare-vectorize - Vector database for storing embeddings
  • cloudflare-workers-ai - Workers AI embeddings (BGE models)

Success Metrics

Token Savings: ~60% compared to manual implementation Errors Prevented: 8 documented errors with solutions Production Tested: ✅ Verified in RAG applications Package Version: @google/[email protected] Last Updated: 2025-10-25


License

MIT License - Free to use in personal and commercial projects.


Questions or Issues?

GitHub Repository

jezweb/claude-skills
Path: skills/google-gemini-embeddings
aiautomationclaude-codeclaude-skillscloudflarereact

Related Skills

sglang

Meta

SGLang is a high-performance LLM serving framework that specializes in fast, structured generation for JSON, regex, and agentic workflows using its RadixAttention prefix caching. It delivers significantly faster inference, especially for tasks with repeated prefixes, making it ideal for complex, structured outputs and multi-turn conversations. Choose SGLang over alternatives like vLLM when you need constrained decoding or are building applications with extensive prefix sharing.

View skill

evaluating-llms-harness

Testing

This Claude Skill runs the lm-evaluation-harness to benchmark LLMs across 60+ standardized academic tasks like MMLU and GSM8K. It's designed for developers to compare model quality, track training progress, or report academic results. The tool supports various backends including HuggingFace and vLLM models.

View skill

llamaguard

Other

LlamaGuard is Meta's 7-8B parameter model for moderating LLM inputs and outputs across six safety categories like violence and hate speech. It offers 94-95% accuracy and can be deployed using vLLM, Hugging Face, or Amazon SageMaker. Use this skill to easily integrate content filtering and safety guardrails into your AI applications.

View skill

langchain

Meta

LangChain is a framework for building LLM applications using agents, chains, and RAG pipelines. It supports multiple LLM providers, offers 500+ integrations, and includes features like tool calling and memory management. Use it for rapid prototyping and deploying production systems like chatbots, autonomous agents, and question-answering services.

View skill