oracle-dev
About
The oracle-dev skill helps developers build AI-enhanced oracles for Ëtrid, enabling data authentication with machine learning capabilities. It assists with scaffolding oracle pallets, integrating anomaly-detection ML models, and building API adapters for off-chain data feeds. Use it when creating Rust/Python oracles that need intelligent data validation and integration with external data sources.
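As a rough illustration of the kind of off-chain adapter described above, the sketch below fetches a price feed and applies a simple z-score anomaly check before the value would be accepted. The endpoint URL, field names, and threshold are hypothetical placeholders, not output of the actual skill.

```python
# Hypothetical off-chain API adapter with a basic anomaly check.
# FEED_URL, the "price" field, and the threshold are illustrative assumptions.
import statistics
import requests

FEED_URL = "https://api.example.com/price/etrid-usd"  # placeholder endpoint


def fetch_price() -> float:
    """Pull the latest price from an external REST feed."""
    resp = requests.get(FEED_URL, timeout=5)
    resp.raise_for_status()
    return float(resp.json()["price"])


def is_anomalous(value: float, history: list[float], z_threshold: float = 3.0) -> bool:
    """Flag values that deviate strongly from recent history (simple z-score)."""
    if len(history) < 10:
        return False  # not enough data to judge
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return value != mean
    return abs(value - mean) / stdev > z_threshold


if __name__ == "__main__":
    history: list[float] = []
    price = fetch_price()
    if is_anomalous(price, history):
        print(f"Rejected suspicious feed value: {price}")
    else:
        history.append(price)
        print(f"Accepted feed value: {price}")
```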
Quick Install
Claude Code
Recommended:
/plugin add https://github.com/EojEdred/Etrid

Or install manually:
git clone https://github.com/EojEdred/Etrid.git ~/.claude/skills/oracle-dev

Copy and paste the command into Claude Code to install the skill.
Documentation
GitHub Repository
Related Skills
when-debugging-ml-training-use-ml-training-debugger
Other
This skill helps developers diagnose and fix common machine learning training issues such as loss divergence, overfitting, and slow convergence. It provides systematic debugging to identify root causes and delivers fixed code with optimization recommendations. Use it when facing training problems like NaN losses, poor validation performance, or training that fails to converge.
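For context on the failure modes mentioned above, here is a minimal, hypothetical PyTorch snippet (not taken from the skill itself) that guards against non-finite losses and clips exploding gradients; the model, optimizer, loss function, and batch are assumed to exist already.

```python
# Hypothetical guard against NaN/Inf losses and exploding gradients
# inside a single PyTorch training step.
import torch


def training_step(model, optimizer, loss_fn, batch, max_grad_norm=1.0):
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    if torch.isnan(loss) or torch.isinf(loss):
        # Skip the update instead of propagating a divergent loss.
        print("Warning: non-finite loss detected, skipping this batch")
        return None
    loss.backward()
    # Clip gradients to curb divergence from exploding gradients.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_grad_norm)
    optimizer.step()
    return loss.item()
```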
when-developing-ml-models-use-ml-expert
Other
This Claude Skill provides a specialized workflow for developing, training, and deploying machine learning models. It supports various architectures like CNNs and RNNs, handles the full lifecycle from training to production, and outputs trained models, metrics, and deployment packages. Use it when you need to build a new ML model, require model training, or are preparing for a production deployment.
subagent-driven-development
Development
This skill executes implementation plans by dispatching a fresh subagent for each independent task, with code review between tasks. It enables fast iteration while maintaining quality gates through this review process. Use it when working on mostly independent tasks within the same session to ensure continuous progress with built-in quality checks.
huggingface-accelerate
Development
HuggingFace Accelerate provides the simplest API for adding distributed training to PyTorch scripts with just 4 lines of code. It offers a unified interface for multiple distributed training frameworks like DeepSpeed, FSDP, and DDP while handling automatic device placement and mixed precision. This makes it ideal for developers who want to quickly scale their PyTorch training across multiple GPUs or nodes without complex configuration.
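A minimal sketch of what those few lines look like in a plain PyTorch loop; the tiny linear model and random data below are placeholders for illustration only.

```python
# Wrapping a plain PyTorch training loop with HuggingFace Accelerate.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()                      # 1. create the Accelerator
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
dataset = TensorDataset(torch.randn(64, 8), torch.randn(64, 1))
dataloader = DataLoader(dataset, batch_size=16)

# 2. let Accelerate handle device placement and distributed wrapping
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

loss_fn = torch.nn.MSELoss()
for inputs, targets in dataloader:               # no manual .to(device) needed
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    accelerator.backward(loss)                   # 3. replaces loss.backward()
    optimizer.step()
```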
