oracle-dev
About
The oracle-dev skill helps developers build AI-enhanced oracles for Ëtrid, enabling data attestation with machine learning capabilities. It provides scaffolding for oracle pallets, integrates anomaly detection ML models, and builds API adapters for off-chain data feeds. Use this skill when creating Rust/Python oracles that require intelligent data validation and external data source integration.
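As a rough illustration of the validation layer this skill scaffolds, here is a hypothetical Python sketch: an off-chain feed is fetched, screened by an anomaly-detection model, and only attested if it looks consistent with recent history. The feed URL, JSON field name, and training data below are placeholders for illustration, not part of the skill itself.

```python
# Hypothetical sketch of an ML-validated oracle data feed.
# FEED_URL and the "price" field are made-up placeholders.
import requests
import numpy as np
from sklearn.ensemble import IsolationForest

FEED_URL = "https://example.com/api/price"  # placeholder data source

def fetch_price() -> float:
    resp = requests.get(FEED_URL, timeout=5)
    resp.raise_for_status()
    return float(resp.json()["price"])

# Train the detector on recent history (synthetic data for illustration).
history = np.random.normal(loc=100.0, scale=2.0, size=(500, 1))
detector = IsolationForest(contamination=0.01).fit(history)

def validate(price: float) -> bool:
    """Return True if the new observation is consistent with history."""
    return detector.predict([[price]])[0] == 1  # 1 = inlier, -1 = anomaly

price = fetch_price()
if validate(price):
    print(f"attest {price}")   # in a real oracle: submit on-chain
else:
    print(f"reject {price}: flagged as anomalous")
```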
Quick Install
Claude Code
Recommended:
/plugin add https://github.com/EojEdred/Etrid

Or clone manually:
git clone https://github.com/EojEdred/Etrid.git ~/.claude/skills/oracle-dev

Copy and paste either command in Claude Code to install this skill.
Documentation
GitHub Repository
Related Skills
when-debugging-ml-training-use-ml-training-debugger
Other
This skill helps developers diagnose and fix common machine learning training issues like loss divergence, overfitting, and slow convergence. It provides systematic debugging to identify root causes and delivers fixed code with optimization recommendations. Use it when facing NaN losses, poor validation performance, or training that fails to converge.
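For context, a minimal sketch (not this skill's actual code) of the kind of first-line check such a debugger applies: fail fast on a non-finite loss and clip gradients before each update. The toy model and data are placeholders.

```python
# Minimal sketch: detect a diverging/NaN loss early and clip gradients,
# two common first-line fixes for unstable training.
import torch
from torch import nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for step in range(100):
    inputs, targets = torch.randn(8, 10), torch.randn(8, 1)
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    if not torch.isfinite(loss):
        raise RuntimeError(f"non-finite loss at step {step}: lower the "
                           "learning rate or check the input data for NaNs")
    loss.backward()
    # Clip exploding gradients before the parameter update.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    optimizer.step()
```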
when-developing-ml-models-use-ml-expert
Other
This Claude Skill provides a specialized workflow for developing, training, and deploying machine learning models. It supports various architectures like CNNs and RNNs, handles the full lifecycle from training to production, and outputs trained models, metrics, and deployment packages. Use it when you need to build a new ML model, require model training, or are preparing for a production deployment.
subagent-driven-development
Development
This skill executes implementation plans by dispatching a fresh subagent for each independent task, with code review between tasks. It enables fast iteration while maintaining quality gates through this review process. Use it when working on mostly independent tasks within the same session to ensure continuous progress with built-in quality checks.
huggingface-accelerate
Development
HuggingFace Accelerate provides the simplest API for adding distributed training to PyTorch scripts with just 4 lines of code. It offers a unified interface for multiple distributed training frameworks like DeepSpeed, FSDP, and DDP while handling automatic device placement and mixed precision. This makes it ideal for developers who want to quickly scale their PyTorch training across multiple GPUs or nodes without complex configuration.
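The "4 lines" refer to Accelerate's standard integration pattern: import the library, construct an Accelerator, prepare the training objects, and route the backward pass through it. A minimal sketch with a placeholder model and synthetic data:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator               # line 1: import

accelerator = Accelerator()                      # line 2: create the accelerator

# Placeholder model, optimizer, and data for illustration.
model = nn.Linear(10, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 10), torch.randn(64, 1))
dataloader = DataLoader(dataset, batch_size=8)

# line 3: Accelerate handles device placement and distributed wrapping.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

loss_fn = nn.MSELoss()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    accelerator.backward(loss)                   # line 4: replaces loss.backward()
    optimizer.step()
```

The same script then runs unchanged on a single GPU, multiple GPUs, or multiple nodes, with the launch configuration supplied externally (e.g. via `accelerate launch`).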
