# ai-collaboration-standards

## About
This skill ensures evidence-based AI responses by requiring explicit certainty tags and source citations when analyzing code or making recommendations. It prevents hallucinations by distinguishing confirmed facts from inferences and assumptions. Use it during code analysis, suggestion generation, or whenever confidence level needs clarification.
## Documentation

### AI Collaboration Standards

This skill ensures AI assistants provide accurate, evidence-based responses without hallucination.

### Quick Reference

#### Certainty Tags
| Tag | Use When |
|---|---|
| [Confirmed] / [已確認] | Direct evidence from code/docs |
| [Inferred] / [推論] | Logical deduction from evidence |
| [Assumption] / [假設] | Based on common patterns (needs verification) |
| [Unknown] / [未知] | Information not available |
| [Need Confirmation] / [待確認] | Requires user clarification |
#### Source Types
| Source Type | Tag | Reliability |
|---|---|---|
| Project Code | [Source: Code] | ⭐⭐⭐⭐⭐ Highest |
| Project Docs | [Source: Docs] | ⭐⭐⭐⭐ High |
| External Docs | [Source: External] | ⭐⭐⭐⭐ High |
| Web Search | [Source: Search] | ⭐⭐⭐ Medium |
| AI Knowledge | [Source: Knowledge] | ⭐⭐ Low |
| User Provided | [Source: User] | ⭐⭐⭐ Medium |
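
For teams that want to reference these tags programmatically (for example in response templates or a lint step), the two tables can be encoded directly. The sketch below is illustrative only and assumes the exact tag strings shown above; the `Certainty` enum and `SOURCE_RELIABILITY` mapping are hypothetical names, not part of the skill.

```python
from enum import Enum

class Certainty(Enum):
    """Certainty tags from the table above (Chinese equivalents in comments)."""
    CONFIRMED = "[Confirmed]"                  # [已確認] direct evidence from code/docs
    INFERRED = "[Inferred]"                    # [推論] logical deduction from evidence
    ASSUMPTION = "[Assumption]"                # [假設] common pattern, needs verification
    UNKNOWN = "[Unknown]"                      # [未知] information not available
    NEED_CONFIRMATION = "[Need Confirmation]"  # [待確認] requires user clarification

# Source tags ranked by reliability (5 = highest), mirroring the second table.
SOURCE_RELIABILITY = {
    "[Source: Code]": 5,
    "[Source: Docs]": 4,
    "[Source: External]": 4,
    "[Source: Search]": 3,
    "[Source: User]": 3,
    "[Source: Knowledge]": 2,
}
```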
### Core Rules
- Evidence-Based Only: Only analyze content that has been explicitly read
- Cite Sources: Include file path and line number for code references
- Classify Certainty: Tag all statements with certainty level
- Always Recommend: When presenting options, include a recommended choice with reasoning
### Detailed Guidelines

For the complete standards, see the full documentation in the skill repository (linked under Quick Install below).
### Examples

#### ✅ Correct Response
[Confirmed] src/auth/service.ts:45 - JWT validation uses 'jsonwebtoken' library
[Inferred] Based on repository pattern in src/repositories/, likely using dependency injection
[Need Confirmation] Should the new feature support multi-tenancy?
#### ❌ Incorrect Response
The system uses Redis for caching (code not reviewed)
The UserService should have an authenticate() method (API not verified)
#### ✅ Correct Option Presentation
There are three options:
1. Redis caching
2. In-memory caching
3. File-based caching
**Recommended: Option 1 (Redis)**: Given the project already has Redis infrastructure
and needs cross-instance cache sharing, Redis is the most suitable choice.
#### ❌ Incorrect Option Presentation
There are three options:
1. Redis caching
2. In-memory caching
3. File-based caching
Please choose one.
### Checklist
Before making any statement:
- Source Verified - Have I read the actual file/document?
- Source Type Tagged - Did I specify [Source: Code], [Source: External], etc.?
- Reference Cited - Did I include file path and line number?
- Certainty Classified - Did I tag as [Confirmed], [Inferred], etc.?
- No Fabrication - Did I avoid inventing APIs, configs, or requirements?
- Recommendation Included - When presenting options, did I include a recommended choice?
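
Parts of this checklist can be checked mechanically before a response is sent. The sketch below is a rough illustration under the assumption that statements use the tag strings defined earlier; the `check_statement` helper and its regexes are hypothetical, not something this skill provides.

```python
import re

CERTAINTY_TAGS = ["[Confirmed]", "[Inferred]", "[Assumption]",
                  "[Unknown]", "[Need Confirmation]"]
SOURCE_TAG = re.compile(r"\[Source: (Code|Docs|External|Search|Knowledge|User)\]")
FILE_LINE_REF = re.compile(r"\S+\.\w+:\d+")  # e.g. src/auth/service.ts:45

def check_statement(statement: str) -> list[str]:
    """Return the checklist items a draft statement appears to miss."""
    missing = []
    if not any(tag in statement for tag in CERTAINTY_TAGS):
        missing.append("Certainty Classified: no certainty tag found")
    if not SOURCE_TAG.search(statement):
        missing.append("Source Type Tagged: no [Source: ...] tag found")
    if "[Confirmed]" in statement and not FILE_LINE_REF.search(statement):
        missing.append("Reference Cited: confirmed claim lacks a file:line reference")
    return missing

# Example: a confirmed claim with a source tag and a file:line citation passes all checks.
print(check_statement(
    "[Confirmed] [Source: Code] src/auth/service.ts:45 - JWT validation uses 'jsonwebtoken'"
))  # -> []
```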
### Configuration Detection
This skill supports project-specific language configuration for certainty tags.
#### Detection Order

- Check `CONTRIBUTING.md` for a "Certainty Tag Language" section
- If found, use the specified language (English / 中文)
- If not found, default to English tags
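
A minimal sketch of this detection order, assuming the `CONTRIBUTING.md` format shown in the configuration example below; `detect_tag_language` is a hypothetical helper used only to illustrate the logic.

```python
from pathlib import Path

def detect_tag_language(project_root: str = ".") -> str:
    """Apply the detection order: CONTRIBUTING.md section first, English as the fallback."""
    contributing = Path(project_root) / "CONTRIBUTING.md"
    if contributing.is_file():
        text = contributing.read_text(encoding="utf-8")
        if "Certainty Tag Language" in text:
            # Crude parse: look for the bolded language value used in the example config.
            return "中文" if "**中文**" in text else "English"
    return "English"  # no configuration found, default to English tags
```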
#### First-Time Setup

If no configuration is found and the context is unclear:

- Ask the user: "This project hasn't configured a certainty tag language preference. Which would you like to use? (English / 中文)"
- After the user selects, suggest documenting the choice in `CONTRIBUTING.md`:

      ## Certainty Tag Language
      This project uses **[English / 中文]** certainty tags.
      <!-- Options: English | 中文 -->
#### Configuration Example

In the project's `CONTRIBUTING.md`:

    ## Certainty Tag Language
    This project uses **English** certainty tags.

    ### Tag Reference
    - [Confirmed] - Direct evidence from code/docs
    - [Inferred] - Logical deduction from evidence
    - [Assumption] - Based on common patterns
    - [Unknown] - Information not available
    - [Need Confirmation] - Requires user clarification
License: CC BY 4.0 | Source: universal-doc-standards
## Quick Install

Copy and paste this command in Claude Code to install this skill:

    /plugin add https://github.com/AsiaOstrich/universal-dev-skills/tree/main/ai-collaboration-standards
