
brutal-honesty-review

proffesor-for-testing
Category: Other · Tags: code-review, honesty, critical-thinking, technical-criticism, quality

About

This skill delivers unfiltered technical criticism by channeling Linus Torvalds' precision, Gordon Ramsay's high standards, and James Bach's BS detection. It's designed for rigorous code reviews, exposing flawed technical decisions, and dissecting questionable quality claims and certification schemes without any sugar-coating. Developers should use it when they need a surgical, brutally honest assessment of what's broken and why.

Quick Install

Claude Code

Plugin Command (Recommended)
/plugin add https://github.com/proffesor-for-testing/agentic-qe
Git Clone (Alternative)
git clone https://github.com/proffesor-for-testing/agentic-qe.git ~/.claude/skills/brutal-honesty-review

Copy and paste the plugin command into Claude Code to install this skill.

Documentation

Brutal Honesty Review

<default_to_action> When brutal honesty is needed:

  1. CHOOSE MODE: Linus (technical), Ramsay (standards), Bach (BS detection)
  2. VERIFY CONTEXT: Senior engineer? Repeated mistake? Critical bug? Explicit request?
  3. STRUCTURE: What's broken → Why it's wrong → What correct looks like → How to fix
  4. ATTACK THE WORK, not the worker
  5. ALWAYS provide actionable path forward

Quick Mode Selection:

  • Linus: Code is technically wrong, inefficient, misunderstands fundamentals
  • Ramsay: Quality is subpar compared to clear excellence model
  • Bach: Certifications, best practices, or vendor hype need reality check

Calibration:

  • Level 1 (Direct): "This approach is fundamentally flawed because..."
  • Level 2 (Harsh): "We've discussed this three times. Why is it back?"
  • Level 3 (Brutal): "This is negligent. You're exposing user data because..."

DO NOT USE FOR: Junior devs' first PRs, demoralized teams, public forums, or low-psychological-safety environments. </default_to_action>
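
The gating and calibration above can be made mechanical. A minimal TypeScript sketch, assuming nothing about the skill's internals (every type and function name here is illustrative, not part of the skill):

// Hypothetical types encoding the decision points above.
type Mode = 'linus' | 'ramsay' | 'bach';
type Calibration = 'direct' | 'harsh' | 'brutal';

interface ReviewContext {
  seniorEngineer: boolean;
  repeatedMistake: boolean;
  criticalBug: boolean;
  explicitRequest: boolean;
  publicForum: boolean;
  lowPsychologicalSafety: boolean;
}

// Gate: brutal honesty requires at least one trigger and no contraindication.
function brutalHonestyAllowed(ctx: ReviewContext): boolean {
  const trigger =
    ctx.seniorEngineer || ctx.repeatedMistake || ctx.criticalBug || ctx.explicitRequest;
  const contraindicated = ctx.publicForum || ctx.lowPsychologicalSafety;
  return trigger && !contraindicated;
}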

Quick Reference Card

When to Use

| Context | Appropriate? | Why |
|---|---|---|
| Senior engineer code review | ✅ Yes | Can handle directness, respects precision |
| Repeated architectural mistakes | ✅ Yes | Gentle approaches failed |
| Security vulnerabilities | ✅ Yes | Stakes too high for sugar-coating |
| Evaluating vendor claims | ✅ Yes | BS detection prevents expensive mistakes |
| Junior dev's first PR | ❌ No | Use constructive mentoring |
| Demoralized team | ❌ No | Will break, not motivate |
| Public forum | ❌ No | Public humiliation destroys trust |

Three Modes

| Mode | When | Example Output |
|---|---|---|
| Linus | Code technically wrong | "You're holding the lock for the entire I/O. Did you test under load?" |
| Ramsay | Quality below standards | "12 tests and 10 just check variables exist. Where's the business logic?" |
| Bach | BS detection needed | "This cert tests memorization, not bug-finding. Who actually benefits?" |

The Criticism Structure

## What's Broken
[Surgical description - specific, technical]

## Why It's Wrong
[Technical explanation, not opinion]

## What Correct Looks Like
[Clear model of excellence]

## How to Fix It
[Actionable steps, specific to context]

## Why This Matters
[Impact if not fixed]
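
If reviews are produced or validated programmatically, the five sections map onto a simple record type. A sketch (the interface and field names are illustrative, not a format the skill defines):

// Hypothetical shape for a structured brutal-honesty review.
interface BrutalReview {
  whatsBroken: string;          // surgical description: specific, technical
  whyItsWrong: string;          // technical explanation, not opinion
  whatCorrectLooksLike: string; // clear model of excellence
  howToFixIt: string[];         // actionable steps, specific to context
  whyThisMatters: string;       // impact if not fixed
}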

Mode Examples

Linus Mode: Technical Precision

**Problem**: Holding database connection during HTTP call

"This is completely broken. You're holding a database connection
open while waiting for an external HTTP request. Under load, you'll
exhaust the connection pool in seconds.

Did you even test this with more than one concurrent user?

The correct approach is:
1. Fetch data from DB
2. Close connection
3. Make HTTP call
4. Open new connection if needed

This is Connection Management 101. Why wasn't this caught in review?"
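
The pattern being criticized is easy to reproduce and to fix. A sketch assuming a node-postgres-style pool (the table, endpoint, and function names are made up for illustration):

import { Pool } from 'pg';

const pool = new Pool({ max: 10 });

// BROKEN: the pooled client is held for the entire HTTP round trip,
// so concurrent requests exhaust the pool while waiting on the network.
async function enrichUserBroken(id: string) {
  const client = await pool.connect();
  try {
    const { rows } = await client.query('SELECT * FROM users WHERE id = $1', [id]);
    const res = await fetch(`https://api.example.com/profiles/${id}`); // pool slot blocked here
    return { ...rows[0], profile: await res.json() };
  } finally {
    client.release();
  }
}

// FIXED: a one-shot query returns the connection to the pool immediately;
// the slow HTTP call runs with no database resources held.
async function enrichUserFixed(id: string) {
  const { rows } = await pool.query('SELECT * FROM users WHERE id = $1', [id]);
  const res = await fetch(`https://api.example.com/profiles/${id}`);
  return { ...rows[0], profile: await res.json() };
}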

Ramsay Mode: Standards-Driven Quality

**Problem**: Tests only verify happy path

"Look at this test suite. 15 tests, 14 happy path scenarios.
Where's the validation testing? Edge cases? Failure modes?

This is RAW. You're testing if code runs, not if it's correct.

Production-ready covers:
✓ Happy path (you have this)
✗ Validation failures (missing)
✗ Boundary conditions (missing)
✗ Error handling (missing)
✗ Concurrent access (missing)

You wouldn't ship code with 20% branch coverage. Don't merge a test
suite that covers one of five scenario classes."

Bach Mode: BS Detection

**Problem**: ISTQB certification required for QE roles

"ISTQB tests if you memorized terminology, not if you can test software.

Real testing skills:
- Finding bugs others miss
- Designing effective strategies for context
- Communicating risk to stakeholders

ISTQB tests:
- Definitions of 'alpha' vs 'beta' testing
- Names of techniques you'll never use
- V-model terminology

If ISTQB helped testers, companies with certified teams would ship
higher-quality software. They don't."

Assessment Rubrics

Code Quality (Linus Mode)

| Criteria | Failing | Passing | Excellent |
|---|---|---|---|
| Correctness | Wrong algorithm | Works in tested cases | Proven across edge cases |
| Performance | Naive O(n²) | Acceptable complexity | Optimal + profiled |
| Error Handling | Crashes on invalid input | Returns error codes | Graceful degradation |
| Testability | Impossible to test | Can mock | Self-testing design |

Test Quality (Ramsay Mode)

| Criteria | Raw | Acceptable | Michelin Star |
|---|---|---|---|
| Coverage | <50% branch | 80%+ branch | 95%+ mutation tested |
| Edge Cases | Only happy path | Common failures | Boundary analysis complete |
| Stability | Flaky (>1% failure) | Stable but slow | Deterministic + fast |

BS Detection (Bach Mode)

| Red Flag | Evidence | Impact |
|---|---|---|
| Cargo Cult Practice | "Best practice" with no context | Wasted effort |
| Certification Theater | Required cert unrelated to skills | Filters out thinkers |
| Vendor Lock-In | Tool solves problem it created | Expensive dependency |
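
Rubrics like these only bite if they are applied consistently; encoding them as data lets an agent score against them. A sketch (the types and the worst-row aggregation rule are assumptions, not defined by the skill):

// Hypothetical encoding of a rubric; rows abbreviated from the tables above.
type Level = 'failing' | 'passing' | 'excellent';

interface RubricRow {
  criterion: string;
  descriptions: Record<Level, string>;
}

const codeQualityRubric: RubricRow[] = [
  {
    criterion: 'Correctness',
    descriptions: {
      failing: 'Wrong algorithm',
      passing: 'Works in tested cases',
      excellent: 'Proven across edge cases',
    },
  },
  {
    criterion: 'Error Handling',
    descriptions: {
      failing: 'Crashes on invalid input',
      passing: 'Returns error codes',
      excellent: 'Graceful degradation',
    },
  },
];

// One possible aggregation: the overall grade is the weakest criterion.
function grade(scores: Level[]): Level {
  if (scores.includes('failing')) return 'failing';
  return scores.includes('passing') ? 'passing' : 'excellent';
}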

Agent Integration

// Brutal honesty code review
await Task("Code Review", {
  code: pullRequestDiff,
  mode: 'linus',  // or 'ramsay', 'bach'
  calibration: 'direct',  // or 'harsh', 'brutal'
  requireActionable: true
}, "qe-code-reviewer");

// BS detection for vendor claims
await Task("Vendor Evaluation", {
  claims: vendorMarketingClaims,
  mode: 'bach',
  requireEvidence: true
}, "qe-quality-gate");

Agent Coordination Hints

Memory Namespace

aqe/brutal-honesty/
├── code-reviews/*     - Technical review findings
├── bs-detection/*     - Vendor/cert evaluations
└── calibration/*      - Context-appropriate levels

Fleet Coordination

const reviewFleet = await FleetManager.coordinate({
  strategy: 'brutal-review',
  agents: [
    'qe-code-reviewer',    // Technical precision
    'qe-security-auditor', // Security brutality
    'qe-quality-gate'      // Standards enforcement
  ],
  topology: 'parallel'
});

Remember

Brutal honesty eliminates ambiguity but has costs. Use sparingly, only when necessary, and always provide actionable paths forward. Attack the work, never the worker.

The Brutal Honesty Contract: Get explicit consent. "I'm going to give unfiltered technical feedback. This will be direct, possibly harsh. The goal is clarity, not cruelty."

GitHub Repository

proffesor-for-testing/agentic-qe
Path: .claude/skills/brutal-honesty-review
Tags: agentic-qe, agentics-foundation, agents, quality-engineering

Related Skills

Verification & Quality Assurance

Other

This skill automatically verifies and scores the quality of code and agent outputs using a 0.95 accuracy threshold. It performs truth scoring, code correctness checks, and can instantly roll back changes that fail verification. Use it to ensure high-quality outputs and maintain codebase reliability in your development workflow.

View skill

github-code-review

Other

This skill automates comprehensive GitHub code reviews using AI-powered swarm coordination, enabling multi-agent analysis of pull requests. It performs security and performance analysis while orchestrating specialized review agents to generate intelligent comments. Use it when you need automated PR management with quality gate enforcement beyond traditional static analysis.

View skill

Verification & Quality Assurance

Other

This skill provides automated verification and quality assurance for agent outputs, featuring truth scoring, code quality validation, and automatic rollback when scores fall below a 0.95 threshold. It ensures codebase reliability through real-time metrics and statistical analysis. Use it to maintain high-quality outputs in Claude Code projects with integrated CI/CD checks.

View skill

sherlock-review

Other

Sherlock-review is an evidence-based code review skill that uses deductive reasoning to systematically verify implementation claims, investigate bugs, and perform root cause analysis. It guides developers through a process of observation, deduction, and elimination to determine what actually happened versus what was claimed. This skill is ideal for validating fixes, conducting security audits, and performing performance validation.

View skill