User Guide: Smart Metrics System
Version: 2.1.0
Feature: Automatic Behavioral Effectiveness Tracking
Token Cost: 0 (background collection, never injected)
📋 Table of Contents
- What Are Smart Metrics?
- What We Track
- Using the /metrics Command
- Understanding Metrics Data
- Privacy & Data Storage
- Optimization Based on Metrics
- Troubleshooting
- FAQ
What Are Smart Metrics?
Smart metrics automatically track how effectively Mini-CoderBrain works for you.
Think of metrics like:
- Fitness tracker for your development workflow
- Performance dashboard for AI behavior
- Analytics for productivity
- A/B testing for profile effectiveness
Key Features
✅ Zero Overhead: Collected in the background, no user action required
✅ Privacy-First: No sensitive content stored (only metadata)
✅ Actionable: Use data to optimize your workflow
✅ Profile Comparison: See which profiles work best
✅ Tool Insights: Understand your tool usage patterns
What We Track
1. Tool Usage Metrics
What: Which tools used, how often
Examples:
Read: 15 uses
Edit: 10 uses
Bash: 8 uses
Grep: 6 uses
Glob: 4 uses
Why Useful:
- Identify most-used tools
- Spot inefficiencies (too many Bash commands?)
- Validate tool selection patterns
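If you want to aggregate these counts yourself, the stored session files (described under Privacy & Data Storage) are plain JSON and can be summed with jq. This is a sketch against a throwaway sample file in `/tmp`; in a real project the files live in `.claude/metrics/sessions/`:

```shell
# Create a throwaway sample session file matching the documented schema
mkdir -p /tmp/metrics-demo
cat > /tmp/metrics-demo/sessionstart-1.json <<'EOF'
{"metrics": {"tool_usage": {"Read": 15, "Edit": 10, "Bash": 8}}}
EOF

# Sum tool counts across every session file into one combined object
jq -s '[.[].metrics.tool_usage | to_entries[]]
       | group_by(.key)
       | map({(.[0].key): (map(.value) | add)})
       | add' /tmp/metrics-demo/*.json
```

Point the same one-liner at `.claude/metrics/sessions/*.json` to see your real totals.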
2. Session Metrics
What: Session characteristics and outcomes
Tracked:
- Duration (minutes)
- Operation count
- Active profile
- Tasks completed
- Errors encountered
Example:
{
"duration_minutes": 45,
"tool_uses": 47,
"profile": "default",
"tasks_completed": 3,
"errors": 1
}
Why Useful:
- Measure productivity
- Compare session effectiveness
- Track progress over time
3. Profile Performance
What: How profiles perform for you
Tracked:
- Token usage (actual vs estimated)
- Response characteristics
- Suggestion frequency
- User satisfaction indicators
Example:
focus profile:
- Avg duration: 32 mins
- Avg operations: 38/session
- Token usage: 148 (vs 150 estimated)
- User corrections: 0.2/session
Why Useful:
- Choose optimal profile for tasks
- Validate profile effectiveness
- Customize profiles based on data
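Per-profile averages like the ones above can be computed directly from the stored session files. A sketch, using throwaway sample files that mirror the schema shown under Privacy & Data Storage (`profile` plus `metrics.session_outcome.duration_minutes`):

```shell
# Throwaway sample sessions (same shape as the stored files)
mkdir -p /tmp/profile-demo
echo '{"profile":"focus","metrics":{"session_outcome":{"duration_minutes":30}}}' > /tmp/profile-demo/s1.json
echo '{"profile":"focus","metrics":{"session_outcome":{"duration_minutes":34}}}' > /tmp/profile-demo/s2.json
echo '{"profile":"default","metrics":{"session_outcome":{"duration_minutes":45}}}' > /tmp/profile-demo/s3.json

# Average session duration per profile
jq -s 'group_by(.profile)
       | map({profile: .[0].profile,
              avg_minutes: (map(.metrics.session_outcome.duration_minutes)
                            | add / length)})' /tmp/profile-demo/*.json
```

Swap in `.claude/metrics/sessions/*.json` to compare your own profiles, which is essentially what `/metrics --profile` reports.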
4. Behavioral Quality (Future)
Planned for v2.2:
- Context check compliance
- Banned questions counter
- Proactive suggestion quality
What We DON'T Track
❌ Code content (no file contents stored)
❌ File names (only tool usage counts)
❌ Commit messages (no git data)
❌ Personal information (no user data)
❌ Conversation content (only metadata)
Privacy-First Design: Only behavioral metadata collected
Using the /metrics Command
Basic Usage
View Current Session
/metrics
Output:
📊 METRICS - Current Session (45 mins, default profile)
Tool Usage:
Read: 12 | Edit: 8 | Bash: 4
Grep: 5 | Glob: 3 | Write: 2
Behavioral Quality:
✅ Metrics collection active
📊 Full metrics available at session end
---
Metrics file: sessionstart-1697123456.json
When to Use: Check progress during long session
View Weekly Summary
/metrics --weekly
Output:
📊 METRICS - Last 7 Days (8 sessions)
Profile Usage:
default: 5 sessions (62.5%)
focus: 2 sessions (25%)
research: 1 session (12.5%)
Top Tools:
1. Read: 98 uses
2. Edit: 67 uses
3. Bash: 45 uses
4. Grep: 34 uses
5. Glob: 23 uses
Session Outcomes (Avg):
Duration: 38 mins
Tasks: 2.4/session
Errors: 0.6/session
---
Data from 8 sessions
When to Use: Weekly workflow review
Compare Profiles
/metrics --profile
Output:
📊 METRICS - Profile Comparison (Last 30 days)
Profile: default (15 sessions)
Avg Operations: 47/session
Avg Duration: 45 mins
Profile: focus (8 sessions)
Avg Operations: 38/session
Avg Duration: 32 mins
Profile: research (4 sessions)
Avg Operations: 52/session
Avg Duration: 62 mins
---
Compare profiles to optimize workflow
When to Use: Deciding which profile to use
Full Report
/metrics --all
Output: Combined view of session + weekly + profile data
When to Use: Comprehensive analysis
Understanding Metrics Data
Tool Usage Insights
High Read Count
Read: 50 | Edit: 10
Interpretation: Good balance, reading before editing
Action: None needed; this is a healthy pattern
High Write Count
Write: 20 | Edit: 5
Interpretation: Creating many new files
Potential Issue: Should some be edits instead?
Action: Review whether you are refactoring existing code or writing new files
High Bash Count
Bash: 30 | Read: 10
Interpretation: Many terminal commands
Potential Issue: Using Bash instead of specialized tools?
Action: Review the tool-selection-rules pattern
Session Duration Patterns
Short Sessions (< 20 mins)
Possible Reasons:
- Quick bug fixes
- Focused tasks (good!)
- Context issues (investigate)
Long Sessions (> 90 mins)
Possible Reasons:
- Complex features (normal)
- Flow state (great!)
- Very long stretches: consider taking breaks
Profile Effectiveness
Metric: Operations Per Session
focus: 38 ops/session
default: 47 ops/session
research: 52 ops/session
Interpretation:
- Focus: Terse output = fewer operations needed
- Default: Balanced
- Research: Verbose = more operations (but deeper work)
Not a competition: Different profiles for different goals!
Metric: Session Duration
focus: 32 mins avg
default: 45 mins avg
research: 62 mins avg
Interpretation:
- Focus: Faster execution (goal achieved!)
- Research: Longer exploration (by design)
- Default: Balanced middle ground
Match to task: Choose profile based on time available
Privacy & Data Storage
What's Stored
Location: .claude/metrics/sessions/*.json
Format: JSON files, one per session
Size: ~1-2 KB per session
Retention: 30 days (auto-archived)
Example Structure:
{
"session_id": "sessionstart-1697123456",
"timestamp": "2025-10-15T14:30:00Z",
"profile": "default",
"metrics": {
"tool_usage": {
"Read": 12,
"Edit": 8
},
"session_outcome": {
"duration_minutes": 45,
"tool_uses": 47
}
}
}
No Sensitive Data: Only tool names and counts, no file paths or content
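Because each session file is plain JSON, individual fields can be pulled out with jq. A sketch using a trimmed copy of the example above (written to a throwaway `/tmp` path; your real files are under `.claude/metrics/sessions/`):

```shell
# Recreate a trimmed version of the example session file for illustration
mkdir -p /tmp/metrics-read-demo
cat > /tmp/metrics-read-demo/sessionstart-1697123456.json <<'EOF'
{
  "session_id": "sessionstart-1697123456",
  "profile": "default",
  "metrics": {
    "tool_usage": {"Read": 12, "Edit": 8},
    "session_outcome": {"duration_minutes": 45, "tool_uses": 47}
  }
}
EOF

# Pull out individual fields
jq -r '.profile' /tmp/metrics-read-demo/*.json                                # default
jq '.metrics.session_outcome.duration_minutes' /tmp/metrics-read-demo/*.json  # 45
```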
Data Management
View Stored Metrics
ls .claude/metrics/sessions/
Delete Metrics Manually
rm .claude/metrics/sessions/*.json
Auto-Cleanup
- Runs at session end
- Archives files older than 30 days
- Moved to .claude/metrics/archive/YYYY-MM/
- Never deletes (archives preserve history)
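The archival step can be sketched roughly like this. This is an illustrative simulation in a throwaway `/tmp` directory, not the actual hook implementation, which may differ:

```shell
# Simulated layout mirroring .claude/metrics/{sessions,archive/YYYY-MM}
SESSIONS=/tmp/archive-demo/sessions
ARCHIVE=/tmp/archive-demo/archive/$(date +%Y-%m)
mkdir -p "$SESSIONS" "$ARCHIVE"

# One stale file (backdated to Jan 2020) and one fresh file
touch -t 202001010000 "$SESSIONS/old-session.json"
touch "$SESSIONS/new-session.json"

# Move session files older than 30 days into the monthly archive folder
find "$SESSIONS" -name '*.json' -mtime +30 -exec mv {} "$ARCHIVE/" \;
```

After the move, `old-session.json` sits in the monthly archive folder while `new-session.json` stays in place — nothing is ever deleted.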
Privacy Guarantees
✅ Local Only: Never sent to external services
✅ No Code: File contents never stored
✅ No Paths: File names not tracked
✅ Metadata Only: Tool usage counts only
✅ User Control: Delete anytime
Optimization Based on Metrics
Scenario 1: Too Many Bash Commands
Metrics Show:
Bash: 40 | Read: 10 | Edit: 8
Analysis: Using Bash excessively
Optimization:
- Review last session: What Bash commands?
- Check tool-selection-rules pattern
- Use specialized tools instead:
  - cat file.txt → use the Read tool
  - grep "pattern" → use the Grep tool
  - find . -name → use the Glob tool
Expected Result: Bash count drops, Read/Grep/Glob increase
Scenario 2: Profile Mismatch
Metrics Show:
default profile:
- Duration: 65 mins (long!)
- Operations: 85 (high!)
- Tasks completed: 1 (low!)
Analysis: Default might be too verbose for this work
Optimization:
- Try focus profile for this task type
- Compare metrics after switch
- Stick with faster profile if better
Expected Result: Shorter duration, fewer ops, more tasks
Scenario 3: Low Productivity
Metrics Show:
Weekly:
- 10 sessions
- Avg tasks: 0.8/session (low!)
- Avg errors: 2.3/session (high!)
Analysis: Something's not working well
Optimization:
- Check error patterns (Git issues? Build failures?)
- Review context quality:
/validate-context
- Consider profile change
- Check if project structure detection worked
Expected Result: Task completion improves, errors decrease
Troubleshooting
Problem: No Metrics Data
Symptom: /metrics shows "No session data available"
Causes:
- First session (no data yet)
- Metrics collection not started
- jq not installed
Solutions:
# Check metrics folder exists
ls .claude/metrics/sessions/
# Check if jq installed
which jq
# Install jq if missing
# Ubuntu/Debian:
sudo apt-get install jq
# macOS:
brew install jq
# Restart Claude Code
Problem: Metrics File Not Created
Symptom: Session completed but no metrics file
Causes:
- Session too short (< 1 min)
- Metrics collector script not executable
- Session ended abnormally
Solutions:
# Make metrics collector executable
chmod +x .claude/hooks/metrics-collector.sh
chmod +x .claude/hooks/metrics-report.sh
# Test manually
.claude/hooks/metrics-collector.sh init
# Check if file created
ls .claude/metrics/sessions/
Problem: Inaccurate Metrics
Symptom: Numbers don't match expectations
Causes:
- Metrics collected at session end (incomplete mid-session)
- Multiple sessions overlapping
- Manual file operations not tracked
Solutions:
- Wait for session end for accurate totals
- Avoid running multiple Claude instances
- Metrics track AI operations, not manual edits
FAQ
Q: Do metrics slow down Claude?
A: No! Collection happens in background, zero performance impact.
Q: Can I disable metrics?
A: Yes! Remove or comment out metrics calls in hooks. But why? Zero overhead!
Q: Are metrics sent to external servers?
A: No! Everything is stored locally in .claude/metrics/.
Q: Can I export metrics?
A: Yes! Files are JSON, easy to parse and analyze externally.
Q: Do metrics affect token usage?
A: No! Collection is background only, zero token cost.
Q: How long are metrics kept?
A: 30 days active, then archived. Never deleted, always accessible.
Q: Can I see metrics for other team members?
A: No, metrics are per-machine. Each developer has their own.
Q: What if I delete metrics folder?
A: Metrics restart fresh. No impact on Mini-CoderBrain functionality.
🎯 Key Takeaways
- Automatic Collection: Zero user action required
- Privacy-First: No sensitive data, only metadata
- Tool Insights: Understand your workflow patterns
- Profile Comparison: Data-driven profile selection
- Optimization: Use metrics to improve productivity
- Local Storage: Everything stays on your machine
- Zero Cost: No token impact, no performance impact
📚 Related Documentation
- Behavior Profiles Guide - Compare profile performance with metrics
- Patterns Guide - Patterns define what metrics track
- Migration Guide - Metrics are new in v2.1
- Metrics System - Technical details
💡 Pro Tips
Tip 1: Weekly Review Habit
# Every Monday morning:
/metrics --weekly
# Review patterns, adjust workflow
Tip 2: Profile Experimentation
# Try new profile for a week
behavior_profile: "focus"
# After week, compare:
/metrics --profile
# Data-driven decision!
Tip 3: Export for Analysis
# Export metrics to CSV
jq -r '.metrics.tool_usage | to_entries | .[] | [.key, .value] | @csv' \
.claude/metrics/sessions/*.json > tool_usage.csv
# Analyze in spreadsheet
Tip 4: Team Insights
Share aggregated (anonymized) metrics with team to identify best practices!
Data-driven development with zero overhead! 🚀
Need help? Open an issue: GitHub Issues