# Quality Signal

## One number: 0–10,000

sentrux computes a single continuous score called Quality Signal. It is the geometric mean of 5 root-cause metrics, each normalized to [0, 1], then scaled to 0–10,000.
```
quality_signal = (modularity × acyclicity × depth × equality × redundancy) ^ (1/5) × 10000
```

## Why geometric mean?

Nash's bargaining theorem (1950), the basis of Nash social welfare, proves the geometric mean is the unique aggregation satisfying:
- Pareto optimality — if all scores improve, signal improves
- Symmetry — all root causes weighted equally
- Independence — irrelevant dimensions don’t affect the result
Practically, you cannot game one metric at the expense of another: a factor near zero drags the whole product toward zero, so the only way to raise the geometric mean is to improve all factors together. This forces genuine architectural improvement.
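The aggregation can be sketched in a few lines of Python. This is a minimal illustration of the formula, not sentrux's implementation, and the metric values below are made up:

```python
def quality_signal(modularity, acyclicity, depth, equality, redundancy):
    """Geometric mean of the five normalized metrics, scaled to 0-10,000."""
    factors = [modularity, acyclicity, depth, equality, redundancy]
    product = 1.0
    for f in factors:
        product *= f
    return product ** (1 / 5) * 10000

# Gaming two metrics while tanking a third does not pay:
balanced = quality_signal(0.6, 0.6, 0.6, 0.6, 0.6)  # ~6000
gamed = quality_signal(1.0, 1.0, 0.6, 0.6, 0.1)     # lower than balanced
```

Note that the gamed profile has a *higher* arithmetic mean (0.66 vs 0.6), yet a lower geometric mean: the tanked factor of 0.1 dominates the product.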
## Why not letter grades?

Letter grades create artificial boundaries: score 79 = B, score 81 = A. AI agents game the boundary instead of genuinely improving. With continuous scores, every +1 matters equally. The agent converges naturally when improvements become marginal, like gradient descent.
## Why not 20 proxy metrics?

Proxy metrics (coupling ratio, dead-code percentage, function length) measure symptoms, not root causes. An AI agent can game individual proxies: add fake imports to boost cohesion, split functions superficially. The metrics improve but the code does not.
Root cause metrics measure fundamental structural properties that can only improve through genuine architectural change.
## The 5 root causes

A codebase is a directed graph G = (V, E), where V is the set of files and E the set of dependencies. This graph has exactly 5 independent structural properties:
| Metric | Theory | What it measures | Edge or Node? |
|---|---|---|---|
| Modularity | Newman 2004 | Do edges cluster into modules? | Edge |
| Acyclicity | Martin 2003 | Are there circular edges? | Edge |
| Depth | Lakos 1996 | How deep are edge chains? | Edge |
| Equality | Gini 1912 | Are node properties concentrated? | Node |
| Redundancy | Kolmogorov 1963 | Are there unnecessary nodes? | Node |
3 edge properties + 2 node properties = 5 total. Adding more would be redundant.
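The graph model can be made concrete with a toy file-to-imports mapping. The file names and the DFS helper below are hypothetical illustrations, not part of sentrux:

```python
# Toy dependency graph: each file maps to the files it imports.
deps = {
    "app.py":    ["utils.py", "models.py"],
    "models.py": ["utils.py"],
    "utils.py":  [],
    "legacy.py": ["app.py"],
}

# Edge properties (modularity, acyclicity, depth) operate on the arrows;
# node properties (equality, redundancy) operate on the files themselves.
num_nodes = len(deps)
num_edges = sum(len(targets) for targets in deps.values())

def has_cycle(graph):
    """Depth-first cycle check: a back edge to a gray node means a cycle."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {v: WHITE for v in graph}
    def visit(v):
        color[v] = GRAY
        for w in graph.get(v, []):
            if color[w] == GRAY or (color[w] == WHITE and visit(w)):
                return True
        color[v] = BLACK
        return False
    return any(color[v] == WHITE and visit(v) for v in graph)
```

Adding a single edge from `utils.py` back to `legacy.py` would close a cycle and make `has_cycle` return True, which is what the acyclicity metric penalizes.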
## Normalization

```
modularity = (Q + 0.5) / 1.5       # bounded [-0.5, 1] → [0, 1]
acyclicity = 1 / (1 + cycles)      # unbounded → sigmoid
depth      = 1 / (1 + max_depth/8) # unbounded → sigmoid, midpoint 8
equality   = 1 - gini              # bounded [0, 1] → invert
redundancy = 1 - ratio             # bounded [0, 1] → invert
```

3 of the 5 metrics have zero arbitrary parameters; only the 2 sigmoid normalizations introduce one.
## Convergence

```
Iteration 1: signal = 5800 → equality is lowest (3500)   → refactor god function  → equality improves   → signal = 6300
Iteration 2: signal = 6300 → modularity is lowest (5500) → extract module         → modularity improves → signal = 6900
Iteration 3: signal = 6900 → redundancy is lowest (7200) → remove dead functions  → redundancy improves → signal = 7400
...diminishing returns → natural convergence
```

There is no artificial stopping point. The AI converges when marginal improvement per change approaches zero, exactly like gradient descent.
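The convergence dynamic can be simulated with a toy loop that always improves the weakest metric. This is a hypothetical model, assuming each fix closes half the remaining gap to 1.0; it is not how sentrux scores real refactors:

```python
def improve_weakest(scores, steps=8, gain=0.5):
    """Each iteration: compute the signal, then improve the lowest metric
    by closing half its gap to 1.0. Gains shrink, so the signal converges."""
    history = []
    for _ in range(steps):
        product = 1.0
        for s in scores.values():
            product *= s
        history.append(round(product ** (1 / len(scores)) * 10000))
        weakest = min(scores, key=scores.get)
        scores[weakest] += gain * (1.0 - scores[weakest])
    return history

scores = {"modularity": 0.55, "acyclicity": 0.80, "depth": 0.75,
          "equality": 0.35, "redundancy": 0.72}
history = improve_weakest(scores)
```

The history rises monotonically while the per-step gain shrinks toward zero, mirroring the diminishing-returns trace above.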
## Theoretical foundation

| Theory | Year | What it provides |
|---|---|---|
| Cybernetics (Wiener) | 1948 | Feedback loop architecture |
| Systems Engineering (Tsien) | 1954 | Decomposition into independent subsystems |
| Kolmogorov Complexity | 1963 | Theoretical ground truth |
| Graph Modularity (Newman) | 2004 | Modularity Q metric |
| Nash Bargaining | 1950 | Geometric mean as optimal aggregation |
| Gini Coefficient | 1912 | Inequality measurement |
| Tarjan’s Algorithm | 1972 | Cycle detection |
| Lakos Levelization | 1996 | Dependency depth |