The complete scoring algorithm for HICS (H33 Independent Code Scoring). Every weight, threshold, and deduction rule is published here for public audit. The algorithm is the authority.
Final = (Crypto × 0.30) + (Vuln × 0.25) + (Data × 0.20) + (Ops × 0.15) + (Health × 0.10)
Each category is scored independently from 0 to 100. The final score is the weighted sum, rounded to the nearest integer. Grade thresholds:
| Grade | Score Range |
|---|---|
| A | 90 – 100 |
| B | 80 – 89 |
| C | 70 – 79 |
| D | 60 – 69 |
| F | 50 – 59 |
| F- | 0 – 49 |
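Applied in code, the formula and grade bands above look like this (a minimal illustration; function and variable names are ours, not part of the published algorithm):

```python
# Weights and grade bands as published on this page.
WEIGHTS = {"crypto": 0.30, "vuln": 0.25, "data": 0.20, "ops": 0.15, "health": 0.10}
GRADE_FLOORS = [(90, "A"), (80, "B"), (70, "C"), (60, "D"), (50, "F"), (0, "F-")]

def final_score(category_scores: dict) -> int:
    """Weighted sum of the five category scores, rounded to the nearest integer."""
    return round(sum(category_scores[cat] * w for cat, w in WEIGHTS.items()))

def grade(score: int) -> str:
    """Map a final score to its letter grade."""
    return next(letter for floor, letter in GRADE_FLOORS if score >= floor)

score = final_score({"crypto": 95, "vuln": 88, "data": 90, "ops": 70, "health": 60})
# score == 85, grade(score) == "B"
```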
**Crypto (30%).** Evaluates the codebase's cryptographic posture: post-quantum-vulnerable algorithms, classical cryptographic misuse, key-management failures, and transport security issues.

**Vuln (25%).** Detects injection attacks, authentication-bypass patterns, XSS, SSRF, and hardcoded credentials. AST-based: uses tree-sitter to distinguish real assignments from match arms and classifiers.

**Data (20%).** Evaluates PII handling, encryption at rest, GDPR/HIPAA compliance patterns, and browser-side data exposure.

**Ops (15%).** Evaluates error handling, external-service resilience, rate limiting, and observability. Also flags .unwrap() (not .expect()) in non-test, non-startup code, grouped by file.

**Health (10%).** Evaluates test coverage, CI/CD, code complexity, and project hygiene. Advisory: these findings are not security-critical.
Every finding carries a confidence score between 0.0 and 1.0. The actual deduction is:
deduction = base_deduction × confidence
A finding with base deduction 8.0 and confidence 0.60 deducts 4.8 points, not 8.0. This eliminates the binary pass/fail problem. Low-confidence findings (pattern matches in ambiguous contexts) are automatically softened.
Shannon entropy determines confidence for hardcoded secret detection. High entropy (>4.5) = likely real secret (confidence 0.95). Low entropy (<3.0) = likely placeholder (confidence 0.30).
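As an illustration, the entropy heuristic and the confidence-scaled deduction can be sketched as below. The page publishes only the two thresholds; the 0.60 value for the ambiguous middle band is our assumption:

```python
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits per character: -sum of p(c) * log2(p(c)) over the string's characters."""
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in Counter(s).values())

def secret_confidence(candidate: str) -> float:
    """Published thresholds: >4.5 gives 0.95, <3.0 gives 0.30; the middle band is assumed."""
    h = shannon_entropy(candidate)
    if h > 4.5:
        return 0.95  # high entropy: likely a real secret
    if h < 3.0:
        return 0.30  # low entropy: likely a placeholder
    return 0.60      # ambiguous band (assumed value, not published)

# "CHANGE_ME" has entropy of roughly 2.95 bits/char, so it is softened as a placeholder:
deduction = 8.0 * secret_confidence("CHANGE_ME")  # 8.0 * 0.30 = 2.4 points
```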
Each finding type has a maximum total deduction (density cap) to prevent a single issue pattern from overwhelming the score:
| Finding Type | Cap (pts) |
|---|---|
| Crypto: PQ-vulnerable key exchange | 15 – 22 |
| Crypto: Weak hash/cipher | 9 – 20 |
| Vuln: SQL injection | 24 |
| Vuln: Command injection | 18 |
| Ops: No error handling | 9 |
| Ops: No timeout | 12 |
| Ops: Panic on input | 8 |
| Health: High complexity | 5 |
| Health: Long function | 4 |
| Health: Large file | 3 |
| No cap: Hardcoded secrets, JWT none, SSN, credit card, CVV, plaintext password | Unlimited |
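A sketch of how density caps might be applied when summing deductions. The finding-type keys and the input shape are hypothetical; cap values follow the table above, taking single points where the table lists a range:

```python
# Excerpt of the cap table; None marks the "No cap" finding types.
CAPS = {
    "vuln_sql_injection": 24.0,
    "ops_no_timeout": 12.0,
    "health_long_function": 4.0,
    "hardcoded_secret": None,  # deducts without limit
}

def total_deductions(findings):
    """findings: list of (finding_type, deduction). Sum per type, then cap each sum."""
    sums = {}
    for ftype, ded in findings:
        sums[ftype] = sums.get(ftype, 0.0) + ded
    total = 0.0
    for ftype, s in sums.items():
        cap = CAPS.get(ftype)
        total += s if cap is None else min(s, cap)
    return total

# Ten SQL injections at 5.0 each would sum to 50, but the type is capped at 24:
total_deductions([("vuln_sql_injection", 5.0)] * 10)  # 24.0
```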
Findings in test code receive 25% of production weight: a finding that deducts 8.0 in production code deducts 2.0 in test code. Test code is identified by markers such as #[cfg(test)] in Rust files.

Certain findings force the entire category to 0/100.
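The test-code discount composes with the confidence scaling described earlier; a minimal sketch (names are ours):

```python
TEST_WEIGHT = 0.25  # test-code findings carry 25% of production weight

def effective_deduction(base: float, confidence: float, in_test_code: bool) -> float:
    """Confidence-scaled deduction, softened further when the finding is in test code."""
    d = base * confidence
    return d * TEST_WEIGHT if in_test_code else d

effective_deduction(8.0, 1.0, in_test_code=True)  # 2.0, matching the example above
```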
Post-quantum cryptographic usage earns positive credits, capped at +15 per category:
| Detection | Credit |
|---|---|
| Kyber / ML-KEM usage | +4.0 |
| Dilithium / ML-DSA usage | +4.0 |
| FALCON usage | +3.0 |
| SPHINCS+ / SLH-DSA usage | +2.0 |
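Credit accumulation under the +15 per-category cap can be sketched as follows (the detection keys are hypothetical shorthand for the table rows above):

```python
# Credits per detected post-quantum algorithm, from the table above.
PQ_CREDITS = {"kyber": 4.0, "dilithium": 4.0, "falcon": 3.0, "sphincs+": 2.0}
CREDIT_CAP = 15.0  # maximum positive credit per category

def pq_credit(detections) -> float:
    """Sum credits over all detections, capped at +15 for the category."""
    return min(sum(PQ_CREDITS[d] for d in detections), CREDIT_CAP)

pq_credit(["kyber", "dilithium", "falcon", "sphincs+"])  # 13.0, under the cap
```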
The following directories are excluded from scanning (not core application code):
node_modules, vendor, target, dist, build, docs, blog, public, k8s, deploy, helm, terraform, coverage, examples, benches, benchmarks, programs, contracts, migrations, formal, fuzz, sdk, wasm-verifier
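A sketch of the exclusion filter, assuming exclusion applies when any directory component of a path matches the list:

```python
from pathlib import Path

EXCLUDED_DIRS = {
    "node_modules", "vendor", "target", "dist", "build", "docs", "blog",
    "public", "k8s", "deploy", "helm", "terraform", "coverage", "examples",
    "benches", "benchmarks", "programs", "contracts", "migrations", "formal",
    "fuzz", "sdk", "wasm-verifier",
}

def in_scope(path: str) -> bool:
    """True if no directory component of the path is an excluded directory."""
    return not any(part in EXCLUDED_DIRS for part in Path(path).parts[:-1])

in_scope("src/main.rs")              # True: regular source file
in_scope("node_modules/x/index.js")  # False: excluded dependency tree
```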
HICS uses tree-sitter AST parsing for Rust, Python, JavaScript, and TypeScript, replacing string matching with structural analysis; for example, #[cfg(test)] attribute nodes are used to split test from production code. Files without AST support fall back to line-by-line pattern matching with reduced confidence.
The scoring results are cryptographically sealed.
H33 may update the scoring algorithm at any time. Substantive changes (weight modifications, new finding types, threshold changes) increment the algorithm version number and are documented on this page. Historical scores reflect the algorithm version active at time of scan. Scores are not retroactively updated.
This methodology is open for public audit. The formula, weights, thresholds, and finding type definitions are published here in full. The implementation (AST scanners, STARK proof generation, Dilithium signing) is proprietary. The methodology is transparent. The technology is licensed. Anyone can verify the math. The algorithm is the authority.