Standards · 10 min read

The HICI Formula:
How We Score Software Without Seeing the Code

The complete HICI (H33 Independent Code Index) scoring formula — 6 weighted dimensions, open-source evaluation circuits, STARK-attested, Dilithium-signed. Why we built it, how it works, and why we open-sourced it.

6 dimensions · 0 lines of code exposed · 3-Key post-quantum signature · Apache 2.0 license

Every enterprise procurement team faces the same problem: how do you evaluate a vendor’s software quality before you buy it? The traditional answer is code review. The real answer is zero-knowledge evaluation. HICI makes it possible — and we’re publishing the formula because a standard isn’t a standard if only one company controls it.

Table of Contents

  1. Why We Built HICI
  2. The Formula
  3. The Security Cap
  4. What Each Dimension Measures
  5. The Grade Scale
  6. Why It’s Zero-Knowledge
  7. Why We Open-Sourced It
  8. The Relationship to HATS
  9. Get Your HICI Score
1. Why We Built HICI

Every enterprise procurement team faces the same problem: how do you evaluate a vendor’s software quality before you buy it?

The traditional answer is code review. Send your engineers into the vendor’s codebase for two weeks. Cost: $50K–$100K. Timeline: weeks. And it requires the vendor to hand over their source code — their most valuable intellectual property — to a potential customer who might not buy.

Most vendors refuse. Most buyers skip the review. Both sides lose.

HICI solves this by making code evaluation zero-knowledge. The vendor runs the evaluation locally. The code never leaves their machine. The output is a cryptographic proof that the evaluation ran correctly and a scored grade across six dimensions. The buyer sees the grade. The vendor keeps their code. The math proves it’s honest.

Why publish the formula? A scoring standard controlled by a single company isn’t a standard — it’s a product feature. We want HICI to be the S&P rating for software. That requires transparency. HICI is Apache 2.0. Anyone can audit it, fork it, or build on it.
2. The Formula

Six dimensions. Six weights. One score.

HICI = Σ(wᵢ × Dᵢ) for i ∈ {1..6}
| Dimension | Symbol | Weight | Why This Weight |
|---|---|---|---|
| Code Quality | D₁ | 0.20 | The foundation. Poor code quality compounds into every other dimension. |
| Security Posture | D₂ | 0.25 | The highest weight. Security failures don’t degrade gracefully — they cascade. A single unpatched CVE or exposed secret can invalidate everything else. |
| Architecture | D₃ | 0.15 | Good architecture makes everything else easier. Bad architecture makes everything else harder. But it’s fixable with effort. |
| Performance | D₄ | 0.15 | Performance problems are real but bounded. A slow system is still a working system. |
| Compliance | D₅ | 0.15 | Regulatory alignment matters for enterprise buyers. License compatibility, PII handling, audit trails. |
| Maintenance Risk | D₆ | 0.10 | The lowest weight because it measures future risk, not current state. Important for long-term procurement decisions but shouldn’t dominate the score. |

The full computation:

HICI_score = 0.20(D₁) + 0.25(D₂) + 0.15(D₃) + 0.15(D₄) + 0.15(D₅) + 0.10(D₆)

Each dimension produces a score from 0 to 100. The weighted sum produces the final HICI score. The weights sum to 1.00. The output is a single number on a 0–100 scale that maps to a letter grade.
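The computation is a plain weighted sum. A minimal sketch, assuming the six dimension scores have already been produced by the evaluation circuits:

```python
# Weights from the table above; dimension scores are 0-100 floats.
WEIGHTS = {
    "code_quality": 0.20,
    "security_posture": 0.25,
    "architecture": 0.15,
    "performance": 0.15,
    "compliance": 0.15,
    "maintenance_risk": 0.10,
}

def hici_score(dims: dict[str, float]) -> float:
    """Weighted sum of the six 0-100 dimension scores."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights sum to 1.00
    return sum(WEIGHTS[k] * dims[k] for k in WEIGHTS)

# Hypothetical example: strong overall, weaker on maintenance risk.
example = {
    "code_quality": 90, "security_posture": 88, "architecture": 85,
    "performance": 92, "compliance": 80, "maintenance_risk": 70,
}
print(f"{hici_score(example):.2f}")  # → 85.55
```

Note that this is the raw weighted score only; the security cap described in the next section can still override it.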

3. The Security Cap

There’s one override rule: Security Posture (D₂) can cap the entire score.

Security Override Rules: when the Security Posture score falls below the threshold, the overall grade is capped regardless of the weighted sum (the concrete example is in the note under the grade scale).

Why? Because a codebase with 95/100 on everything except security is not a good codebase. It’s a well-architected, high-performance, compliant liability. Security is the one dimension where “good enough everywhere else” doesn’t compensate.

Consider a vendor whose code is beautifully structured, thoroughly tested, well-documented, and lightning fast — but ships with hardcoded API keys, three unpatched critical CVEs, and no input validation on user-facing endpoints. That software is a breach waiting to happen. The security cap ensures the HICI score reflects reality: no amount of architectural elegance compensates for an open front door.
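The full override table isn’t reproduced in this post. As a minimal sketch, assuming the single rule stated in the grade-scale note (a Security Posture score below 70 caps the result in the C+ band), the cap looks like this:

```python
def apply_security_cap(weighted_score: float, security_score: float) -> float:
    """Illustrative single-rule version of the override table:
    if Security Posture (D2) is below 70, cap the overall score at the
    top of the C+ band (69), regardless of the weighted sum."""
    if security_score < 70:
        return min(weighted_score, 69.0)
    return weighted_score

print(apply_security_cap(92.0, 64.0))  # → 69.0 (capped)
print(apply_security_cap(92.0, 85.0))  # → 92.0 (no cap)
```

The real rules may define several thresholds and bands; the shape of the logic is what matters here: the cap is applied after the weighted sum, not blended into it.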

4. What Each Dimension Measures

Each dimension runs as a deterministic evaluation circuit. Same code, same score, every time. The circuits are open-source and hash-pinned — you can verify that the circuit that produced a score is the same circuit published in the HICI repository.
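Verifying a hash pin needs nothing more than a SHA3-256 digest comparison. A sketch, where the pinned value is a placeholder rather than a digest from the actual HICI repository:

```python
import hashlib
from pathlib import Path

# Placeholder pin; the real value would come from the HICI repository.
PINNED = "0" * 64

def sha3_256_hex(data: bytes) -> str:
    """Hex SHA3-256 digest of raw bytes."""
    return hashlib.sha3_256(data).hexdigest()

def circuit_is_pinned(path: Path, pinned: str = PINNED) -> bool:
    """True iff the local circuit file matches the published pin."""
    return sha3_256_hex(path.read_bytes()) == pinned
```

If the local circuit differs from the published one by even a byte, the digests diverge and the check fails.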

D₁ — Code Quality (w = 0.20)

The foundation dimension. Measures the baseline health of the codebase through deterministic static-analysis metrics — same code, same score, every time.
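To make “deterministic evaluation circuit” concrete, here is a hypothetical D₁ aggregation. The sub-metric names and weights below are invented for illustration; the real ones are defined in the open-source circuit, not in this post.

```python
# Hypothetical sub-metrics and weights, for illustration only.
D1_SUB_WEIGHTS = {
    "lint_cleanliness": 0.4,
    "test_coverage": 0.3,
    "complexity": 0.3,
}

def d1_code_quality(metrics: dict[str, float]) -> float:
    """Deterministic aggregation of normalized (0-100) static-analysis
    sub-scores into the Code Quality dimension: a pure function of its
    inputs, so the same codebase always yields the same score."""
    return sum(D1_SUB_WEIGHTS[k] * metrics[k] for k in D1_SUB_WEIGHTS)
```

The property that matters is purity: no network calls, no randomness, no wall-clock dependence, so any verifier re-running the circuit reproduces the score exactly.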

D₂ — Security Posture (w = 0.25)

The heaviest dimension. Security failures cascade in ways that other problems don’t.

The encryption currency sub-metric is where H33’s expertise is unique. We know which algorithms survive quantum and which don’t — because we built the replacements.

D₃ — Architecture (w = 0.15)

Measures structural quality — the decisions that are expensive to change later.

D₄ — Performance (w = 0.15)

Measures efficiency characteristics that affect production behavior.

D₅ — Compliance (w = 0.15)

Measures regulatory readiness — critical for enterprise procurement decisions.

D₆ — Maintenance Risk (w = 0.10)

Measures long-term sustainability — the lowest weight because it measures future risk, not current state.

5. The Grade Scale

The weighted score maps to a letter grade that procurement teams can compare across vendors:

| Score | Grade | What It Means |
|---|---|---|
| 95–100 | A+ | Exceptional — best-in-class across all dimensions |
| 90–94 | A | Excellent — strong in every dimension, no significant gaps |
| 85–89 | A- | Strong — minor items noted, none blocking |
| 80–84 | B+ | Good — production-ready with noted items |
| 75–79 | B | Acceptable — meets requirements with room for improvement |
| 70–74 | B- | Adequate — meets minimums, improvement recommended |
| 65–69 | C+ | Below average — notable gaps in multiple dimensions |
| 60–64 | C | Marginal — significant concerns for enterprise deployment |
| 55–59 | C- | Poor — substantial remediation required |
| 50–54 | D | Failing — critical issues across multiple dimensions |
| 0–49 | F | Failed — not suitable for production deployment |
Note on the security cap: A codebase can score 92 on the weighted formula and still receive a C+ if its Security Posture dimension falls below 70. The cap overrides the formula. This is intentional — security is the one axis where “great everywhere else” is not sufficient.
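The band mapping in the table is a simple threshold lookup. A sketch (the security cap is applied to the score before this mapping, not inside it):

```python
# Grade bands from the table above, highest floor first.
BANDS = [
    (95, "A+"), (90, "A"), (85, "A-"), (80, "B+"), (75, "B"),
    (70, "B-"), (65, "C+"), (60, "C"), (55, "C-"), (50, "D"),
]

def grade(score: float) -> str:
    """Map a 0-100 HICI score to its letter grade."""
    for floor, letter in BANDS:
        if score >= floor:
            return letter
    return "F"

print(grade(92))  # → A
print(grade(67))  # → C+
```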
6. Why It’s Zero-Knowledge

The HICI CLI runs entirely on the vendor’s machine. The code never leaves. No cloud upload, no API call that transmits source code, no temporary storage on third-party infrastructure. The evaluation happens locally. Exactly three things are transmitted to the buyer:

  1. Merkle root — a single SHA3-256 hash committing to the exact repository state. Change one line of code and the hash changes. But the hash reveals nothing about the code itself. It’s a commitment, not a disclosure.
  2. STARK proof — proves that the evaluation circuit (which is open-source and hash-pinned) ran correctly on the committed codebase. The proof is valid if and only if the circuit produced the claimed scores from the claimed codebase. SHA3-256 hash-based, no trusted setup, post-quantum secure.
  3. H33-3-Key signature — the grade is signed with three independent signature families: Ed25519 (elliptic curve), Dilithium (lattice-based), and FALCON (NTRU-based). Breaking the signature requires breaking elliptic curves AND lattice cryptography AND NTRU simultaneously. This is the same nested hybrid signature scheme used in H33’s production authentication pipeline.

The buyer receives: a grade, a proof, and a signature. Not code. Not snippets. Not metadata about file names, directory structures, or implementation details. Math.
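The Merkle commitment can be sketched as a generic SHA3-256 binary tree over file contents in sorted path order. The real HICI leaf encoding and tree layout are not specified in this post, so treat those details as assumptions; the sketch only demonstrates the commitment property that changing one line changes the root.

```python
import hashlib
from pathlib import Path

def _h(data: bytes) -> bytes:
    return hashlib.sha3_256(data).digest()

def merkle_root(repo: Path) -> str:
    """SHA3-256 Merkle root over a repository's file contents."""
    leaves = [
        _h(p.read_bytes())
        for p in sorted(repo.rglob("*"))  # deterministic file order
        if p.is_file()
    ]
    if not leaves:
        return _h(b"").hex()
    level = leaves
    while len(level) > 1:
        if len(level) % 2:                # duplicate last node on odd levels
            level.append(level[-1])
        level = [_h(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0].hex()
```

The root is 32 bytes no matter how large the repository is, and it reveals nothing about file names or contents.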

What this means in practice

A vendor can prove their code scores 91/100 (A grade) without revealing a single line of source code. The buyer can verify the proof is valid — that the open-source evaluation circuit actually produced that score from the committed codebase — without any access to the code. The Merkle root pins the exact codebase version. The STARK proof pins the evaluation. The 3-Key signature pins the attestation. If any component is tampered with, the verification fails.
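The buyer-side check can be sketched structurally. The `verify_stark` and `verify_3key` callables below are placeholders, not real library APIs; only the all-or-nothing control flow is illustrated.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Attestation:
    merkle_root: str   # commits to the exact codebase state
    stark_proof: bytes # proves the pinned circuit ran on that commitment
    signature: bytes   # nested Ed25519 + Dilithium + FALCON signature
    score: float
    grade: str

def verify(att: Attestation,
           verify_stark: Callable[[bytes, str, float], bool],
           verify_3key: Callable[[bytes, tuple], bool]) -> bool:
    """Both checks must pass; tampering with any component fails."""
    return (
        verify_stark(att.stark_proof, att.merkle_root, att.score)
        and verify_3key(att.signature, (att.merkle_root, att.score, att.grade))
    )
```

Note what the buyer never handles in this flow: source code, file names, or any repository metadata beyond the 32-byte commitment.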

This is why HICI changes procurement. The old model forced a binary choice: either the vendor exposes their IP or the buyer skips due diligence. HICI creates a third option where both sides get what they need. The vendor keeps their code private. The buyer gets a cryptographically verified quality assessment. The math eliminates the trust problem.

7. Why We Open-Sourced It

A scoring standard controlled by a single company isn’t a standard — it’s a product feature. We want HICI to be the S&P rating for software. That requires transparency.

The formula is published. The evaluation circuits are open-source. The CLI is Apache 2.0. Anyone can audit the scoring logic, fork the circuits, or build on the standard.

What H33 provides on top of the open standard: hosted proof pages, STARK attestation infrastructure, 3-Key signing, and the ZK-Procure procurement platform. The standard is free. The infrastructure is a product.

The incentive structure

Open-sourcing the formula aligns incentives. Vendors trust the evaluation because they can read the code that evaluates them. Buyers trust the grade because the circuit is auditable and the proof is verifiable. H33 benefits because adoption of the standard drives demand for the attestation infrastructure. Everyone wins when the methodology is transparent.

8. The Relationship to HATS

HATS (H33 AI Trust Standard) and HICI serve different but complementary purposes. HATS is a publicly available technical conformance standard for continuous AI trustworthiness; certification under HATS provides independently verifiable evidence that a system satisfies the standard’s defined controls.

HICI evaluates whether the code itself is well-built.

They’re complementary. A system can be HATS-certified (controls are operating correctly) with a poor HICI score (the underlying code has technical debt). Or it can have a perfect HICI score (clean code) but fail HATS certification (controls aren’t properly instrumented).

Enterprise procurement teams should look at both. HICI tells you whether the software is well-engineered. HATS tells you whether it’s well-operated. Together, they give you the complete picture: code quality and operational trustworthiness, both cryptographically verified.

9. Get Your HICI Score

Run a HICI assessment through ZK-Procure. Your code stays on your machine. The evaluation circuit runs locally. The proof is generated locally. The only thing transmitted is the grade, the proof, and the signature.

The math speaks for itself.

