Most automated security tools start by scanning for known vulnerability patterns. They look for reentrancy, unchecked returns, integer overflows — the standard checklist. This catches real issues, but it misses a category of bugs that only make sense in the context of what the system is supposed to do.
A function that sends ETH before updating state is a classic reentrancy risk. But whether it’s actually exploitable depends on the broader system: Is there a reentrancy guard? Does the calling contract have a fallback? What invariants does the protocol assume about balance consistency? Without architecture context, a scanner reports the pattern. With context, an auditor can assess the actual risk.
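The context-dependence is easy to see in a toy model. This is a hypothetical sketch (plain Python, not Guardix code or real Solidity): a vault that makes the external call before zeroing the caller's balance, with a flag standing in for a reentrancy guard. Whether the pattern is exploitable depends entirely on that surrounding context.

```python
# Toy model: send-before-update is the pattern; the guard is the context
# that decides whether it is actually exploitable.

class Vault:
    def __init__(self, guarded: bool):
        self.balances = {"attacker": 1}
        self.pool = 10           # funds held on behalf of all users
        self.guarded = guarded   # models a reentrancy guard
        self._locked = False

    def withdraw(self, caller, on_receive):
        if self.guarded:
            if self._locked:
                raise RuntimeError("reentrant call blocked")
            self._locked = True
        amount = self.balances[caller]
        if amount and self.pool >= amount:
            self.pool -= amount
            on_receive()                # external call (fallback) ...
            self.balances[caller] = 0   # ... BEFORE the state update
        self._locked = False

def attack(vault, depth=3):
    """Re-enter withdraw() from the fallback up to `depth` extra times."""
    calls = {"n": 0}
    def fallback():
        if calls["n"] < depth:
            calls["n"] += 1
            try:
                vault.withdraw("attacker", fallback)
            except RuntimeError:
                pass  # guard stopped the reentrant call
    vault.withdraw("attacker", fallback)
    return 10 - vault.pool  # amount drained from the pool
```

With `guarded=False` the attacker's single unit of balance drains four units from the pool (the initial call plus three reentries); with `guarded=True` exactly one unit leaves. Same pattern in the code, different risk in the system.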
What architecture context means in practice
Before Guardix runs vulnerability detection, it builds a system map. This includes contract relationships, inheritance hierarchies, call graphs, storage layouts, and external dependencies. From this map, the pipeline extracts three categories of artifacts:
- Invariants — properties the system must maintain (e.g., total supply equals sum of balances)
- Assumptions — conditions the system relies on but doesn’t enforce (e.g., oracle prices are fresh within 1 hour)
- Decisions — design choices with trade-offs (e.g., using a timelock for admin rotation vs immediate execution)
These artifacts are first-class outputs of the audit, not just internal pipeline state. They appear in the dashboard alongside findings, giving reviewers the full picture.
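As a rough illustration of what "first-class outputs" could mean, here is a sketch of the three artifact categories as explicit records attached to a system map. The types and field names are assumptions for illustration, not Guardix's actual schema.

```python
# Assumed (illustrative) schema: invariants, assumptions, and decisions
# as structured records hanging off the system map, not internal state.
from dataclasses import dataclass, field

@dataclass
class Invariant:
    description: str        # e.g. "totalSupply == sum(balances)"
    contracts: list[str]    # components the property spans

@dataclass
class Assumption:
    description: str        # relied on but not enforced on-chain
    enforced: bool = False

@dataclass
class Decision:
    choice: str             # e.g. "48h timelock for admin rotation"
    trade_off: str          # the cost accepted for that choice

@dataclass
class SystemMap:
    call_graph: dict[str, list[str]] = field(default_factory=dict)
    invariants: list[Invariant] = field(default_factory=list)
    assumptions: list[Assumption] = field(default_factory=list)
    decisions: list[Decision] = field(default_factory=list)
```

Because the artifacts are plain data rather than transient analysis state, they can be rendered in a dashboard, diffed between audits, and referenced by individual findings.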
```
// Invariant: total supply must equal sum of all balances
assert totalSupply == sum(balances[addr] for addr in holders)

// Assumption: oracle price feed is fresh within staleness window
require(block.timestamp - oracle.lastUpdate <= STALENESS_THRESHOLD)

// Decision: admin rotation uses 48h timelock
// Trade-off: slower response to compromised admin, but prevents
// instant hostile takeover
```
How this changes findings
When vulnerability detection runs with architecture context, findings come with richer explanations. Instead of "unchecked external call" (which is a pattern), the finding can say "external call before balance update in withdraw(), which violates the balance-consistency invariant and enables reentrancy through the vault’s callback handler."
That’s a finding an engineer can act on immediately. They know the invariant, the attack path, and the system component involved. No re-reading the codebase to build context that the tool already had.
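One plausible way to picture this enrichment step (the function and field names here are hypothetical, not Guardix's API): take a raw pattern match and join it against the invariants and call graph from the system map.

```python
# Hypothetical sketch: attaching architecture context to a pattern match.

def enrich_finding(pattern, context):
    """Join a raw pattern match with the invariants it violates
    and the callback paths reachable from the affected function."""
    violated = [inv for inv in context["invariants"]
                if pattern["function"] in inv["functions"]]
    paths = context["call_graph"].get(pattern["function"], [])
    return {
        **pattern,
        "violated_invariants": [inv["description"] for inv in violated],
        "attack_path": paths,
    }

raw = {"rule": "unchecked-external-call", "function": "withdraw"}
ctx = {
    "invariants": [{"description": "balance consistency",
                    "functions": ["withdraw", "deposit"]}],
    "call_graph": {"withdraw": ["Vault.onReceive"]},
}
finding = enrich_finding(raw, ctx)
# finding now names the violated invariant and the callback path,
# not just the matched pattern
```

The raw match alone says "unchecked external call in withdraw()"; the enriched record carries the violated invariant and the reachable callback handler, which is what makes the finding actionable without re-deriving the context by hand.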
Architecture artifacts are versioned alongside findings. When you re-audit after fixes, the system map updates and you can see how the architecture evolved.
The broader point
Security analysis that starts from patterns alone is useful but limited. The most impactful vulnerabilities are often logic errors — violations of assumptions the developers had but never wrote down. By extracting those assumptions explicitly, the audit process surfaces risks that pure pattern matching would miss.