Architecture

Design Principles

SafeJunction is developed by the team behind Hashi and panDA and follows the same core principles: prioritizing security and robustness while aiming for permissionless, trustless systems that can adapt over time.
 
  1. Security in Depth
    • What: Avoiding reliance on any single source of truth.
      • This principle is crucial for creating robust systems that are resilient to attacks and failures. By not depending on a single source of truth, the system mitigates the risks associated with any single point of failure.
    • Why: Risks in complex systems are hard to assess, especially when they evolve rapidly.
      • Rapid evolution in technology, especially in blockchain, can introduce unforeseen vulnerabilities. By using multiple sources for verification, the system reduces the likelihood of systemic risks and increases transparency in trust assumptions.
    • How: Implementing additive security by cross-checking results with multiple sources.
      • Redundant verification through multiple proofs (e.g., zk-SNARKs, zk-STARKs, TEE attestations) and cross-checking of results ensure the integrity and security of computations. This approach leverages the strengths of different proof systems to provide a higher level of security.

  2. Future-Proofness
    • What: Designing long-lasting infrastructure components.
      • This is essential for ensuring the system remains relevant and secure over time, even as new threats and technologies emerge.
    • Why: Tech obsolescence is dangerous.
      • The immutability of blockchain contracts means that outdated technology or vulnerabilities can persist, leading to potential exploits. Designing for future-proofing helps mitigate these risks.
    • How: Designing a widely extensible and modular system.
      • Modularity and extensibility allow the system to integrate new technologies and replace outdated components without significant disruption. This approach ensures the system can evolve and adapt to new security challenges and technological advancements.

  3. No Vendor Lock-ins
    • What: Avoiding reliance on a single vendor implementation.
      • Vendor lock-in creates significant risks if a single vendor's solution fails or becomes compromised. By maintaining flexibility in vendor choices, the system can switch to alternatives as needed.
    • Why: Dependency on a single vendor can introduce significant risks.
      • If a critical component from a single vendor fails, the entire system can be compromised. Ensuring the ability to switch vendors reduces this risk.
    • How: Heavy abstraction to avoid accommodating specific protocol needs.
      • Abstracting the architecture to find common denominators across protocols keeps the system flexible and adaptable. This approach minimizes dependencies on specific implementations and facilitates easier integration of new solutions.
       

Key components

The following components are what we need to build in order to achieve a system adhering to the core principles explained above.
 
  • tRUST SDK - “Rust code you can trust”
    • Write/audit code once and execute it anywhere
    • Provides an abstraction layer for off-chain computation systems (i.e. zkVMs, teeVMs)
    • Supports multiple proving backends
    • Allows seamless switching between execution environments (see the sketch after this list)
  • SafeJunction Engine & On-chain interface
    • A unified interface to submit, index, and query verified statements
    • Integrates with proof-aggregation systems to ensure efficient on-chain verification
  • Off-chain Fact Explorer
    • Simplifies access and discovery of information
    • Provides visual navigation of the SafeJunction state tree and included facts (verified/proven statements)
    • Facilitates easy discovery and search of fact sets
    • Includes an API for querying and submitting data for integrations
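As a rough illustration of the abstraction layer described above, the sketch below (Rust, with hypothetical names; not the actual tRUST SDK API) shows how a single guest binary could be proven on interchangeable backends and cross-checked so that all of them back the same output:

    // Hypothetical sketch of a backend-agnostic proving interface (not the actual tRUST SDK API).
    // Each zkVM / teeVM backend implements the same trait, so guest code is written and audited
    // once and the execution environment can be swapped freely.

    pub struct Proof {
        pub backend_id: String, // e.g. "sp1", "risc0", "tee"
        pub output: Vec<u8>,    // public output the proof commits to
        pub bytes: Vec<u8>,     // opaque proof material
    }

    pub trait ProvingBackend {
        fn id(&self) -> &str;
        /// Execute the compiled guest binary on `input` and return a proof over its output.
        fn prove(&self, guest_binary: &[u8], input: &[u8]) -> Result<Proof, String>;
        /// Verify a proof produced by this backend.
        fn verify(&self, guest_binary_hash: &[u8], proof: &Proof) -> bool;
    }

    /// Run the same deterministic guest on every configured backend (security in depth):
    /// the result is accepted only if all backends agree on the output.
    pub fn prove_redundantly(
        backends: &[Box<dyn ProvingBackend>],
        guest_binary: &[u8],
        input: &[u8],
    ) -> Result<(Vec<u8>, Vec<Proof>), String> {
        let mut proofs = Vec::new();
        for backend in backends {
            proofs.push(backend.prove(guest_binary, input)?);
        }
        let output = proofs
            .first()
            .ok_or("no backends configured")?
            .output
            .clone();
        if proofs.iter().any(|p| p.output != output) {
            return Err("backends disagree on the output".to_string());
        }
        Ok((output, proofs))
    }

In this sketch, switching execution environments amounts to swapping the set of ProvingBackend implementations passed in, without touching the guest code itself.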
[image]
 

Synergies and Placement with other building blocks

The proving ecosystem is rapidly evolving. The following image shows how SafeJunction provides features that are not covered by existing players while still interoperating nicely with them.
[image]

Execution Flow

 
End-to-end flow showing what an interaction with the system looks like
 

Security implications

To implement all the core principles discussed, the SafeJunction engine is introduced as a crucial new component that handles the necessary orchestration logic.
As shown in the analysis below, the orchestration risk is mitigated by the redundant use of multiple proving systems to back its execution. This redundancy not only offsets the orchestration risk but also enhances the overall security of the system.
In addition to reducing risks, the system embodies our core principles: security in depth, future-proofness, and no vendor lock-ins, ultimately benefiting the entire ecosystem.
[image]
Assumptions
  • All proving system risks have been equally quantified to an arbitrary constant (1)
  • The orchestration risk overhead has been quantified as 0.75, since its complexity is not negligible but lower than that of the tech foundation used by most proving systems
 
 

Diving deeper into the Technical Architecture

The MasterTree

The growing dataset (similar to this) of verified off-chain facts is stored in a tree data structure consisting of inner and outer proofs.
  • Inner: An off-chain verified proof backing a given included fact
    • With wide proof type support
  • Outer: An on-chain verifiable proof backing the tree correctness
    • Can be constructed using any proving backend, including SP1, RiscZero, and TEEs
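To make the inner/outer split concrete, here is a minimal sketch of how such entries could be modeled (illustrative Rust with hypothetical field names, not the actual implementation):

    // Illustrative data model for the MasterTree (hypothetical field names).

    /// Inner proof: an off-chain verified proof backing a single included fact.
    pub struct InnerProof {
        pub proof_type: String, // e.g. "zk-SNARK", "zk-STARK", "TEE attestation"
        pub bytes: Vec<u8>,
    }

    /// A fact as stored in the tree, addressable by key := {fact_commitment, fact_inputs}.
    pub struct FactLeaf {
        pub fact_commitment: [u8; 32],
        pub fact_inputs: Vec<u8>,
        pub fact_output: Vec<u8>,
        pub verified_proof_types: Vec<String>,
    }

    /// Outer proof: an on-chain verifiable proof backing the correctness of the whole tree,
    /// producible with any supported backend (SP1, RiscZero, TEEs, ...).
    pub struct OuterProof {
        pub backend: String,
        pub previous_mt_commitment: [u8; 32],
        pub mt_commitment: [u8; 32],
        pub bytes: Vec<u8>,
    }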
 

The iteration pipeline - executed every x blocks/minutes to produce the outer proofs

For each supported backend, the engine runs iterations separately. Since execution is deterministic, the proofs returned by the different backends all back the exact same output.
Each iteration receives as inputs {previous_MT, new_facts_material} and works as follows:
  1. a new MT gets created starting from the previous_MT
    • note that previous_MT correctness could either be verified here or, as a better alternative, its reference could be included within the output
  2. each fact_material (:= {fact_commitment, fact_output, fact_inputs, proofs}) within new_facts_material gets:
    1. verified for correctness against each supporting proof
    2. included within the MT (updated in place if pre-existing, or added if not) at a specific location so as to enable efficient checking given key := {fact_commitment, fact_inputs}. The expected value must contain {fact_output, verified_proofs_type, ...}
  3. output := {previous_MT_commitment, MT_commitment}
  4. the output and its correctness proof get returned
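A condensed sketch of a single iteration for one backend, assuming the tree exposes simple upsert/commitment primitives and reusing the InnerProof type sketched in the MasterTree section (all names hypothetical; the proving backend would wrap this whole function to produce the outer proof):

    // Hypothetical sketch of one iteration for a single backend (not the actual engine code).
    // MasterTreeOps stands in for the real tree API; InnerProof is the type sketched above.

    pub trait MasterTreeOps: Clone {
        /// Insert the value at `key`, or update it in place if the key already exists.
        fn upsert(&mut self, key: &[u8], value: &[u8]);
        /// Commitment (root) of the current tree state.
        fn commitment(&self) -> [u8; 32];
    }

    pub struct FactMaterial {
        pub fact_commitment: [u8; 32],
        pub fact_inputs: Vec<u8>,
        pub fact_output: Vec<u8>,
        pub proofs: Vec<InnerProof>,
    }

    pub struct IterationOutput {
        pub previous_mt_commitment: [u8; 32],
        pub mt_commitment: [u8; 32],
    }

    pub fn run_iteration<T: MasterTreeOps>(
        previous_mt: &T,
        new_facts_material: &[FactMaterial],
        verify_proof: impl Fn(&InnerProof, &FactMaterial) -> bool,
    ) -> Result<IterationOutput, String> {
        // 1. the new MT starts from the previous one; its commitment is referenced in the output
        let mut mt = previous_mt.clone();

        for fact in new_facts_material {
            // 2a. verify the fact for correctness against each supporting proof
            let mut verified_types = Vec::new();
            for proof in &fact.proofs {
                if verify_proof(proof, fact) {
                    verified_types.push(proof.proof_type.clone());
                }
            }
            if verified_types.is_empty() {
                return Err("fact has no valid supporting proof".to_string());
            }
            // 2b. include it at the location derived from key := {fact_commitment, fact_inputs}
            let key = [fact.fact_commitment.as_slice(), fact.fact_inputs.as_slice()].concat();
            let value = [fact.fact_output.as_slice(), verified_types.join(",").as_bytes()].concat();
            mt.upsert(&key, &value);
        }

        // 3. + 4. the output commits to both trees; the proving backend then produces a
        // correctness proof over this whole execution
        Ok(IterationOutput {
            previous_mt_commitment: previous_mt.commitment(),
            mt_commitment: mt.commitment(),
        })
    }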
 
[image]
 
How to access data efficiently and programmatically
 
[image]
 
  1. We use a Namespaced Merkle Tree: given a certain namespace (NS), you can derive a certain branch. All the leaves within this branch refer to facts (statements based on proofs of certain types) that are part of the same namespace.
  2. Users define a view for a given namespace NS by defining map and reduce functions in tRUST. Within the iteration pipeline, views for each namespace get processed as the very last step and stored at a given location (i.e. view_namespace*: NS+hash(view)).
      • map/reduce := the map function gets executed on each leaf of the branch so that you can extract the statement+proofs of interest within the selected namespace; the reduce function produces a succinct output based on the map outputs. For instance:

      // example meta code for a map/reduce view to infer redundant oracle agreements (Hashi)
      function map(fact) {
        fact_type, statement, proof = fact
        normalized_statement = null
        if (fact_type == "DendrETH_step") normalized_statement = normalize_1(statement.relevant_path_1)
        else if (fact_type == "Axiom") normalized_statement = normalize_2(statement.relevant_path_2)
        else if (fact_type == "Superproof") normalized_statement = normalize_3(statement.relevant_path_3)
        else return null
        return {type: fact_type, statement: normalized_statement}
      }

      function reduce(facts) {
        // counts distinct fact types as long as every fact carries a matching statement value
        value = facts[0].statement
        matching_count = 1
        processed_fact_types = [facts[0].type]
        for (i = 1; i < facts.length; i++) {
          if (facts[i].statement != value) return 0 // at least one fact mismatches, returning
          if (facts[i].type not in processed_fact_types) { // matching value + new type
            matching_count += 1
            processed_fact_types += facts[i].type
          }
        }
        return matching_count
      }

  3. Facts can now be verified either individually or, more efficiently, by reading* the view output only (likely in ~constant time, given the NMT and the branch at * having just one leaf)
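As a rough illustration of that constant-time path, the sketch below (hypothetical names, consistent with the Rust sketches above; not the actual API) shows a consumer checking a view output against the latest MT commitment with a single inclusion proof:

    // Hypothetical sketch: reading a view output with one inclusion proof (not the actual API).

    pub struct InclusionProof {
        pub siblings: Vec<[u8; 32]>, // Merkle path for the single view leaf
    }

    /// Returns the view output if `leaf_value` is proven to sit at key NS + hash(view)
    /// under `mt_commitment`; `verify_inclusion` stands in for the NMT verification routine.
    pub fn read_view_output(
        mt_commitment: &[u8; 32],
        namespace: &[u8],
        view_hash: &[u8; 32],
        leaf_value: &[u8],
        proof: &InclusionProof,
        verify_inclusion: impl Fn(&[u8; 32], &[u8], &[u8], &InclusionProof) -> bool,
    ) -> Option<Vec<u8>> {
        // view key := NS + hash(view), as described above
        let key = [namespace, view_hash.as_slice()].concat();
        if verify_inclusion(mt_commitment, &key, leaf_value, proof) {
            Some(leaf_value.to_vec())
        } else {
            None
        }
    }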
 

Integrate SafeJunction into a contract

 
  1. Write your off-chain code once and execute it using any proof system

      fn sum(a: i32, b: i32) -> i32 {
          // let's just sum two numbers off-chain
          a + b
      }
      tRUST code
 
  2. Create the manifest and store its reference on-chain
    1. interface definitions
      • binhash_N := hash of the binary returned by tRUST_compile(tRUST_code_X, tRUST_target_Y)
      • binhash is the binary hash and needs to be included within the manifest
      • manifest := { author: ___, version: ____, binhashes: {target_1: binhash_1, ___}}
      • codehash := hash(manifest)

      {
        "version": 1,
        "binhashes": [
          { "target": "sp1_0.3", "binhash": "BINHASH_1" }
        ]
      }
      manifest.json
 
  3. Call the off-chain code by providing the necessary parameters from the on-chain contract
    1. interface definitions
      uuid = sj_agent_execute/sj_query(codehash, input_args)
      • uuid commits to codehash / input_args

      sj_agent_read/sj_read(uuid)
      • read() accesses the right NMT field on a verified iteration and returns its value

      function offchainSumQuery(uint a, uint b) public returns (bytes32) {
          bytes32 uuid = sj_query(CODEHASH_1, [a, b]);
          return uuid;
      }

      function checkResult(bytes32 uuid) public returns (uint) {
          try {
              return sj_read(uuid);
          } catch (..) {
              emit Log("Not available yet");
          }
      }
      Solidity code