edgesentry-audit

“Trust and verification for edge infrastructure.”

Why

In recent years, labor shortages have become a serious challenge in infrastructure operations. Labor-intensive industries such as construction are increasingly adopting IoT devices for remote inspections.

At the same time, if device spoofing, device takeover, or inspection data tampering occurs, trust in the entire system is fundamentally undermined. This makes continuous verification of both device authenticity and data integrity essential.

Vision and Principles

EdgeSentry-Audit is an early-stage learning project — we are building this to deepen our understanding of IoT security techniques hands-on. The license is commercially compatible (MIT/Apache 2.0), but the implementation is just getting started and is not yet production-ready. Following the governance model of successful “in-process” systems like DuckDB, we keep the core intellectual property open and vendor-neutral, so it can grow into a public good over time.

Our goal is to serve as the Common Trust Layer for vendors in public infrastructure, maritime (MPA), and smart buildings (BCA), helping them meet the highest regulatory standards — including Singapore’s CLS Level 3/4, iM8, and Japan’s Unified Government Standards.

We believe the infrastructure of trust should not be owned by a single private entity:

  • Open for All: A vendor-agnostic reference implementation that lowers the barrier for companies to achieve regulatory compliance.
  • Cross-Industry Learning: Engineers collaborate across corporate boundaries to master the complexities of global IoT security standards.
  • Sustainable Growth: The core remains a community-driven reference implementation; commercial services (advanced analytics, automated compliance reporting) are built on top of this stable foundation.

See the Roadmap for the phased compliance plan.

Initial Scope

For public-infrastructure IoT deployments, Singapore’s Cybersecurity Labelling Scheme (CLS) Level 3 and Level 4 introduce hardware-level security requirements. EdgeSentry-Audit is designed to support these requirements through hardware extensions — hardware security itself is implemented on the hardware side, with this library providing the software integration layer. The initial scope covers tamper prevention and tamper-evident audit records, with hardware-level extension points built in from the start.

How

Modeled after the “Simple, Portable, Fast” philosophy, EdgeSentry-Audit implements three pillars of trust in Rust, designed for high-performance embedding:

  1. Identity — Ed25519 digital signatures to guarantee the authenticity of both devices and data. Built with C/C++ FFI at its heart, allowing legacy industrial systems and robotics platforms to adopt secure identity without a full rewrite.

  2. Integrity — BLAKE3 hash chains to ensure data immutability. Provides a verifiable cryptographic record that can be validated locally or in the cloud, ensuring forensic readiness even in offline scenarios.

  3. Resilience — Store-and-forward offline buffering (OfflineBuffer with InMemoryBufferStore and SQLite via the buffer-sqlite feature) is delivered in Phase 1, satisfying CLS-09. Intelligent data summarization for narrow-bandwidth environments (Phase 2, planned) will add priority queuing for limited links. See Roadmap.

edgesentry-audit is the crate name. The Rust library is imported as edgesentry_audit (underscores). It includes all audit record types, hashing, signature verification, chain verification, ingestion-time verification, deduplication, sequence validation, persistence workflow, and the CLI.
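The hash-chain structure behind the Integrity pillar can be sketched in std-only Rust. This is an illustration of the chain shape, not the crate's API: digest below uses DefaultHasher as a non-cryptographic stand-in for BLAKE3, and the Ed25519 signing step is omitted.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Non-cryptographic stand-in for BLAKE3, used only to keep this
// sketch dependency-free. The real library hashes with BLAKE3.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Minimal audit record: each record hashes its payload and links to
// the hash of the previous record, forming a tamper-evident chain.
struct Record {
    payload: Vec<u8>,
    payload_hash: u64,
    prev_record_hash: u64,
}

impl Record {
    // Chain hash covers both the payload hash and the back-link.
    fn hash(&self) -> u64 {
        let mut bytes = self.payload_hash.to_le_bytes().to_vec();
        bytes.extend_from_slice(&self.prev_record_hash.to_le_bytes());
        digest(&bytes)
    }
}

fn append(chain: &mut Vec<Record>, payload: &[u8]) {
    let prev = chain.last().map(|r| r.hash()).unwrap_or(0);
    chain.push(Record {
        payload: payload.to_vec(),
        payload_hash: digest(payload),
        prev_record_hash: prev,
    });
}

// Walk the chain, recomputing every payload hash and back-link.
// Any insertion, deletion, or edit breaks at least one link.
fn verify_chain(chain: &[Record]) -> bool {
    let mut prev = 0u64;
    for r in chain {
        if digest(&r.payload) != r.payload_hash || r.prev_record_hash != prev {
            return false;
        }
        prev = r.hash();
    }
    true
}

fn main() {
    let mut chain = Vec::new();
    append(&mut chain, b"lift inspection: pass");
    append(&mut chain, b"door sensor: nominal");
    assert!(verify_chain(&chain));

    // Tamper with the first payload: verification now fails.
    chain[0].payload = b"lift inspection: FAIL".to_vec();
    assert!(!verify_chain(&chain));
}
```

The same walk works locally or in the cloud, which is what makes offline forensic verification possible.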

License

This project is licensed under either of:

  • Apache License, Version 2.0
  • MIT License

at your option.

Roadmap

EdgeSentry-RS follows a phased approach: first establish the Singapore compliance baseline (CLS Level 2 → Level 3, SS 711:2025), then expand to Japan via GCLI mutual recognition (JC-STAR, Cyber Trust Mark), then achieve global convergence across EU, UK, and critical infrastructure markets. This mirrors the DuckDB model — build an embeddable OSS core that becomes a de facto standard through ecosystem adoption rather than lock-in.

Why Singapore First

Singapore’s CLS is directly derived from the European ETSI EN 303 645 standard. Japan’s JC-STAR similarly references ETSI EN 303 645 as its technical basis. This means the three regulatory regimes share a common foundation:

| Standard | Region | Based on |
| --- | --- | --- |
| ETSI EN 303 645 | Europe (CRA) | Original |
| CLS Level 2/3/4 | Singapore | ETSI EN 303 645 |
| JC-STAR | Japan | ETSI EN 303 645 |

By implementing Singapore CLS compliance first, the majority of the technical work directly satisfies Japan’s JC-STAR and Europe’s CRA requirements. The Singapore gateway is not just a regional target — it is the fastest path to global compliance coverage.

GCLI and the Direct Japan-Singapore MoC

Japan signed the Global Cyber Labelling Initiative (GCLI) in 2025, joining 10 other countries including Singapore, UK, Finland, Germany, and Korea. GCLI establishes mutual recognition between national IoT security labels — a product certified under Singapore CLS is recognised as compliant with Japan’s JC-STAR without re-certification. This is the structural mechanism that makes the “Singapore first” strategy work as a Japan entry path.

In March 2026, Japan and Singapore reinforced this with a direct bilateral Memorandum of Cooperation (MoC) between METI/IPA (Japan) and CSA (Singapore), establishing direct mutual recognition of JC-STAR and CLS labels. The MoC takes effect on 1 June 2026. Under this arrangement a valid, current JC-STAR label is accepted as-is under CLS — no re-derivation of CLS compliance from JC-STAR data is required. Japan is the fifth country to achieve bilateral mutual recognition with Singapore CLS (after Finland, Germany, South Korea, and the UK).

Open question: The official level equivalence table mapping JC-STAR levels (STAR-1 through STAR-4) to CLS star levels (1–4) has not yet been published by CSA/METI. Monitor the CSA CLS page and METI/IPA JC-STAR page for this detail — it determines which JC-STAR level satisfies a given CLS target level.

Additional bilateral MRAs exist between Singapore CLS and Finland, Germany, and Korea. For Japanese customers already holding German or Korean IoT certification, these MRAs provide a fast-track CLS path.

SS 711:2025 Design Principles

Singapore’s national IoT standard SS 711:2025 (which replaces TR 64:2018 and underpins CLS Level 3 assessments) defines four security design principles. EdgeSentry-RS is designed around these:

| Principle | Requirement | Implementation |
| --- | --- | --- |
| Secure by Default | Unique device identity, signed OTA | identity.rs (Ed25519), update.rs (signed update verification) |
| Rigour in Defence | STRIDE threat modelling, tamper detection | integrity.rs (BLAKE3 hash chain), STRIDE threat model artifacts |
| Accountability | Audit trail, operation logs | ingest/ (AuditLedger, OperationLog, IntegrityPolicyGate) |
| Resiliency | Deny-by-default networking, rate limiting | ingest/network_policy.rs (IP/CIDR allowlist) |

Implementation Mapping

For the detailed clause-by-clause mapping of CLS / ETSI EN 303 645 / JC-STAR requirements to source code, see the Compliance Traceability Matrix.


OSS scope

This repository implements the OSS audit layer: Ed25519 signing, BLAKE3 hash chain, ISO 19650 schema, and the eds verification CLI. All milestones in this document are open-source.

Commercial connectors (immugate WORM storage, CLS/JC-STAR compliance module, HSM key storage) are tracked in the commercial compliance layer.


Phase 1: The Singapore Gateway (Current – 6 Months)

Target: CLS Level 2 → Level 3, SS 711:2025, iM8

Deliver a software reference implementation that satisfies Singapore CLS Level 2 cyber hygiene requirements and advances to Level 3 with the SDL evidence artifacts (threat model, SBOM, binary analysis) that IMDA assessors require.

Milestone 1.1: Identity & Integrity Core ✅ Implemented

  • edgesentry_rs::identity — Ed25519 device signature implementation
  • edgesentry_rs::integrity — BLAKE3 hash chain tamper-detection protocol
  • edgesentry_rs::ingest::NetworkPolicy — deny-by-default IP/CIDR allowlist (CLS-06)

Milestone 1.2: The C/C++ Bridge ✅ Implemented

  • edgesentry-bridge — C-compatible FFI layer exposing Ed25519 signing, signature verification, and hash-chain validation to C/C++ projects
  • Goal: inject Singapore-grade security into existing Japanese hardware (gateways, sensors) with minimal modification
  • See C/C++ FFI Bridge for usage, linking instructions, and memory safety conventions

Milestone 1.3: Compliance Mapping v1.0 ✅ Implemented

Milestone 1.4: SBOM + Vendor Disclosure Checklist ✅ Implemented

IMDA’s IoT Cyber Security Guide requires a vendor disclosure checklist as CLS Level 3 assessment evidence. The five mandatory categories are: encryption support, identification and authentication, data protection, network protection, and lifecycle support (SBOM).

  • CycloneDX JSON SBOM generated for all crates and published with each GitHub Release
  • Vendor disclosure checklist responses documented for all five categories
  • Responses mapped to implementation in the traceability matrix
  • See SBOM and Vendor Disclosure and #92

Milestone 1.5: Transport Layer, Async Ingest & Offline Buffer ✅ Implemented

  • async-ingest feature: AsyncIngestService<R,L,O> with &self signature for safe multi-task sharing via Arc — closed #115
  • transport-http feature: axum-based POST /api/v1/ingest endpoint; source IP gated through NetworkPolicy before crypto verification; eds serve CLI — closed #116
  • transport-tls feature: serve_tls() with rustls TLS 1.2/1.3; eds serve-tls --tls-cert / --tls-key CLI; satisfies CLS-05 HTTP channel confidentiality — closed #176
  • transport-mqtt-tls feature: MqttTlsConfig with CA cert path, rustls-backed MQTTS via rumqttc; eds serve-mqtt --tls-ca-cert CLI; satisfies CLS-05 MQTT channel confidentiality — closed #180
  • transport-mqtt feature: serve_mqtt() subscribes to a configurable topic, routes records through AsyncIngestService, publishes accept/reject to <topic>/response; eds serve-mqtt CLI — closed #146
  • buffer module: OfflineBuffer<S> store-and-forward with pluggable BufferStore trait; InMemoryBufferStore default; SqliteBufferStore behind buffer-sqlite feature; satisfies CLS-09 resilience — closed #74
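The store-and-forward shape of the buffer module can be sketched as follows. The names (BufferStore, OfflineBuffer, InMemoryBufferStore) mirror the docs, but the method signatures here are illustrative assumptions, not the crate's actual API.

```rust
use std::collections::VecDeque;

// Pluggable store trait with a volatile in-memory implementation;
// a durable store (e.g. SQLite-backed) would implement the same trait.
trait BufferStore {
    fn push(&mut self, record: String);
    fn drain_all(&mut self) -> Vec<String>;
}

struct InMemoryBufferStore {
    queue: VecDeque<String>,
}

impl BufferStore for InMemoryBufferStore {
    fn push(&mut self, record: String) {
        self.queue.push_back(record);
    }
    fn drain_all(&mut self) -> Vec<String> {
        self.queue.drain(..).collect()
    }
}

struct OfflineBuffer<S: BufferStore> {
    store: S,
}

impl<S: BufferStore> OfflineBuffer<S> {
    // Buffer a signed record while the uplink is down.
    fn enqueue(&mut self, record: String) {
        self.store.push(record);
    }

    // On reconnect, replay buffered records in insertion order.
    // A "duplicate" response from the receiver is treated as
    // already-accepted, so a replay after a partial flush is harmless.
    fn flush<F: FnMut(&str) -> Result<(), String>>(&mut self, mut send: F) -> usize {
        let mut delivered = 0;
        for record in self.store.drain_all() {
            match send(&record) {
                Ok(()) => delivered += 1,
                Err(e) if e == "duplicate" => delivered += 1,
                Err(_) => break, // real code would re-buffer the rest
            }
        }
        delivered
    }
}

fn main() {
    let store = InMemoryBufferStore { queue: VecDeque::new() };
    let mut buffer = OfflineBuffer { store };
    buffer.enqueue("record-0".to_string());
    buffer.enqueue("record-1".to_string());

    // Link restored: both records are delivered in order.
    let delivered = buffer.flush(|_record| Ok(()));
    assert_eq!(delivered, 2);
}
```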

Milestone 1.6: STRIDE Threat Model + Binary Analysis Evidence ✅ Implemented

CLS Level 3 assessors expect recorded design artifacts, not just code. SS 711:2025 requires STRIDE-based threat modelling of all attack surfaces (API, communication, storage).

  • STRIDE threat model covering: Spoofing (device identity), Tampering (audit records), Repudiation (operation logs), Information Disclosure (payload storage), Denial of Service (network policy), Elevation of Privilege (ingest gate) — see docs/src/threat_model.md
  • Binary analysis evidence confirming no known CVEs in shipped crates (cargo-audit, cargo-deny)
  • Threat model mitigations linked to traceability matrix entries — see docs/src/traceability.md (Rigour in Defence updated ✅)
  • Japanese translation available at docs/ja/src/threat_model.md
  • Closed: #93 via PR #143

Phase 2: Japan Adaptation via GCLI (6 – 12 Months)

Target: CLS Level 4, JC-STAR STAR-1/2, Cyber Trust Mark / ISO 27001

Milestone 2.0: Mutual Recognition Framework (GCLI + Japan-Singapore MoC) 🔲 Planned

Two complementary mechanisms enable Japan market entry without duplicate certification:

  1. GCLI — the multilateral framework (10+ countries) underpinning the overall Singapore-first strategy.
  2. Direct Japan-Singapore MoC (signed March 2026, effective 1 June 2026) — bilateral mutual recognition between JC-STAR and CLS. A valid JC-STAR label is accepted as-is under CLS; no re-mapping of certification data is required.

Deliverables for this milestone:

  • Compliance pathway guide covering both the GCLI route and the direct MoC route for Japan-based customers
  • JC-STAR label validation and attestation module (edgesentry_rs::compliance::jcstar) — see #121
  • CLS ↔ JC-STAR level equivalence table (pending publication by CSA/METI; monitor CSA and METI/IPA pages)
  • MRA fast-track guidance for customers holding Finnish, German, or Korean IoT certification
  • See #94

Milestone 2.1: JC-STAR STAR-1/2 Alignment 🔲 Planned

  • Self-checklist and implementation guidance based on Japan’s IoT Product Security Conformity Assessment criteria
  • See #82

Milestone 2.2: Edge Intelligence 🔲 Planned

  • edgesentry-summary — data summarisation logic for high-performance Japanese sensors over bandwidth-constrained links. See #83
  • edgesentry-detector — local anomaly detection with signed audit evidence attached to results. See #84

Milestone 2.3: Cross-Border Education Program 🔲 Planned

  • Joint technical white paper to help Japanese companies bid on Singapore public-infrastructure projects
  • See #85

Milestone 2.4: Cyber Trust Mark / ISO 27001 Organisational Track 🔲 Planned

Singapore’s Cyber Trust Mark becomes mandatory for Critical Information Infrastructure (CII) operators from 2026–27. It is the organisational counterpart to CLS (which is product-level). B2B and government customers in Singapore will increasingly require vendors to support this track.

  • Map EdgeSentry-RS implementation evidence to Cyber Trust Mark assessment categories
  • ISO 27001 control alignment documentation
  • See #95

Milestone 2.6: immugate WORM Storage Connector

Moved to the commercial compliance layer.


Milestone 2.7: ISO 19650 Information Container Schema 🔲 Planned

ISO 19650 defines the framework for managing information over the whole life cycle of a built asset using BIM. This milestone reframes each audit record as an ISO 19650 information container, enabling interoperability with third-party BIM tools and positioning the edgesentry-rs audit chain as a de facto standard for construction inspection traceability.

  • edgesentry_rs::audit::iso19650 — information container payload schema (OSS)
  • Structured BIM status transitions: WIP → Shared → Published, with signed state change records
  • Conformant metadata fields (revision, suitability, classification) mapped to the existing hash-chain record format
  • Interoperability documentation for third-party BIM tool integration
  • This milestone is the audit-crate implementation of the ISO 19650 layer described in the Inspect roadmap

Milestone 2.5: CLS(MD) — Medical Device Variant 🔲 Planned

Singapore launched CLS for Medical Devices (CLS(MD)) in October 2024. If medical IoT is a target market, specific variant requirements apply.

  • CLS(MD) gap analysis against current implementation
  • Medical device–specific requirements identification
  • See #96

Phase 3: Global Convergence — “The European Horizon” (12 – 24 Months)

Target: EU CRA, UK PSTI Act, IEC 62443-4-2 (CII/OT), CCoP 2.0

Milestone 3.1: EU CRA Compliance Research 🔲 Planned

  • Full mapping to ETSI EN 303 645 as a passport for the European market
  • The Singapore CLS foundation covers the majority of CRA requirements with minimal additional work

Milestone 3.2: UK PSTI Act Alignment 🔲 Planned

The UK Product Security and Telecommunications Infrastructure (PSTI) Act aligns with ETSI EN 303 645 and became effective January 2026. Given CLS compliance, this requires near-zero additional implementation.

  • Gap analysis between CLS Level 3 and UK PSTI requirements
  • PSTI compliance statement documentation
  • See #97

Milestone 3.3: IEC 62443-4-2 + Hardware RoT 🔲 Planned

IEC 62443-4-2 governs component-level requirements for Critical Information Infrastructure (CII) and OT markets. It requires hardware Root of Trust (TPM/HSM), RBAC, and Privileged Access Management (PAM) — distinct from ETSI EN 303 645.

  • IEC 62443-4-2 component requirement mapping
  • HSM integration via edgesentry-bridge for hardware-backed key storage (CLS Level 4)
  • RBAC/PAM design guidance for deployers
  • See #54 and #98

Milestone 3.4: CCoP 2.0 / MTCS Tier 3 🔲 Planned

Singapore’s Cybersecurity Code of Practice 2.0 (CCoP 2.0) is the operational compliance requirement for CII sectors. MTCS Tier 3 applies if the platform has cloud or SaaS components targeting government contracts.

  • CCoP 2.0 operational requirement mapping
  • MTCS Tier 3 applicability assessment for cloud deployment scenarios
  • See #99

Milestone 3.5: Formal Verification & Hardening 🔲 Planned

  • Advanced memory safety and vulnerability hardening to withstand third-party binary analysis required for CLS Level 4

Milestone 3.6: Reference Architecture for AI Robotics 🔲 Planned

  • Reference design for tamper-evident decision auditing in autonomous mobile robots (AMR) and inspection drones

Sustainable Ecosystem Strategy

Following the DuckDB model — a lightweight embeddable core that spreads via libraries rather than platforms:

  1. “In-Process” Security — Embed as a library inside existing C++ applications regardless of OS or hardware, just as DuckDB embeds inside Python and Java processes.

  2. Open Compliance — OSS the “how to achieve security” knowledge, so no single vendor controls the compliance pathway; the standard becomes public infrastructure.

  3. Collaborative Learning — Provide a shared Rust codebase as a cross-company learning environment to develop the next generation of IoT security engineers.

Compliance Traceability Matrix

This page maps each Singapore CLS / iM8 clause and corresponding ETSI EN 303 645 provision to the source code that satisfies it. Japan JC-STAR cross-references and SS 711:2025 design principle alignment are included for each row.

Legend:

  • ✅ Implemented
  • ⚠️ Partial
  • 🔲 Planned
  • ➖ Not in scope

SS 711:2025 Design Principles Coverage

Singapore’s national IoT standard SS 711:2025 defines four principles. See the Roadmap for the full module mapping.

| Principle | SS 711:2025 Requirement | Status |
| --- | --- | --- |
| Secure by Default | Unique device identity, signed OTA updates | identity.rs, update.rs |
| Rigour in Defence | STRIDE threat model, tamper detection | ✅ Hash chain (integrity.rs) + STRIDE threat model |
| Accountability | Audit trail, operation logs, RBAC design | ingest/ (AuditLedger, OperationLog) |
| Resiliency | Deny-by-default networking, DoS protection | ingest/network_policy.rs |


CLS Level 3 / ETSI EN 303 645 — Core Requirements

CLS-01 / §5.1 — No universal default passwords

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-1 R3.1 |
| Requirement | Devices must not use universal default credentials |
| Status | ➖ Out of scope — this project implements software audit records, not device credential management |

CLS-02 / §5.2 — Implement a means to manage reports of vulnerabilities

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-1 R4.1 |
| Requirement | A published, actionable vulnerability reporting channel with defined SLAs |
| Status | ✅ Implemented |
| Implementation | SECURITY.md — published disclosure policy with supported versions, private reporting via GitHub advisory, acknowledgement SLA (3 business days), patch SLA (30 days critical/high; 90 days medium/low), and defined in/out-of-scope |
| Implementation | GitHub private vulnerability reporting enabled — reporters use the Security Advisories form |

CLS-03 / §5.3 — Keep software updated

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-2 R2.2 |
| Requirement | Software update packages must be signed and verified before installation |
| Status | ✅ Implemented |
| Implementation | UpdateVerifier::verify checks BLAKE3 payload hash then Ed25519 publisher signature before allowing installation; failed checks are logged as UpdateVerifyDecision::Rejected in UpdateVerificationLog (src/update.rs) |
| Tests | tests/unit/update_tests.rs — covers accepted path, tampered payload, invalid signature, unknown publisher, multi-publisher isolation |

CLS-04 / §5.4 — Securely store sensitive security parameters

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-1 R1.2 |
| Requirement | Private keys must be stored securely; a key registration process must exist |
| Status | ✅ Implemented |
| Implementation | Public key registry: IntegrityPolicyGate::register_device (src/ingest/policy.rs:20) |
| Implementation | Key generation CLI: eds keygen (src/lib.rs — generate_keypair) |
| Implementation | Key inspection CLI: eds inspect-key (src/lib.rs — inspect_key) |
| Implementation | Provisioning and rotation guidance: Key Management |
| Note | HSM-backed key storage (CLS Level 4) is planned in #54 |

CLS-05 / §5.5 — Communicate securely

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-1 R1.1 |
| Requirement | Data must be transmitted with authenticity guarantees |
| Status | ✅ Implemented |
| Implementation — record authenticity | Every AuditRecord carries an Ed25519 signature over its BLAKE3 payload hash — build_signed_record (src/agent.rs), sign_payload_hash (src/identity.rs:12) |
| Implementation — channel confidentiality (HTTP) | transport-tls feature: serve_tls() with rustls TLS 1.2/1.3, IP allowlist enforced before handshake, eds serve-tls --tls-cert / --tls-key CLI — closed #176 (src/transport/tls.rs) |
| Implementation — channel confidentiality (MQTT) | transport-mqtt-tls feature: MqttTlsConfig with CA cert path, rustls ClientConfig via rumqttc::TlsConfiguration::Rustls, eds serve-mqtt --tls-ca-cert CLI — closed #180 (src/transport/mqtt.rs) |

CLS-06 / §5.6 — Minimise exposed attack surfaces

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-1 R3.2 |
| Requirement | Only necessary interfaces and services should be exposed |
| Status | ✅ Implemented |
| Implementation — IP allowlist | NetworkPolicy provides deny-by-default IP/CIDR allowlist enforcement (src/ingest/network_policy.rs) |
| Implementation — HTTP transport | ingest_handler enforces NetworkPolicy::check(source_ip) before any crypto verification; returns 403 Forbidden for unlisted sources (src/transport/http.rs) |
| Implementation — MQTT transport | serve_mqtt exposes a single subscribe-only topic; no administrative interface; broker-level ACLs recommended (src/transport/mqtt.rs) |
| Note | Network-level controls (VPN, firewall rules) remain the deployer's responsibility |
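A deny-by-default allowlist of this kind can be sketched in std-only Rust. The struct and method names echo the docs, but this is an illustration under assumptions (IPv4-only, simple prefix matching), not the crate's implementation.

```rust
use std::net::Ipv4Addr;

// Deny-by-default policy: a source IP is accepted only if it falls
// inside an explicitly registered CIDR range; everything else is
// rejected. An empty allowlist therefore denies all traffic.
struct NetworkPolicy {
    allowed: Vec<(Ipv4Addr, u8)>, // (network address, prefix length)
}

impl NetworkPolicy {
    fn new() -> Self {
        NetworkPolicy { allowed: Vec::new() }
    }

    fn allow_cidr(&mut self, network: Ipv4Addr, prefix: u8) {
        self.allowed.push((network, prefix));
    }

    // Compare the source against each allowed network under its mask.
    fn check(&self, source: Ipv4Addr) -> bool {
        self.allowed.iter().any(|&(net, prefix)| {
            let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
            (u32::from(source) & mask) == (u32::from(net) & mask)
        })
    }
}

fn main() {
    let mut policy = NetworkPolicy::new();
    policy.allow_cidr(Ipv4Addr::new(10, 0, 1, 0), 24);

    assert!(policy.check(Ipv4Addr::new(10, 0, 1, 42)));   // inside 10.0.1.0/24
    assert!(!policy.check(Ipv4Addr::new(192, 168, 0, 1))); // denied by default
}
```

Performing this check before any signature work keeps cheap rejection ahead of expensive cryptography, which is the ordering the HTTP handler row above describes.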

CLS-07 / §5.7 — Ensure software integrity

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-1 R1.3 |
| Requirement | The device must verify the integrity of software and data |
| Status | ✅ Implemented |
| Implementation — payload hash | BLAKE3 hash over raw payload: compute_payload_hash (src/integrity.rs:12) |
| Implementation — hash chain | prev_record_hash links each record to its predecessor; insertion/deletion detected by verify_chain (src/integrity.rs:35) |
| Tests | tampered_lift_demo_chain_is_detected (src/lib.rs:338) |

CLS-08 / §5.8 — Ensure that personal data is secure

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-2 R4.1 |
| Requirement | Personal data transmitted or stored must be protected |
| Status | ➖ Out of scope — audit records do not contain personal data in the current implementation |

CLS-09 / §5.9 — Make systems resilient to outages

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-2 R3.2 |
| Requirement | The device should remain operational and recover gracefully |
| Status | ⚠️ Partial |
| Implementation | OfflineBuffer<S> accumulates signed records during connectivity loss and replays them in insertion order via flush when the link recovers. Duplicate records from replay are treated as already-accepted and do not cause failures (src/buffer/mod.rs) |
| Implementation | Pluggable BufferStore trait — volatile InMemoryBufferStore (default) and durable SqliteBufferStore behind the buffer-sqlite feature flag |
| Gap | Full HA (active–active failover, network-level redundancy) remains the deployer's responsibility |

CLS-10 / §5.10 — Examine system telemetry data

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-2 R3.1 |
| Requirement | Security-relevant events must be logged and replay/reorder attacks must be detected |
| Status | ✅ Implemented |
| Implementation — sequence | Strict monotonic sequence per device; duplicates and out-of-order records rejected by IngestState::verify_and_accept (src/ingest/verify.rs:45) |
| Implementation — audit trail | Accept/reject decisions persisted via IngestService and AuditLedger (src/ingest/storage.rs) |

CLS-11 / §5.11 — Make it easy for users to delete user data

| Item | Detail |
| --- | --- |
| JC-STAR | |
| Requirement | Users should be able to delete personal data |
| Status | ➖ Out of scope |

CLS Level 4 — Additional Requirements

CLS Level 4 — Hardware Security Module (HSM)

| Item | Detail |
| --- | --- |
| JC-STAR | STAR-2 R1.4 |
| Requirement | Private keys must be stored and used inside an HSM |
| Status | 🔲 Planned |
| Gap | HSM-backed key storage planned for Phase 3 (IEC 62443-4-2 / CII/OT). See #54 and #98 |

JC-STAR Additional Requirements

STAR-1 R2.1 — Replay and reorder prevention

| Item | Detail |
| --- | --- |
| CLS | CLS-10 |
| Requirement | Replay attacks must be detected and rejected |
| Status | ✅ Implemented |
| Implementation | seen HashSet in IngestState rejects duplicate (device_id, sequence) pairs (src/ingest/verify.rs:56) |
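The replay check can be illustrated with a per-device sequence gate. This std-only sketch tracks only the last accepted sequence per device — a simplification of the seen-set the docs describe — but it rejects the same two cases: replayed/duplicate sequences and out-of-order sequences.

```rust
use std::collections::HashMap;

// Illustrative per-device sequence gate (not the crate's actual
// IngestState): accept only the next expected sequence number,
// which rejects both replays and reordering.
struct IngestState {
    last_seq: HashMap<String, u64>,
}

impl IngestState {
    fn new() -> Self {
        IngestState { last_seq: HashMap::new() }
    }

    fn verify_and_accept(&mut self, device_id: &str, sequence: u64) -> Result<(), &'static str> {
        // First record for a device is expected to carry sequence 0.
        let expected = self.last_seq.get(device_id).map(|s| s + 1).unwrap_or(0);
        if sequence < expected {
            return Err("replayed or duplicate sequence");
        }
        if sequence > expected {
            return Err("out-of-order sequence");
        }
        self.last_seq.insert(device_id.to_string(), sequence);
        Ok(())
    }
}

fn main() {
    let mut state = IngestState::new();
    assert!(state.verify_and_accept("lift-01", 0).is_ok());
    assert!(state.verify_and_accept("lift-01", 1).is_ok());
    assert!(state.verify_and_accept("lift-01", 1).is_err()); // replay rejected
    assert!(state.verify_and_accept("lift-01", 5).is_err()); // gap rejected
}
```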

Coverage Summary

| Level | Total clauses | ✅ Implemented | ⚠️ Partial | 🔲 Planned | ➖ Out of scope |
| --- | --- | --- | --- | --- | --- |
| CLS Level 3 | 11 | 7 | 1 | 0 | 3 |
| CLS Level 4 | 1 | 0 | 0 | 1 | 0 |
| JC-STAR additions | 1 | 1 | 0 | 0 | 0 |

Note: “Out of scope” clauses cover device-level concerns (passwords, network interfaces, personal data) that are the responsibility of the deployer, not the audit-record library.

STRIDE Threat Model

This document is a formal threat-modelling artifact produced for Singapore CLS Level 3 assessment under SS 711:2025 Rigour in Defence and the IMDA IoT Cyber Security Guide threat-modelling checklist. It covers all attack surfaces of the EdgeSentry-RS system: API, communication channel, and storage.

Methodology: STRIDE (Microsoft)
Scope: edgesentry-rs library and edgesentry-bridge FFI crate — device-side signing, cloud-side ingest, HTTP transport, operation log, and audit ledger
Assessor reference: SS 711:2025 §4.2 Rigour in Defence; IMDA IoT Cyber Security Guide §3 Threat Modelling Checklist


System Overview

┌─────────────────────────────────────────────────────────────────┐
│  Field Device (edge)                                            │
│  ┌────────────────────────────────────────────────────────────┐ │
│  │  build_signed_record()                                     │ │
│  │  payload → BLAKE3 hash → Ed25519 sign → AuditRecord       │ │
│  └────────────────────────────────────────────────────────────┘ │
└────────────────────────────┬────────────────────────────────────┘
                             │ POST /api/v1/ingest (JSON over HTTPS)
                             ▼
┌─────────────────────────────────────────────────────────────────┐
│  Cloud Ingest Layer                                             │
│  ┌────────────────┐  ┌─────────────────┐  ┌─────────────────┐  │
│  │ NetworkPolicy  │  │ IntegrityPolicy │  │ AsyncIngest     │  │
│  │ IP/CIDR gate   │→ │ Gate            │→ │ Service         │  │
│  │ (deny-default) │  │ (signature +    │  │ (hash chain +   │  │
│  └────────────────┘  │  chain verify)  │  │  sequence)      │  │
│                      └─────────────────┘  └────────┬────────┘  │
│                                                     │           │
│            ┌────────────────────────────────────────┤           │
│            ▼                          ▼             ▼           │
│  ┌──────────────────┐  ┌─────────────────────┐  ┌──────────┐   │
│  │  Raw Data Store  │  │  Audit Ledger       │  │ Op. Log  │   │
│  │  (S3 / memory)   │  │  (Postgres / memory)│  │          │   │
│  └──────────────────┘  └─────────────────────┘  └──────────┘   │
└─────────────────────────────────────────────────────────────────┘

STRIDE Threat Analysis

S — Spoofing (Device Identity)

Threat: An attacker impersonates a legitimate field device by forging the device_id field or replaying records signed by a compromised key.

Attack surface: POST /api/v1/ingest — AuditRecord.device_id and AuditRecord.signature fields.

| Sub-threat | Description |
| --- | --- |
| S-1 | Attacker sends records with a valid device_id but a self-generated Ed25519 key (unregistered) |
| S-2 | Attacker replays a previously captured, legitimately signed record |
| S-3 | Attacker sends records with a forged device_id that does not match the signing key |

Mitigations:

| ID | Mitigation | Code location |
| --- | --- | --- |
| M-S-1 | Device public keys are pre-registered on the cloud side; any signature that does not verify against the registered key is rejected with IngestError::UnknownDevice | ingest/policy.rs IntegrityPolicyGate::enforce() |
| M-S-2 | Monotonic sequence numbers and prev_record_hash chain continuity are enforced; replayed records are detected as duplicate sequences | ingest/verify.rs check_sequence() |
| M-S-3 | Ed25519 signatures bind the payload hash to the private key; a forged device_id with the wrong key fails signature verification | identity.rs verify_payload_signature() |

Residual risk: If a device’s private key is physically extracted, records can be forged with valid signatures. Hardware-backed key storage (TPM/SE) is a device-layer control outside the scope of this library; it is noted in the Roadmap.


T — Tampering (Audit Records)

Threat: An attacker modifies an audit record or its raw payload in transit or at rest.

Attack surface: Wire format (JSON body), raw data store (S3 objects), audit ledger (database rows).

| Sub-threat | Description |
| --- | --- |
| T-1 | Attacker modifies raw_payload_hex in the HTTP request body |
| T-2 | Attacker modifies AuditRecord.payload_hash to match a different payload |
| T-3 | Attacker flips bytes in a stored S3 object after accepted ingest |
| T-4 | Attacker modifies prev_record_hash to break or redirect the chain |

Mitigations:

| ID | Mitigation | Code location |
| --- | --- | --- |
| M-T-1 | On every ingest the cloud recomputes BLAKE3(raw_payload) and compares it to record.payload_hash; mismatch → PayloadHashMismatch rejection | ingest/storage.rs IngestService::ingest() |
| M-T-2 | payload_hash is covered by the Ed25519 signature; if the hash is changed the signature no longer verifies | identity.rs verify_payload_signature() |
| M-T-3 | Post-ingest tampering of stored objects is detectable by re-verifying the hash from the ledger against the object content; this is an operational control | described in the Operations Runbook |
| M-T-4 | prev_record_hash is validated against the previous accepted record's hash(); a break in continuity rejects all subsequent records | ingest/verify.rs check_chain_link() |

Residual risk: Tampering of stored objects after acceptance is a storage-layer concern. Enabling S3 Object Lock (WORM) or database row-level checksums at the deployment layer eliminates this residual.
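The recompute-and-compare step in M-T-1 can be shown as a standalone gate. This is a std-only sketch: digest uses DefaultHasher as a non-cryptographic stand-in for BLAKE3, and the function shape is illustrative, not the crate's API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest; the real check recomputes BLAKE3. DefaultHasher is
// not cryptographic and is used only to keep this sketch std-only.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// The ingest side never trusts the client-supplied payload hash: it
// recomputes the digest from the raw bytes and rejects on mismatch.
fn check_payload(raw_payload: &[u8], claimed_hash: u64) -> Result<(), &'static str> {
    if digest(raw_payload) == claimed_hash {
        Ok(())
    } else {
        Err("PayloadHashMismatch")
    }
}

fn main() {
    let payload = b"door sensor: nominal";
    let good = digest(payload);
    assert!(check_payload(payload, good).is_ok());
    assert!(check_payload(b"door sensor: TAMPERED", good).is_err());
}
```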


R — Repudiation (Operation Logs)

Threat: A device or operator denies that a specific ingest event occurred, or claims a record was never sent / was rejected without evidence.

Attack surface: OperationLog entries written during ingest; audit ledger append operations.

| Sub-threat | Description |
| --- | --- |
| R-1 | Device claims a record was never submitted |
| R-2 | Operator claims a record was rejected when it was accepted (or vice versa) |
| R-3 | Operation log entries are deleted or modified after the fact |

Mitigations:

| ID | Mitigation | Code location |
| --- | --- | --- |
| M-R-1 | Every ingest attempt — accepted or rejected — writes an OperationLogEntry with device_id, sequence, decision, and message; the log is written before the ingest function returns | ingest/storage.rs log_acceptance() / log_rejection() |
| M-R-2 | IngestDecision::Accepted / Rejected is persisted to the operation log atomically with the decision; the record's signed hash serves as cryptographic proof of submission | ingest/storage.rs OperationLogEntry |
| M-R-3 | Append-only operation logs (Postgres INSERT-only pattern; no DELETE/UPDATE on log rows) prevent after-the-fact modification | ingest/storage.rs PostgresOperationLog; enforcement at the DB-user permission level |

Residual risk: The library provides the operation log data; protecting that data from privileged insider deletion requires database-level controls (role separation, audit logging at the DB layer).
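The append-only pattern can be sketched as a log type whose only mutating operation is append. Field names follow the entries described above; the overall shape is illustrative, not the crate's actual type.

```rust
// Both accepted and rejected decisions leave an entry, and the type
// deliberately offers no update or delete, mirroring the INSERT-only
// database pattern described in M-R-3.
#[derive(PartialEq)]
enum Decision {
    Accepted,
    Rejected,
}

struct OperationLogEntry {
    device_id: String,
    sequence: u64,
    decision: Decision,
    message: String,
}

struct OperationLog {
    entries: Vec<OperationLogEntry>,
}

impl OperationLog {
    fn new() -> Self {
        OperationLog { entries: Vec::new() }
    }

    // The only mutating operation: append a decision record.
    fn append(&mut self, device_id: &str, sequence: u64, decision: Decision, message: &str) {
        self.entries.push(OperationLogEntry {
            device_id: device_id.to_string(),
            sequence,
            decision,
            message: message.to_string(),
        });
    }

    fn len(&self) -> usize {
        self.entries.len()
    }

    fn rejected(&self) -> usize {
        self.entries.iter().filter(|e| e.decision == Decision::Rejected).count()
    }
}

fn main() {
    let mut log = OperationLog::new();
    log.append("lift-01", 0, Decision::Accepted, "ok");
    log.append("lift-01", 1, Decision::Rejected, "bad signature");

    // Every ingest attempt, accepted or rejected, leaves evidence.
    assert_eq!(log.len(), 2);
    assert_eq!(log.rejected(), 1);
}
```

As the residual-risk note says, protecting such a log from privileged deletion is a database-layer concern, not something the type itself can enforce.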


I — Information Disclosure (Payload Storage)

Threat: Sensitive inspection payload data is exposed to an unauthorised party.

Attack surface: HTTP request body (raw_payload_hex), raw data store (S3), audit ledger, operation log.

| Sub-threat | Description |
| --- | --- |
| I-1 | Eavesdropping on the HTTP transport channel |
| I-2 | Unauthorised read access to S3 objects or Postgres rows |
| I-3 | Payload bytes appear in error messages or logs |

Mitigations:

| ID | Mitigation | Code location |
| --- | --- | --- |
| M-I-1 | The HTTP transport is designed to run behind TLS termination (load balancer / Nginx / Cloudflare); raw payload is hex-encoded in the JSON body and must be carried over HTTPS | transport/http.rs — TLS is a deployment-layer control; noted in Operations Runbook |
| M-I-2 | Raw payloads are stored by object_ref under the caller-specified key; access control is enforced by the storage layer (S3 bucket policy, Postgres GRANT); the library does not expose read APIs to unauthenticated callers | ingest/storage.rs RawDataStore::put() |
| M-I-3 | Error messages include device_id and sequence but never the raw payload bytes; tracing spans log payload_bytes length only | ingest/storage.rs #[instrument(skip(raw_payload))] |

Residual risk: Encryption at rest for S3 objects and Postgres rows is a deployment-layer control (S3 SSE-KMS, Postgres pgcrypto or TDE). TLS 1.3 for the ingest HTTP endpoint is addressed in the Roadmap (issue #73).


D — Denial of Service (Network Policy)

Threat: An attacker floods the ingest endpoint to exhaust resources and prevent legitimate devices from submitting records.

Attack surface: POST /api/v1/ingest HTTP endpoint; NetworkPolicy check; AsyncIngestService tokio task pool.

| Sub-threat | Description |
|---|---|
| D-1 | High-volume requests from untrusted IPs overwhelm the handler |
| D-2 | Large raw_payload_hex values exhaust memory |
| D-3 | Malformed JSON bodies consume parse time |

Mitigations:

| ID | Mitigation | Code location |
|---|---|---|
| M-D-1 | NetworkPolicy deny-by-default: all IPs and CIDR ranges are blocked unless explicitly allowlisted; unapproved source IPs receive 403 Forbidden before any cryptographic work is performed | ingest/network_policy.rs NetworkPolicy::check(); transport/http.rs handler |
| M-D-2 | Axum’s default request body size limit (2 MB) caps payload size; the raw_payload_hex field is bounded by the HTTP body limit | transport/http.rs — axum default body limit |
| M-D-3 | JSON deserialization errors return 400 Bad Request immediately; no downstream processing occurs | transport/http.rs — axum Json extractor |

Residual risk: Rate limiting per source IP and per device is not yet implemented in the library layer; it should be added at the reverse proxy or API gateway layer in production deployments. Issue #73 (TLS, P2) is the planned follow-up milestone.


E — Elevation of Privilege (Ingest Gate)

Threat: An attacker bypasses the ingest validation gate to write arbitrary records to the ledger or raw data store.

Attack surface: IntegrityPolicyGate, ingest_handler, and the service registration API (register_device).

| Sub-threat | Description |
|---|---|
| E-1 | Attacker calls ingest with a record for an unregistered device and succeeds |
| E-2 | Attacker submits a record with a valid sequence/chain for a device they do not control |
| E-3 | Attacker registers a malicious device by calling register_device directly |

Mitigations:

| ID | Mitigation | Code location |
|---|---|---|
| M-E-1 | IntegrityPolicyGate::enforce() is called unconditionally before any storage write; unknown devices fail with IngestError::UnknownDevice | ingest/policy.rs |
| M-E-2 | Signature verification uses the registered public key for device_id; a valid chain cannot be forged without the device’s private key | identity.rs verify_payload_signature() |
| M-E-3 | register_device is a privileged operation called only by the application layer at startup; the HTTP ingest handler does not expose device registration over the network | transport/http.rs — no registration endpoint; ingest/storage.rs AsyncIngestService::register_device() |

Residual risk: If the application layer that calls register_device is compromised, arbitrary devices can be registered. This is an operational security control: registration should be gated behind a separate privileged API with strong authentication.


Binary Analysis Evidence

cargo audit — Advisory Database Scan

Command and output captured at document generation time (advisory database commit: current):

cargo audit

Result: All detected advisories are pre-approved in deny.toml (see table below):

| Advisory | Crate | Version | Status | Reason |
|---|---|---|---|---|
| RUSTSEC-2026-0049 | rustls-webpki | 0.101.7 | Ignored (#125) | Pinned by the aws-smithy-http-client legacy hyper-rustls 0.24 → rustls 0.21 chain; no 0.101.x patch exists. The 0.103.x instance in the tree is updated to 0.103.10. |
| RUSTSEC-2026-0049 | rustls-webpki | 0.102.8 | Ignored (#166) | Pinned by the rumqttc 0.25 → rustls 0.22 chain; fix requires rumqttc to adopt rustls 0.23+. No CRL revocation calls in the codebase; unexploitable as-is. |

All remaining scanned crate dependencies: no known CVEs.

To reproduce:

cargo install cargo-audit --locked
cargo audit

cargo deny check — Policy Enforcement

Command:

cargo deny check

Result: advisories ok, bans ok, licenses ok, sources ok

The deny.toml policy enforces:

  • Advisories: all vulnerabilities denied by default except explicitly ignored entries with documented reasons
  • Bans: multiple crate versions warned; wildcard dependencies warned
  • Licenses: only MIT, Apache-2.0, BSD-2-Clause, BSD-3-Clause, Unicode-3.0, CC0-1.0, Zlib permitted; one exception: cbindgen (MPL-2.0, build-only header generator — copyleft does not extend to generated artifacts or source)
  • Sources: only crates.io and trusted git sources

To reproduce:

cargo install cargo-deny --locked
cargo deny check

Threat-to-Mitigation Traceability Summary

| STRIDE Category | Threat ID | Mitigation ID | Source File | Status |
|---|---|---|---|---|
| Spoofing | S-1 | M-S-1 | ingest/policy.rs | ✅ |
| Spoofing | S-2 | M-S-2 | ingest/verify.rs | ✅ |
| Spoofing | S-3 | M-S-3 | identity.rs | ✅ |
| Tampering | T-1 | M-T-1 | ingest/storage.rs | ✅ |
| Tampering | T-2 | M-T-2 | identity.rs | ✅ |
| Tampering | T-3 | M-T-3 | Operational control | ⚠️ Deployment |
| Tampering | T-4 | M-T-4 | ingest/verify.rs | ✅ |
| Repudiation | R-1 | M-R-1 | ingest/storage.rs | ✅ |
| Repudiation | R-2 | M-R-2 | ingest/storage.rs | ✅ |
| Repudiation | R-3 | M-R-3 | DB permission layer | ⚠️ Deployment |
| Information Disclosure | I-1 | M-I-1 | Deployment (TLS) | ⚠️ #73 |
| Information Disclosure | I-2 | M-I-2 | Storage access control | ⚠️ Deployment |
| Information Disclosure | I-3 | M-I-3 | ingest/storage.rs | ✅ |
| Denial of Service | D-1 | M-D-1 | ingest/network_policy.rs, transport/http.rs | ✅ |
| Denial of Service | D-2 | M-D-2 | transport/http.rs (axum body limit) | ✅ |
| Denial of Service | D-3 | M-D-3 | transport/http.rs | ✅ |
| Elevation of Privilege | E-1 | M-E-1 | ingest/policy.rs | ✅ |
| Elevation of Privilege | E-2 | M-E-2 | identity.rs | ✅ |
| Elevation of Privilege | E-3 | M-E-3 | transport/http.rs | ✅ |

Legend: ✅ Implemented in library code — ⚠️ Deployment-layer control (outside library scope)

SBOM and Vendor Disclosure Checklist

This page satisfies the IMDA IoT Cyber Security Guide lifecycle support evidence requirement for Singapore CLS Level 3 assessment. It covers the SBOM format, generation procedure, and vendor disclosure checklist responses for the five mandatory categories.


Software Bill of Materials (SBOM)

Format

EdgeSentry-RS publishes SBOMs in CycloneDX JSON format (spec version 1.3), generated from Cargo.lock at release time using cargo-cyclonedx.

Published artifacts

Each GitHub Release includes two SBOM files as release assets. Download them from the Releases page:

https://github.com/edgesentry/edgesentry-rs/releases/tag/v<version>
| File | Scope |
|---|---|
| edgesentry-rs-&lt;version&gt;.cdx.json | edgesentry-rs crate and all transitive dependencies |
| edgesentry-bridge-&lt;version&gt;.cdx.json | edgesentry-bridge C/C++ FFI crate and its dependencies |

For example, for v0.1.2:

  • https://github.com/edgesentry/edgesentry-rs/releases/download/v0.1.2/edgesentry-rs-0.1.2.cdx.json
  • https://github.com/edgesentry/edgesentry-rs/releases/download/v0.1.2/edgesentry-bridge-0.1.2.cdx.json

Generating the SBOM locally

cargo install cargo-cyclonedx --locked
cargo cyclonedx --format json --all
# Output: crates/edgesentry-rs/edgesentry-rs.cdx.json
#         crates/edgesentry-bridge/edgesentry-bridge.cdx.json

Inspecting dependency counts

Run after generating to see the current component count (changes with every dependency update):

cargo cyclonedx --format json --all
python3 -c "
import json
for f in ['crates/edgesentry-rs/edgesentry-rs.cdx.json',
          'crates/edgesentry-bridge/edgesentry-bridge.cdx.json']:
    bom = json.load(open(f))
    print(f\"{f}: {len(bom.get('components', []))} components\")
"

Continuous supply-chain monitoring

  • cargo-audit — run on every CI build and PR; checks all dependencies against the RustSec Advisory Database
  • cargo-deny — enforces licence policy and bans on every CI build
  • Dependabot — weekly automated dependency version update PRs

Vendor Disclosure Checklist

The IMDA IoT Cyber Security Guide requires responses across five categories. The table below documents EdgeSentry-RS’s position for each.

1. Encryption Support

| Item | Response |
|---|---|
| Algorithms used | Ed25519 (signing), BLAKE3 (hashing) |
| Key length | Ed25519: 256-bit; BLAKE3 output: 256-bit |
| Random number generation | OS CSPRNG via rand::OsRng — no custom RNG |
| Transport encryption | Record-level: Ed25519 signature over payload hash. Native TLS transport is provided: eds serve-tls --tls-cert / --tls-key (rustls TLS 1.2/1.3, HTTP) and eds serve-mqtt --tls-ca-cert (MQTT over TLS). See CLS-05 in the Traceability Matrix. |
| Key storage | Public-key registry in memory (IntegrityPolicyGate); private key files managed by the deployer. HSM-backed storage planned: #54 |
| Implementation | crates/edgesentry-rs/src/identity.rs, crates/edgesentry-rs/src/integrity.rs |

2. Identification and Authentication

| Item | Response |
|---|---|
| Device authentication method | Ed25519 asymmetric key pair: device signs each record with its private key; cloud verifies against the registered public key |
| Credential storage | Private key held exclusively on the device; public key registered on the cloud side via IntegrityPolicyGate::register_device |
| Default credentials | None — each device generates a unique keypair via eds keygen |
| Brute-force protection | Signature verification is a single constant-time operation; no credential-based login surface exists |
| Route identity enforcement | cert_identity parameter in IngestService::ingest — mismatch between TLS client certificate identity and record.device_id causes immediate rejection |
| Implementation | crates/edgesentry-rs/src/identity.rs, crates/edgesentry-rs/src/ingest/policy.rs |

3. Data Protection

| Item | Response |
|---|---|
| Data in transit | Every AuditRecord carries an Ed25519 signature over its BLAKE3 payload hash — authenticity guaranteed at the record level regardless of transport |
| Data at rest | Raw payloads stored via RawDataStore (S3/MinIO); audit records via AuditLedger (PostgreSQL). Encryption at rest is the deployer’s responsibility (S3 SSE, Postgres column encryption) |
| Personal data | AuditRecord contains no personal data fields by design — object_ref points to a storage key; the payload body is stored separately |
| Data minimisation | Audit metadata (payload_hash, signature, prev_record_hash) is separated from the payload body — the cloud stores only the hash chain; raw data is stored independently via object_ref |
| Implementation | crates/edgesentry-rs/src/record.rs, crates/edgesentry-rs/src/ingest/storage.rs |

4. Network Protection

| Item | Response |
|---|---|
| Unnecessary ports/services | Library only — no network service is opened by edgesentry-rs. Transport is the deployer’s responsibility |
| Deny-by-default network policy | NetworkPolicy enforces an IP/CIDR allowlist; check(source_ip) is called before any cryptographic operation — all unlisted sources are rejected |
| DoS resilience | NetworkPolicy gate rejects unlisted sources before any cryptographic processing, limiting the attack surface. Full rate limiting is a deployer concern |
| Implementation | crates/edgesentry-rs/src/ingest/network_policy.rs |
| CLS reference | CLS-06 / ETSI EN 303 645 §5.6 |

5. Lifecycle Support

| Item | Response |
|---|---|
| Vulnerability reporting | GitHub private vulnerability reporting enabled. See SECURITY.md — SLA: acknowledge within 3 business days; patch within 30 days (critical/high) or 90 days (medium/low) |
| SBOM availability | CycloneDX JSON published with every GitHub Release (see above) |
| Dependency advisory scanning | cargo-audit on every CI build and PR against the RustSec Advisory Database |
| End-of-life policy | edgesentry-rs v0.x: current version supported. Security updates ship as patch releases |
| Software update integrity | UpdateVerifier checks the BLAKE3 payload hash and Ed25519 publisher signature before any update is applied — see CLS-03 |
| Supported versions | See SECURITY.md |
| CLS reference | CLS-02 / ETSI EN 303 645 §5.2 |

Traceability

This document satisfies Milestone 1.4 in the Roadmap. For the full clause-by-clause compliance mapping see the Compliance Traceability Matrix.

Concepts in edgesentry-rs

This document summarizes the core concepts used in this repository.

1. Tamper-evident design

The primary goal is not “perfect tamper prevention,” but “reliable tamper detection.”

  • Compute a hash from the original payload
  • Sign the hash with a device private key
  • Link records through a hash chain

Together, these mechanisms detect tampering, spoofing, and record reordering.

2. AuditRecord

The basic unit of evidence is AuditRecord. Key fields:

  • device_id: source device identity
  • sequence: monotonically increasing sequence number
  • timestamp_ms: event timestamp
  • payload_hash: hash of raw payload data
  • signature: signature over payload_hash
  • prev_record_hash: hash of the previous audit record
  • object_ref: reference to raw payload storage (for example, s3://...)

3. Hash and signature

3.1 Hash (integrity)

  • Purpose: fingerprint of payload content
  • Property: even a 1-byte payload change produces a different hash

3.2 Signature (authenticity)

  • Purpose: prove the payload hash was produced by a trusted device key
  • Verification: validate with the registered device public key

4. Hash chain continuity

Records are linked by prev_record_hash.

  • First record: prev_record_hash = zero_hash
  • Subsequent records: must match the previous record’s hash()

This detects insertion, deletion, and substitution inside the chain.
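The continuity rule can be sketched in a few lines of Rust. This is a toy illustration: std's DefaultHasher stands in for the 32-byte BLAKE3 record hash, and the Record struct and function names here are hypothetical, not the crate's API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy stand-in for the record's 32-byte BLAKE3 hash: real code would
// hash the record fields with blake3.
fn record_hash(device_id: &str, sequence: u64, payload: &[u8], prev: u64) -> u64 {
    let mut h = DefaultHasher::new();
    device_id.hash(&mut h);
    sequence.hash(&mut h);
    payload.hash(&mut h);
    prev.hash(&mut h);
    h.finish()
}

struct Record {
    sequence: u64,
    payload: Vec<u8>,
    prev_record_hash: u64, // zero for the first record
}

// Walk the chain: each record must link to the previous record's hash.
fn verify_chain(device_id: &str, records: &[Record]) -> bool {
    let mut expected_prev = 0u64; // zero_hash for the first record
    for r in records {
        if r.prev_record_hash != expected_prev {
            return false;
        }
        expected_prev = record_hash(device_id, r.sequence, &r.payload, r.prev_record_hash);
    }
    true
}

fn main() {
    // Build a 3-record chain, then tamper with the middle payload.
    let mut records = Vec::new();
    let mut prev = 0u64;
    for seq in 1..=3u64 {
        let payload = vec![seq as u8; 4];
        records.push(Record { sequence: seq, payload: payload.clone(), prev_record_hash: prev });
        prev = record_hash("lift-01", seq, &payload, prev);
    }
    assert!(verify_chain("lift-01", &records));
    records[1].payload[0] ^= 0xFF; // one-byte tamper
    assert!(!verify_chain("lift-01", &records)); // chain breaks at the next link
}
```

Tampering with any stored payload changes that record's recomputed hash, so the following record's prev_record_hash no longer matches and verification fails from that point onward.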

5. Sequence policy

sequence must increase per device as 1, 2, 3, …

  • Duplicate sequence values are rejected
  • Gaps or out-of-order sequences are rejected
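As a sketch, the rule amounts to tracking the last accepted sequence per device and accepting only its exact successor. The SequencePolicy type below is hypothetical; in the library this check is part of IntegrityPolicyGate.

```rust
use std::collections::HashMap;

/// Toy per-device strict-monotonic sequence check (illustrative names).
#[derive(Default)]
struct SequencePolicy {
    last_seen: HashMap<String, u64>,
}

impl SequencePolicy {
    /// Accept only the exact successor of the last accepted sequence
    /// (1 for a new device); duplicates, gaps, and reordering all fail.
    fn check(&mut self, device_id: &str, sequence: u64) -> Result<(), String> {
        let expected = self.last_seen.get(device_id).copied().unwrap_or(0) + 1;
        if sequence != expected {
            return Err(format!("expected sequence {expected}, got {sequence}"));
        }
        self.last_seen.insert(device_id.to_string(), sequence);
        Ok(())
    }
}

fn main() {
    let mut policy = SequencePolicy::default();
    assert!(policy.check("lift-01", 1).is_ok());
    assert!(policy.check("lift-01", 2).is_ok());
    assert!(policy.check("lift-01", 2).is_err()); // duplicate
    assert!(policy.check("lift-01", 4).is_err()); // gap (3 was skipped)
    assert!(policy.check("lift-01", 3).is_ok());  // next valid value
}
```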

6. Software update integrity

Before a device applies any firmware or software update, the update package must pass two checks via edgesentry_rs::update::UpdateVerifier:

  1. Payload hash — BLAKE3(raw_payload) must match the hash embedded in the SoftwareUpdate manifest
  2. Publisher signature — the Ed25519 signature over that hash must verify against a registered trusted publisher key

Every attempt (accepted or rejected) is appended to UpdateVerificationLog for auditing. This satisfies CLS-03 / ETSI EN 303 645 §5.3 / JC-STAR STAR-2 R2.2.

7. Network policy (deny-by-default)

edgesentry_rs::ingest::NetworkPolicy enforces a deny-by-default IP/CIDR allowlist for incoming connections. Callers call NetworkPolicy::check(source_ip) before passing a record to IngestService. Connections from unlisted addresses are rejected without reaching any cryptographic check.

Rules are additive: allow_ip(addr) for exact matches and allow_cidr("10.0.0.0/8") for CIDR blocks (IPv4 and IPv6). An empty policy denies everything.
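The deny-by-default behaviour can be illustrated with a minimal IPv4-only allowlist. Method names mirror the documented API, but the implementation below is a sketch; the real NetworkPolicy also handles IPv6 and parses CIDR strings.

```rust
use std::net::Ipv4Addr;

/// Minimal deny-by-default allowlist sketch (IPv4 only, illustrative).
#[derive(Default)]
struct Allowlist {
    ips: Vec<Ipv4Addr>,
    cidrs: Vec<(Ipv4Addr, u32)>, // (network, prefix length)
}

impl Allowlist {
    fn allow_ip(&mut self, ip: Ipv4Addr) {
        self.ips.push(ip);
    }
    fn allow_cidr(&mut self, network: Ipv4Addr, prefix: u32) {
        self.cidrs.push((network, prefix));
    }
    /// Deny-by-default: an empty policy rejects every source.
    fn check(&self, source: Ipv4Addr) -> bool {
        if self.ips.contains(&source) {
            return true;
        }
        self.cidrs.iter().any(|&(net, prefix)| {
            // Build the subnet mask; prefix 0 matches everything.
            let mask = if prefix == 0 { 0 } else { u32::MAX << (32 - prefix) };
            u32::from(source) & mask == u32::from(net) & mask
        })
    }
}

fn main() {
    let mut policy = Allowlist::default();
    assert!(!policy.check(Ipv4Addr::new(10, 0, 0, 1))); // empty policy denies all
    policy.allow_cidr(Ipv4Addr::new(10, 0, 0, 0), 8);
    policy.allow_ip(Ipv4Addr::new(127, 0, 0, 1));
    assert!(policy.check(Ipv4Addr::new(10, 42, 7, 9)));   // inside 10.0.0.0/8
    assert!(policy.check(Ipv4Addr::new(127, 0, 0, 1)));   // exact match
    assert!(!policy.check(Ipv4Addr::new(192, 168, 1, 5))); // unlisted: denied
}
```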

8. Ingest-time verification

edgesentry_rs::ingest is responsible for completing trust checks before persistence.

The full check order when ingesting a record is:

  1. Network gate — NetworkPolicy::check(source_ip) denies unlisted sources before any crypto runs
  2. Payload hash — IngestService verifies the raw payload matches record.payload_hash
  3. Route identity — cert_identity must match record.device_id when present
  4. Signature — payload hash must be signed by the registered device key
  5. Sequence — must be strictly monotonic and non-duplicate per device
  6. Previous-record hash — must chain from the last accepted record’s hash

Steps 3–6 are enforced by IntegrityPolicyGate; step 2 by IngestService before invoking the gate.
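The ordering above can be sketched as a short-circuiting pipeline: each gate runs only if every earlier gate passed, so a request rejected at the network gate never pays for signature verification. The closures below are placeholders, not the real checks.

```rust
/// Run ordered checks; stop at the first failure and report which
/// step rejected the request (illustrative sketch).
fn ingest(checks: &[(&str, fn() -> bool)]) -> Result<(), String> {
    for (name, check) in checks {
        if !check() {
            return Err(format!("rejected at step: {name}"));
        }
    }
    Ok(()) // all gates passed; safe to persist
}

fn main() {
    let ordered: Vec<(&str, fn() -> bool)> = vec![
        ("network gate", || true),
        ("payload hash", || true),
        ("route identity", || true),
        ("signature", || false), // simulate a bad signature
        ("sequence", || true),
        ("previous-record hash", || true),
    ];
    // Rejection names the first failing step; later steps never ran.
    assert_eq!(ingest(&ordered), Err("rejected at step: signature".to_string()));
}
```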

9. Storage model

On accepted ingest, the system stores:

  • Raw data (payload body)
  • Audit ledger (audit record stream)
  • Operation log (accept/reject decisions)

This separation keeps evidence metadata and payload storage independently manageable.

10. Demo modes

10.1 Library example (no DB/MinIO required)

  • Run: cargo run -p edgesentry-rs --example lift_inspection_flow
  • Uses in-memory stores
  • Fast path to verify signing, ingest verification, and tamper rejection

10.2 Interactive local demo (DB/MinIO required)

  • Run: bash scripts/local_demo.sh
  • End-to-end flow with PostgreSQL + MinIO + CLI
  • Shows persisted audit records and operation logs

11. Trust boundary

  • Device side: signs facts and emits compact audit metadata
  • Cloud side: enforces strict verification rules before accepting data

This split keeps edge and cloud responsibilities clear and auditable.

12. Quality and release concepts

  • Static analysis: clippy
  • OSS license policy validation: cargo-deny
  • Advisory scanning: cargo-audit (CVE checks against RustSec advisory DB)
  • Release readiness: CI + release workflows
  • Tag-driven release: vX.Y.Z

See Contributing and Build and Release for executable procedures.

13. STRIDE threat model

SS 711:2025 and the IMDA IoT Cyber Security Guide require recorded STRIDE-based threat model artifacts for CLS Level 3 assessment. The six threat categories map to EdgeSentry-RS attack surfaces as follows:

| Threat | Attack surface | Mitigation |
|---|---|---|
| Spoofing | Device identity | Ed25519 signature — only the registered public key can verify a record |
| Tampering | Audit records, payload storage | BLAKE3 hash chain — any modification breaks chain continuity |
| Repudiation | Ingest decisions | OperationLog records every accept/reject decision with reason |
| Information Disclosure | Raw payload storage | object_ref separation keeps the payload body out of the audit metadata stream |
| Denial of Service | Ingest endpoint | NetworkPolicy deny-by-default rejects unlisted sources before any crypto runs |
| Elevation of Privilege | Ingest gate | IntegrityPolicyGate verifies device registration and signature before accepting data |

Producing the formal design artifact for CLS Level 3 assessment is tracked in #93.

14. SBOM (Software Bill of Materials)

A Software Bill of Materials lists all software components and their versions used in a product. The IMDA IoT Cyber Security Guide requires SBOM availability as part of the lifecycle support category in the vendor disclosure checklist — a mandatory CLS Level 3 evidence artifact.

For Rust projects, an SBOM is generated from Cargo.lock using tools such as cargo-sbom or cargo-cyclonedx, producing a machine-readable inventory of all crates and their transitive dependencies.

Generating and publishing the SBOM alongside the vendor disclosure checklist is tracked in #92.

Architecture

Device Side vs Cloud Side

This system assumes a public-infrastructure IoT deployment where field devices (for example, lift inspection devices) send inspection evidence to cloud services.

Device side (resource-constrained edge)

The device-side responsibility is implemented by edgesentry_rs::build_signed_record and related functions.

  • Generate inspection event payloads (door check, vibration check, emergency brake check)
  • Compute payload_hash (BLAKE3)
  • Sign the hash using an Ed25519 private key
  • Link each event to the previous record hash (prev_record_hash) so records form a chain
  • Send only compact audit metadata plus object reference (object_ref) to keep edge-side cost low

Cloud side (verification and trust enforcement)

The cloud-side responsibility is implemented by edgesentry_rs::ingest and related modules.

  • Gate incoming connections to approved IP addresses and CIDR ranges (NetworkPolicy::check) — deny-by-default
  • Verify that the device is known (device_id -> public key)
  • Verify signature validity for each incoming record
  • Enforce sequence monotonicity and reject duplicates
  • Enforce hash-chain continuity (prev_record_hash must match previous record hash)
  • Reject tampered, replayed, or reordered data before persistence

Shared trust logic

All hashing and verification rules live in the same edgesentry-rs crate, keeping logic identical across edge and cloud usage.

Resource-Constrained Device Design

The device-side design is intentionally lightweight so it can be adapted to Cortex-M class environments.

  • Small cryptographic footprint: records store fixed-size hashes ([u8; 32]) and signatures ([u8; 64])
  • Minimal compute path: hash and sign only; no heavy server-side validation logic on device
  • Compact wire format readiness: record structure is deterministic and serializable (serde + postcard support in core)
  • Offload heavy work to cloud: duplicate detection, sequence policy checks, and full-chain verification are cloud concerns
  • Tamper-evident by construction: a one-byte modification breaks signature checks or chain continuity
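To make the footprint concrete, here is an illustrative #[repr(C)] layout using the fixed sizes above. This sketch is not the crate's actual AuditRecord, which also carries device_id and object_ref as variable-length fields.

```rust
/// Illustrative fixed-size, heap-free record layout for a
/// Cortex-M class device (hypothetical struct, not the crate's type).
#[repr(C)]
struct CompactRecord {
    sequence: u64,
    timestamp_ms: u64,
    payload_hash: [u8; 32],      // BLAKE3 digest size
    prev_record_hash: [u8; 32],  // chain link
    signature: [u8; 64],         // Ed25519 signature size
}

fn main() {
    // Every field is fixed-size: no allocation, deterministic wire cost.
    println!("record size: {} bytes", std::mem::size_of::<CompactRecord>());
    assert_eq!(std::mem::size_of::<CompactRecord>(), 8 + 8 + 32 + 32 + 64);
}
```

A fixed layout like this is what makes the deterministic serialization (serde + postcard) mentioned above cheap on constrained targets.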

Concrete Design Flow

  1. Device creates event payload D.
  2. Device computes H = hash(D) and signs H → signature S.
  3. Device emits AuditRecord { device_id, sequence, timestamp_ms, payload_hash=H, signature=S, prev_record_hash, object_ref }.
  4. Cloud verifies signature with registered public key.
  5. Cloud verifies sequence and previous-hash link.
  6. If any check fails, ingest is rejected; otherwise the record is accepted.

In short, the edge signs facts, and the cloud enforces continuity and authenticity.

Notarization Metadata Schema

For AI inference results to serve as legally admissible evidence (BCA/CONQUAS inspection reports, MPA ship certificates, MLIT near-visual-inspection equivalence), the audit record payload must capture five categories of provenance metadata in addition to the cryptographic chain. This is the target schema for the notarization connector.

| Category | Fields | Purpose |
|---|---|---|
| Sensor | sensor_id, calibration_ts, firmware_version, sampling_rate | Prove the measuring instrument was calibrated and operating within spec at capture time |
| AI model | model_uuid, model_arch, weight_sha256, prompt_version | Enable third-party reproduction of the same inference output from the same input (AI Verify Outcome 3.1 / 3.5) |
| Compute environment | device_type, os_version, dependency_hashes, hw_temp_c | Full runtime reproducibility; hardware temperature flags thermal throttling that could affect inference timing |
| Context | ntp_ts, gps_lat_lon (or indoor position), input_data_hash | Bind the record to a specific physical location and moment; input_data_hash prevents payload substitution |
| Inference process | confidence_score, preprocessing_algo, guardrail_actions | Support human-in-the-loop triage (AI Verify Outcome 4.5); low-confidence records can be routed for manual review |

These fields are stored in the payload object alongside the domain-specific detection data. The payload_hash in AuditRecord covers the entire payload, so any metadata field change invalidates the signature.

ALCOA+ alignment: The five categories map directly to the ALCOA+ data integrity framework required for regulatory submissions — Attributable (sensor/model identity), Legible (structured JSON), Contemporaneous (ntp_ts), Original (input_data_hash), Accurate (weight_sha256, calibration_ts), plus Complete, Consistent, Enduring, and Available (covered by the WORM storage connector).

Ingest Service: Sync and Async Paths

edgesentry-rs provides two orchestration service types for cloud-side ingest, selectable by feature flag:

| Type | Feature flag | Thread model | Suitable for |
|---|---|---|---|
| IngestService | (always available) | Blocking / sync | CLI tools, embedded runtimes |
| AsyncIngestService | async-ingest | async/await (tokio) | HTTP servers, async pipelines |

Sync path (IngestService)

The synchronous service is the default and requires no additional features. S3 writes (when s3 feature is active) are performed by block_on-ing inside an embedded tokio::runtime::Runtime. This is appropriate for single-threaded tools and embedded environments.

let mut svc = IngestService::new(policy, raw_store, ledger, op_log);
svc.register_device("lift-01", verifying_key);
svc.ingest(record, payload, None)?;

Async path (AsyncIngestService)

Enable with features = ["async-ingest"]. All storage calls use .await so the calling thread is never blocked, enabling high-concurrency pipelines. The policy gate is wrapped in a tokio::sync::Mutex so the service can be shared across tasks via Arc.

let svc = Arc::new(AsyncIngestService::new(policy, raw_store, ledger, op_log));
svc.register_device("lift-01", verifying_key).await;
svc.ingest(record, payload, None).await?;

When s3 and async-ingest are both active, S3CompatibleRawDataStore implements AsyncRawDataStore by calling the AWS SDK future directly — no embedded runtime needed.

Feature flag summary

| Flag | What it adds |
|---|---|
| async-ingest | AsyncRawDataStore, AsyncAuditLedger, AsyncOperationLogStore traits; AsyncIngestService; in-memory async stores; tokio (sync + macros) |
| s3 | S3CompatibleRawDataStore (sync); when combined with async-ingest, also implements AsyncRawDataStore |
| postgres | PostgresAuditLedger, PostgresOperationLog (sync) |
| transport-http | transport::http::serve() — axum-based POST /api/v1/ingest server; eds serve CLI subcommand |
| transport-mqtt | transport::mqtt::serve_mqtt() — async rumqttc event loop; subscribes to a topic, routes records through AsyncIngestService, publishes accept/reject responses |

Transport Layer

The transport module provides network-facing ingest endpoints built on top of AsyncIngestService.

HTTP (transport-http feature)

Enable with features = ["transport-http"]. This brings in axum 0.8 and exposes a single POST /api/v1/ingest endpoint.

Request / Response

| Field | Type | Description |
|---|---|---|
| record | AuditRecord (JSON) | The signed audit record from the device |
| raw_payload_hex | String | Hex-encoded raw payload bytes |

| Status | Meaning |
|---|---|
| 202 Accepted | Record passed all checks and was stored |
| 400 Bad Request | raw_payload_hex is not valid hex |
| 403 Forbidden | Client IP is not in the NetworkPolicy allowlist |
| 422 Unprocessable Entity | Record failed signature, hash, or chain verification |

Usage

use edgesentry_rs::{
    AsyncIngestService, AsyncInMemoryRawDataStore, AsyncInMemoryAuditLedger,
    AsyncInMemoryOperationLog, IntegrityPolicyGate, NetworkPolicy,
};
use edgesentry_rs::transport::http::serve;

let mut policy = IntegrityPolicyGate::new();
policy.register_device("lift-01", verifying_key);

let mut network_policy = NetworkPolicy::new();
network_policy.allow_cidr("10.0.0.0/8").unwrap();

let service = AsyncIngestService::new(
    policy,
    AsyncInMemoryRawDataStore::default(),
    AsyncInMemoryAuditLedger::default(),
    AsyncInMemoryOperationLog::default(),
);

let addr = "0.0.0.0:8080".parse().unwrap();
serve(service, network_policy, addr).await?;

CLI

eds serve \
  --addr 0.0.0.0:8080 \
  --allowed-sources 10.0.0.0/8,127.0.0.1 \
  --device lift-01=<pubkey_hex>

MQTT (transport-mqtt feature)

Enable with features = ["transport-mqtt"]. This brings in rumqttc and exposes serve_mqtt() — a fully async event loop that connects to an MQTT broker, subscribes to a configurable ingest topic, and routes every incoming message through AsyncIngestService.

The message format is the same JSON envelope used by the HTTP transport:

{ "record": { "device_id": "...", "sequence": 1, ... }, "raw_payload_hex": "deadbeef..." }

Accept / reject outcomes are published on <topic>/response:

{ "device_id": "...", "sequence": 1, "status": "accepted" }
{ "device_id": "...", "sequence": 1, "status": "rejected", "error": "..." }

Usage

use edgesentry_rs::transport::mqtt::{MqttIngestConfig, serve_mqtt};
use edgesentry_rs::{
    AsyncIngestService, AsyncInMemoryRawDataStore, AsyncInMemoryAuditLedger,
    AsyncInMemoryOperationLog, IntegrityPolicyGate,
};

let service = AsyncIngestService::new(
    IntegrityPolicyGate::new(),
    AsyncInMemoryRawDataStore::default(),
    AsyncInMemoryAuditLedger::default(),
    AsyncInMemoryOperationLog::default(),
);

let config = MqttIngestConfig::new("mqtt.example.com", "devices/+/ingest", "edgesentry-cloud");
serve_mqtt(config, service).await?;

serve_mqtt runs until the broker connection is lost, returning MqttServeError::EventLoop. Wrap the call in a retry loop for automatic reconnection.
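A generic retry wrapper with exponential backoff can serve that purpose. The connect_and_serve closure below is a stand-in for the serve_mqtt call, not the real signature.

```rust
use std::thread::sleep;
use std::time::Duration;

/// Retry a connection loop that returns Err when the link drops;
/// back off exponentially between attempts (illustrative sketch).
fn run_with_retry<F>(mut connect_and_serve: F, max_attempts: u32) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut delay = Duration::from_millis(10);
    for attempt in 1..=max_attempts {
        match connect_and_serve() {
            Ok(()) => return Ok(()),
            Err(e) if attempt == max_attempts => return Err(e), // give up
            Err(_) => {
                sleep(delay); // back off before reconnecting
                delay = delay.saturating_mul(2); // exponential backoff
            }
        }
    }
    Err("max_attempts must be at least 1".to_string())
}

fn main() {
    // Simulate a broker that drops the connection twice, then stays up.
    let mut attempts = 0;
    let result = run_with_retry(
        || {
            attempts += 1;
            if attempts < 3 { Err("event loop error".into()) } else { Ok(()) }
        },
        5,
    );
    assert_eq!(result, Ok(()));
    assert_eq!(attempts, 3);
}
```

In a real deployment the closure would invoke serve_mqtt, and the backoff would be capped and use async sleeping rather than blocking the thread.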

Key behaviors

| Behavior | Detail |
|---|---|
| Malformed JSON | Message is logged and discarded; event loop continues |
| Invalid hex payload | Message is logged and discarded; event loop continues |
| Ingest rejection | Response published on &lt;topic&gt;/response with "status": "rejected" |
| Response publish failure | Logged as a warning; does not stop the event loop |

Library Usage Example

Run the end-to-end lift inspection example implemented directly with library APIs:

Prerequisites:

  • Rust toolchain (cargo)
  • PostgreSQL / MinIO are not required for this example (it uses in-memory stores)
cargo run -p edgesentry-rs --example lift_inspection_flow

Scenario covered by the sample:

  1. Register one lift device public key in IntegrityPolicyGate
  2. Generate three signed inspection records with build_signed_record
  3. Ingest all records via IngestService (accepted path)
  4. Tamper one record (payload_hash) and confirm rejection
  5. Print stored audit records and operation logs

What it demonstrates:

  • Record signing with edgesentry_rs::build_signed_record
  • Ingestion verification with edgesentry_rs::ingest::IngestService
  • Tampering rejection (modified payload_hash)
  • Audit records and operation-log output

Source:

  • crates/edgesentry-rs/examples/lift_inspection_flow.rs

Three-Role Distributed Demo

For a more realistic view of the edge-to-cloud flow, three separate examples can be run in sequence. Each example owns exactly one role:

| Example | Role | External deps |
|---|---|---|
| edge_device | Signs records, writes /tmp/eds_*.json | None |
| edge_gateway | Routes records, no crypto verification | None |
| cloud_backend | NetworkPolicy + IngestService + storage | None (in-memory) or PostgreSQL + MinIO (--features s3,postgres) |

Run in order:

cargo run -p edgesentry-rs --example edge_device
cargo run -p edgesentry-rs --example edge_gateway
cargo run -p edgesentry-rs --example cloud_backend

Each example reads the output files of the previous one from /tmp/. The full sequence with real backends (requires Docker — see Interactive Demo):

cargo run -p edgesentry-rs --example edge_device
cargo run -p edgesentry-rs --example edge_gateway
cargo run -p edgesentry-rs --features s3,postgres --example cloud_backend

What the sequence demonstrates:

  • edge_device — device-side signing with build_signed_record; tampered copy written for rejection demo
  • edge_gateway — gateway receives records but does NOT verify signatures (routing-only responsibility)
  • cloud_backend — NetworkPolicy::check runs before every IngestService::ingest; accepted and rejected records both visible

Sources:

  • crates/edgesentry-rs/examples/edge_device.rs
  • crates/edgesentry-rs/examples/edge_gateway.rs
  • crates/edgesentry-rs/examples/cloud_backend.rs

S3 / MinIO Switching

edgesentry-rs supports a switchable S3-compatible raw-data backend behind the s3 feature.

  • S3Backend::AwsS3: use AWS S3 (default AWS credential chain, or optional static key)
  • S3Backend::Minio: use MinIO (custom endpoint + static access key/secret)

The ingest layer is coded against a common raw-data storage abstraction, while concrete configuration selects AWS S3 or MinIO without changing ingest business logic.

Use these types from edgesentry_rs:

  • S3ObjectStoreConfig::for_aws_s3(...)
  • S3ObjectStoreConfig::for_minio(...)
  • S3CompatibleRawDataStore::new(config)

Build and test with the S3 feature enabled:

cargo test -p edgesentry-rs --features s3

To run the S3 integration tests against a live MinIO instance, set the environment variables and run the dedicated test file:

TEST_S3_ENDPOINT=http://localhost:9000 \
TEST_S3_ACCESS_KEY=minioadmin \
TEST_S3_SECRET_KEY=minioadmin \
TEST_S3_BUCKET=bucket \
cargo test -p edgesentry-rs --features s3 --test integration -- --nocapture

Tests skip automatically when any of the four TEST_S3_* variables are unset.

Interactive Local Demo

Note: unlike the library-only example, this demo requires PostgreSQL and MinIO.

Three-role model

EdgeSentry-RS is designed around three distinct roles. Understanding which role each step belongs to is key to reading the demo output correctly.

| Role | Responsibility | In this demo |
|---|---|---|
| Edge device | Signs inspection records with an Ed25519 private key and emits them toward the cloud | examples/edge_device.rs |
| Edge gateway | Forwards signed records from the device to the cloud over HTTPS / MQTT; does not verify content | examples/edge_gateway.rs — HTTP transport is out of scope; files on disk simulate the transport |
| Cloud backend | Enforces NetworkPolicy (CLS-06), runs IntegrityPolicyGate (route identity → signature → sequence → hash-chain), and persists accepted records | examples/cloud_backend.rs with --features s3,postgres |

What this demo does

The script starts Docker services and then runs the three role examples in sequence:

| Step | Role | What happens |
| --- | --- | --- |
| 1–3 | Infrastructure | Start PostgreSQL + MinIO via Docker Compose; wait for health checks |
| 4 | Edge device | edge_device — sign 3 records, write /tmp/eds_*.json |
| 5 | Edge gateway | edge_gateway — read device output, forward unchanged to /tmp/eds_fwd_*.json |
| 6 | Cloud backend | cloud_backend — NetworkPolicy check → IngestService → PostgreSQL + MinIO; also shows tamper rejection |
| 7 | Cloud backend | Query persisted audit records and operation log from PostgreSQL |
| 8 | Infrastructure | Stop Docker services |

Prerequisites:

  • Docker / Docker Compose
  • Rust toolchain (cargo)

Run end-to-end demo:

bash scripts/local_demo.sh

The script pauses after each step and waits for Enter (or OK) before proceeding. At the end of the flow, it runs a shutdown step (docker compose -f docker-compose.local.yml down).

Running individual role examples

Each example can also be run standalone without Docker (using in-memory storage for the cloud backend):

# Step 1: edge device signs records
cargo run -p edgesentry-rs --example edge_device

# Step 2: edge gateway forwards records
cargo run -p edgesentry-rs --example edge_gateway

# Step 3a: cloud backend (in-memory — no Docker required)
cargo run -p edgesentry-rs --example cloud_backend

# Step 3b: cloud backend (PostgreSQL + MinIO — requires Docker)
cargo run -p edgesentry-rs --features s3,postgres --example cloud_backend

Each example reads the output files of the previous one from /tmp/. Run them in order.

Manual inspection

Connect to PostgreSQL after step 6:

docker exec -it edgesentry-rs-postgres psql -U trace -d trace_audit

Inside psql:

SELECT id, device_id, sequence, object_ref, ingested_at FROM audit_records ORDER BY sequence;
SELECT id, decision, device_id, sequence, message, created_at FROM operation_logs ORDER BY id;

MinIO endpoints:

  • API: http://localhost:9000
  • Console: http://localhost:9001
  • Default credentials: minioadmin / minioadmin
  • Bucket created by setup container: bucket

Manually stop the local backend (only if you abort the script midway):

docker compose -f docker-compose.local.yml down

Next steps

Ready to move beyond the local demo? See the Production Deployment Guide for TLS certificate management, PostgreSQL tuning, S3/MinIO lifecycle rules, systemd service units, and horizontal scaling.

Production Deployment Guide

This guide covers moving from the local Docker Compose demo to a production-grade deployment of eds serve (HTTP/TLS) and eds serve-mqtt. For the local quickstart, see Interactive Demo. For observability, alerting, and backup/restore procedures, see Operations Runbook.


Prerequisites

| Component | Minimum version | Notes |
| --- | --- | --- |
| edgesentry-rs binary | current main | Built with --features transport-http,transport-tls for HTTPS; add transport-mqtt for MQTT |
| PostgreSQL | 14 | Audit ledger and operation log |
| S3-compatible store | — | AWS S3, MinIO ≥ RELEASE.2023, or Cloudflare R2 |
| (Optional) MQTT broker | Mosquitto ≥ 2.0 | Required only for eds serve-mqtt |

1 — TLS Certificate Management

1.1 Issuing a certificate (Let's Encrypt / certbot)

# Install certbot
apt install certbot

# Issue a certificate for the ingest endpoint
certbot certonly --standalone \
  -d ingest.example.com \
  --agree-tos --non-interactive \
  -m ops@example.com

# Certificates are written to:
#   /etc/letsencrypt/live/ingest.example.com/fullchain.pem  (cert + chain)
#   /etc/letsencrypt/live/ingest.example.com/privkey.pem    (private key)

1.2 Starting eds serve-tls with TLS

eds serve-tls \
  --addr 0.0.0.0:8443 \
  --tls-cert /etc/letsencrypt/live/ingest.example.com/fullchain.pem \
  --tls-key  /etc/letsencrypt/live/ingest.example.com/privkey.pem \
  --allowed-sources 10.0.0.0/8 \
  --device lift-01=<PUBLIC_KEY_HEX>

eds serve-tls enforces TLS 1.2 minimum and TLS 1.3 preferred via rustls. No extra configuration is needed.

1.3 Certificate rotation (zero-downtime)

eds serve-tls reads the certificate files at startup only. For rotation without downtime:

# 1. Renew the certificate
certbot renew --quiet

# 2. Restart the process so it re-reads the certificate files
systemctl restart edgesentry
# — or, without systemd —
kill -TERM $(pidof eds)
# Process exits cleanly; supervisor / systemd restarts it and picks up the new cert

Add a cron/systemd timer to automate renewal:

# /etc/systemd/system/certbot.timer
[Timer]
OnCalendar=weekly
Persistent=true

[Install]
WantedBy=timers.target

systemctl enable --now certbot.timer

1.4 Self-signed certificates (internal / air-gapped deployments)

# Generate a 10-year self-signed certificate
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 \
  -nodes -keyout server.key -out server.crt \
  -subj "/CN=ingest.internal" \
  -addext "subjectAltName=IP:10.0.1.5,DNS:ingest.internal"

Distribute server.crt to all edge devices as the trusted CA.


2 — PostgreSQL: Schema, Indexes, and Connection Sizing

2.1 Schema migration

The schema is in db/init/001_schema.sql. Apply it against your production database:

psql "$DATABASE_URL" -f db/init/001_schema.sql

The schema is idempotent (CREATE TABLE IF NOT EXISTS) and safe to re-run.

2.2 Indexes

The base schema ships with a UNIQUE (device_id, sequence) constraint, which doubles as a B-tree index and rejects replay attacks at the database level. Add the following indexes for common query patterns:

-- Fast lookup of the latest record per device (chain-head queries)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_audit_device_seq
    ON audit_records (device_id, sequence DESC);

-- Time-range queries for compliance reporting
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_audit_ingested_at
    ON audit_records (ingested_at);

-- Operation log filtering by decision type
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_oplog_decision_device
    ON operation_logs (decision, device_id, created_at DESC);

CONCURRENTLY means these can be created without locking the table in production.

2.3 Connection pool sizing

PostgresAuditLedger and PostgresOperationLog each open one synchronous connection via the postgres crate. For multi-node deployments (see §5) each eds process holds two connections. Set max_connections in postgresql.conf to accommodate:

max_connections = 2 × <number of eds instances> + 10   # headroom for psql, monitoring

For high ingest rates (> 500 records/s), replace the sync backends with an async connection pool (e.g. sqlx + PgPool) as a custom AsyncAuditLedger implementation.
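The rule above is simple arithmetic; a throwaway helper (hypothetical, not part of the crate) makes the sizing explicit:

```rust
/// Hypothetical helper mirroring the sizing rule above: each eds
/// process holds two synchronous connections (audit ledger +
/// operation log), plus fixed headroom for psql and monitoring.
fn required_max_connections(eds_instances: u32, headroom: u32) -> u32 {
    2 * eds_instances + headroom
}

fn main() {
    // Three ingest nodes (the §5 topology) with 10 connections of headroom.
    println!("max_connections = {}", required_max_connections(3, 10)); // 16
}
```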

2.4 Partitioning for long-term retention

Partition audit_records by ingested_at when the table is expected to exceed 100 M rows:

-- Convert to range-partitioned table (run once, before data accumulates)
CREATE TABLE audit_records_new (LIKE audit_records INCLUDING ALL)
    PARTITION BY RANGE (ingested_at);

CREATE TABLE audit_records_2026_q1
    PARTITION OF audit_records_new
    FOR VALUES FROM ('2026-01-01') TO ('2026-04-01');

-- Swap in the partitioned table, then drop the old one
ALTER TABLE audit_records RENAME TO audit_records_old;
ALTER TABLE audit_records_new RENAME TO audit_records;
DROP TABLE audit_records_old;

3 — Object Storage: Bucket Policy and Lifecycle Rules

3.1 AWS S3 — bucket policy (least privilege)

Create a dedicated IAM role for the ingest service with write-only access:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "IngestWriteOnly",
      "Effect": "Allow",
      "Action": ["s3:PutObject"],
      "Resource": "arn:aws:s3:::edgesentry-audit/*"
    },
    {
      "Sid": "ListBucket",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::edgesentry-audit"
    }
  ]
}

Attach a separate read-only role to compliance auditors.

3.2 Lifecycle rules (retention + cost management)

{
  "Rules": [
    {
      "Id": "TransitionToIA",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Transitions": [
        { "Days": 90,  "StorageClass": "STANDARD_IA" },
        { "Days": 365, "StorageClass": "GLACIER_IR" }
      ]
    },
    {
      "Id": "ExpireOldObjects",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "Expiration": { "Days": 2555 }
    }
  ]
}

Apply via CLI:

aws s3api put-bucket-lifecycle-configuration \
  --bucket edgesentry-audit \
  --lifecycle-configuration file://lifecycle.json

3.3 MinIO (on-premises)

# Create bucket with object locking (immutability for compliance)
mc mb --with-lock minio/edgesentry-audit

# Set lifecycle: expire objects after 3 years (1095 days)
mc ilm import minio/edgesentry-audit <<EOF
{
  "Rules": [{
    "ID": "expire-3-years",
    "Status": "Enabled",
    "Expiration": { "Days": 1095 }
  }]
}
EOF

# Server-side encryption at rest
mc encrypt set sse-s3 minio/edgesentry-audit

4 — Process Management

4.1 systemd service unit (HTTP + TLS)

# /etc/systemd/system/edgesentry.service
[Unit]
Description=EdgeSentry-RS ingest server
After=network-online.target postgresql.service
Wants=network-online.target

[Service]
Type=exec
User=edgesentry
Group=edgesentry
ExecStart=/usr/local/bin/eds serve-tls \
    --addr 0.0.0.0:8443 \
    --tls-cert /etc/edgesentry/server.crt \
    --tls-key  /etc/edgesentry/server.key \
    --allowed-sources 10.0.0.0/8 \
    --device lift-01=<PUBLIC_KEY_HEX>
Restart=on-failure
RestartSec=5
Environment=RUST_LOG=edgesentry_rs=info

# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/log/edgesentry
PrivateTmp=true
CapabilityBoundingSet=

[Install]
WantedBy=multi-user.target

# Install and start
install -m 755 target/release/eds /usr/local/bin/eds
useradd --system --no-create-home edgesentry
mkdir -p /var/log/edgesentry && chown edgesentry:edgesentry /var/log/edgesentry

systemctl daemon-reload
systemctl enable --now edgesentry
systemctl status edgesentry

4.2 systemd service unit (MQTT)

# /etc/systemd/system/edgesentry-mqtt.service
[Unit]
Description=EdgeSentry-RS MQTT ingest subscriber
After=network-online.target mosquitto.service
Wants=network-online.target

[Service]
Type=exec
User=edgesentry
Group=edgesentry
ExecStart=/usr/local/bin/eds serve-mqtt \
    --broker 10.0.1.10 \
    --port 1883 \
    --topic edgesentry/ingest \
    --client-id eds-prod-1 \
    --device lift-01=<PUBLIC_KEY_HEX>
Restart=on-failure
RestartSec=10
Environment=RUST_LOG=edgesentry_rs=info

[Install]
WantedBy=multi-user.target

4.3 Health check

eds serve does not expose a /health endpoint itself — wire a TCP check in your load balancer or monitoring agent:

# Confirm the TLS port is accepting connections
openssl s_client -connect ingest.example.com:8443 -verify_return_error </dev/null
echo $?   # 0 = healthy

For Kubernetes, use a tcpSocket liveness probe:

livenessProbe:
  tcpSocket:
    port: 8443
  initialDelaySeconds: 5
  periodSeconds: 15

5 — Horizontal Scaling

5.1 Architecture

                      ┌─────────────────┐
Edge devices  ──TLS──►│  Load balancer  │
                      │  (e.g. nginx /  │
                      │   AWS ALB)      │
                      └────────┬────────┘
                               │  Round-robin
                ┌──────────────┼──────────────┐
                ▼              ▼              ▼
         ┌────────────┐ ┌────────────┐ ┌────────────┐
         │  eds serve │ │  eds serve │ │  eds serve │
         │  node 1    │ │  node 2    │ │  node 3    │
         └──────┬─────┘ └──────┬─────┘ └──────┬─────┘
                └──────────────┼──────────────┘
                               │
                ┌──────────────┼──────────────┐
                ▼              ▼              ▼
         ┌─────────┐    ┌──────────┐   ┌─────────┐
         │Postgres │    │  S3 /    │   │ MinIO   │
         │(primary)│    │  bucket  │   │ cluster │
         └─────────┘    └──────────┘   └─────────┘

5.2 Key properties

  • IngestState is per-process. Each eds serve node maintains its own in-memory sequence/hash-chain state. The UNIQUE (device_id, sequence) constraint in PostgreSQL is the cross-node replay fence — a duplicate insert raises a unique-violation error that PostgresAuditLedger surfaces as a store error, causing the ingest to be rejected and logged.
  • No sticky sessions required. Sequence enforcement happens at the DB level; any node can handle any device’s request.
  • S3/MinIO writes are stateless. All nodes write to the same bucket; object keys are derived from object_ref, which is set by the edge device and globally unique by convention (e.g. <device_id>/<sequence>.bin).
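The cross-node replay fence can be sketched with a std-only stand-in, where a HashSet plays the role of the UNIQUE (device_id, sequence) constraint (`Ledger` here is illustrative, not the crate's PostgresAuditLedger):

```rust
use std::collections::HashSet;

/// Illustrative stand-in for the UNIQUE (device_id, sequence)
/// constraint: PostgreSQL rejects a duplicate pair no matter which
/// eds node attempts the insert.
struct Ledger {
    seen: HashSet<(String, u64)>,
}

impl Ledger {
    fn new() -> Self {
        Self { seen: HashSet::new() }
    }

    /// Returns Err on a replayed (device_id, sequence) pair,
    /// mirroring the unique-violation surfaced as a store error.
    fn insert(&mut self, device_id: &str, sequence: u64) -> Result<(), String> {
        if !self.seen.insert((device_id.to_string(), sequence)) {
            return Err(format!("duplicate ({device_id}, {sequence}): replay rejected"));
        }
        Ok(())
    }
}

fn main() {
    let mut ledger = Ledger::new();
    assert!(ledger.insert("lift-01", 1).is_ok()); // node 1 ingests
    assert!(ledger.insert("lift-01", 2).is_ok()); // node 2 ingests
    assert!(ledger.insert("lift-01", 1).is_err()); // node 3 replays -> rejected
}
```

Because the fence lives in the shared database rather than in any node's memory, no coordination between nodes is needed.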

5.3 nginx TLS termination + upstream proxy

upstream edgesentry_nodes {
    least_conn;
    server 10.0.1.11:8080;
    server 10.0.1.12:8080;
    server 10.0.1.13:8080;
}

server {
    listen 443 ssl;
    server_name ingest.example.com;

    ssl_certificate     /etc/letsencrypt/live/ingest.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/ingest.example.com/privkey.pem;
    ssl_protocols       TLSv1.2 TLSv1.3;
    ssl_ciphers         HIGH:!aNULL:!MD5;

    location /api/v1/ingest {
        proxy_pass         http://edgesentry_nodes;
        proxy_set_header   X-Forwarded-For $remote_addr;
        proxy_read_timeout 10s;
    }
}

Run eds serve on each node (plain HTTP on a private port) and let nginx handle TLS termination. Pass --allowed-sources with the nginx upstream IP range. Use eds serve-tls instead if you prefer built-in TLS without a reverse proxy.

Note: When TLS is terminated at the load balancer, eds serve sees the LB’s IP rather than the device’s IP. Set --allowed-sources to the LB’s internal address range, and rely on the LB’s own allowlist for per-device source control.
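The shape of an --allowed-sources style check can be sketched with std-only IPv4 CIDR matching (an illustration of the concept, not the crate's actual implementation):

```rust
use std::net::Ipv4Addr;

/// Return Some(true) when `ip` falls inside the `a.b.c.d/len` CIDR
/// block, Some(false) when it does not, None on a malformed CIDR.
fn in_cidr(ip: Ipv4Addr, cidr: &str) -> Option<bool> {
    let (net, len) = cidr.split_once('/')?;
    let net: Ipv4Addr = net.parse().ok()?;
    let len: u32 = len.parse().ok()?;
    if len > 32 {
        return None;
    }
    // /0 matches everything; otherwise keep the top `len` bits.
    let mask = if len == 0 { 0 } else { u32::MAX << (32 - len) };
    Some(u32::from(ip) & mask == u32::from(net) & mask)
}

fn main() {
    let lb: Ipv4Addr = "10.0.1.11".parse().unwrap();
    let external: Ipv4Addr = "203.0.113.7".parse().unwrap();
    assert_eq!(in_cidr(lb, "10.0.0.0/8"), Some(true)); // LB range allowed
    assert_eq!(in_cidr(external, "10.0.0.0/8"), Some(false)); // outside range
}
```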

5.4 PostgreSQL read replica for reporting

Write path (ingest): primary only. Read path (compliance queries, chain verification): direct to read replica.

# Read replica connection for compliance tooling
psql "postgres://audit_ro:pass@pg-replica:5432/audit?sslmode=require"

6 — Observability

Structured logging and tracing are handled by the tracing facade. See the Operations Runbook — Observability section for the full setup including JSON log format, structured event fields emitted by the library, Prometheus metric derivation, and OpenTelemetry span configuration.

Quick-start: JSON logs to stdout (for Loki / CloudWatch)

# Cargo.toml of your binary wrapper
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }

# Run eds with JSON logs
RUST_LOG=edgesentry_rs=info eds serve ... 2>&1 | \
  promtail --stdin --client.url http://loki:3100/loki/api/v1/push

Key log fields to alert on

| Field | Value | Alert condition |
| --- | --- | --- |
| message | "MQTT record rejected" / "record rejected" | Rejection rate > 1 % over 5 min |
| reason | "invalid signature" | Any occurrence — possible tamper attempt |
| reason | "unknown device" | Sustained — unregistered device probing |
| message | "MQTT event loop error" | Any — broker connectivity lost |

See Operations Runbook — Alert Definitions for Prometheus alerting rules.


See Also

CLI Reference

eds is the unified EdgeSentry CLI. All audit commands live under the eds audit subcommand; scan inspection commands live under eds inspect.

eds audit <command>    — tamper-evident audit record operations
eds inspect <command>  — 3D scan vs. IFC deviation and AI detection pipeline

Installation

For end users — Homebrew (macOS / Linux)

brew install edgesentry/tap/eds

For end users — pre-built binary

Download the latest release from the GitHub Releases page.

| Platform | File |
| --- | --- |
| Linux (x86-64) | eds-{version}-x86_64-unknown-linux-gnu.tar.gz |
| macOS (Apple Silicon) | eds-{version}-aarch64-apple-darwin.tar.gz |
| Windows (x86-64) | eds-{version}-x86_64-pc-windows-msvc.zip |

Extract and place the eds binary on your PATH:

# Linux / macOS
tar -xzf eds-{version}-{target}.tar.gz
sudo mv eds /usr/local/bin/
eds --help
# Windows (PowerShell)
Expand-Archive eds-{version}-x86_64-pc-windows-msvc.zip
# Move eds.exe to a directory in your PATH
eds --help

For developers — install from source

Requires Rust (stable toolchain).

cargo install --git https://github.com/edgesentry/edgesentry-rs --locked --bin eds

To include optional transport features at install time:

cargo install --git https://github.com/edgesentry/edgesentry-rs --locked --bin eds \
  --features transport-http,transport-tls

Verify the installation:

eds --version
eds --help

Device Provisioning

Generate a fresh Ed25519 keypair for a new device:

eds audit keygen

Save directly to a file:

eds audit keygen --out device-lift-01.key.json

Derive the public key from an existing private key:

eds audit inspect-key \
  --private-key-hex 0101010101010101010101010101010101010101010101010101010101010101

See Key Management for the full provisioning and rotation workflow.


CLI Usage

Show help:

eds --help
eds audit --help

Create a signed record and save it to record1.json:

eds audit sign-record \
  --device-id lift-01 \
  --sequence 1 \
  --timestamp-ms 1700000000000 \
  --payload "door-open" \
  --object-ref "s3://bucket/lift-01/1.bin" \
  --private-key-hex 0101010101010101010101010101010101010101010101010101010101010101 \
  --out record1.json

Verify one record signature:

eds audit verify-record \
  --record-file record1.json \
  --public-key-hex 8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c

Verify a whole chain from a JSON array file:

eds audit verify-chain --records-file records.json

Lift Inspection Scenario (CLI End-to-End)

This scenario simulates a remote lift inspection with three checks:

  1. Door open/close cycle check
  2. Vibration check
  3. Emergency brake response check

1) Generate a full signed chain for one inspection session

eds audit demo-lift-inspection \
  --device-id lift-01 \
  --out-file lift_inspection_records.json

Expected output:

DEMO_CREATED:lift_inspection_records.json
CHAIN_VALID

2) Verify chain integrity from file

eds audit verify-chain --records-file lift_inspection_records.json

Expected output:

CHAIN_VALID

2.1) Tamper with the chain file and confirm detection

Modify the first record hash value in-place:

python3 - <<'PY'
import json

path = "lift_inspection_records.json"
with open(path, "r", encoding="utf-8") as f:
  records = json.load(f)

records[0]["payload_hash"][0] ^= 0x01

with open(path, "w", encoding="utf-8") as f:
  json.dump(records, f, indent=2)
print("tampered", path)
PY

Run chain verification again:

eds audit verify-chain --records-file lift_inspection_records.json

Expected result: command exits with a non-zero code and prints an error such as chain verification failed: invalid previous hash ....
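The detection logic behind verify-chain can be sketched in std-only Rust. The real chain uses BLAKE3 record hashes; std's DefaultHasher stands in here purely to illustrate the linking check:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Toy record: the real chain links BLAKE3 hashes of full records;
/// a u64 from DefaultHasher stands in for illustration only.
struct Record {
    sequence: u64,
    payload: Vec<u8>,
    prev_record_hash: u64, // zero for the first record
}

fn record_hash(r: &Record) -> u64 {
    let mut h = DefaultHasher::new();
    r.sequence.hash(&mut h);
    r.payload.hash(&mut h);
    r.prev_record_hash.hash(&mut h);
    h.finish()
}

/// Walk the chain: each record's prev_record_hash must equal the
/// hash of the record before it, so editing any record breaks every
/// later link.
fn verify_chain(records: &[Record]) -> bool {
    let mut expected_prev = 0u64;
    for r in records {
        if r.prev_record_hash != expected_prev {
            return false;
        }
        expected_prev = record_hash(r);
    }
    true
}

fn main() {
    let r1 = Record { sequence: 1, payload: b"door-check".to_vec(), prev_record_hash: 0 };
    let r2 = Record { sequence: 2, payload: b"vibration".to_vec(), prev_record_hash: record_hash(&r1) };
    let mut chain = vec![r1, r2];
    assert!(verify_chain(&chain));

    chain[0].payload[0] ^= 0x01; // tamper, as in step 2.1 above
    assert!(!verify_chain(&chain)); // the next link no longer matches
}
```

This is exactly why flipping one byte in the first record makes verification of the whole file fail.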

3) Create and verify a single signed inspection event

Generate one signed event:

eds audit sign-record \
  --device-id lift-01 \
  --sequence 1 \
  --timestamp-ms 1700000000000 \
  --payload "scenario=lift-inspection,check=door,status=ok" \
  --object-ref "s3://bucket/lift-01/door-check-1.bin" \
  --private-key-hex 0101010101010101010101010101010101010101010101010101010101010101 \
  --out lift_single_record.json

Verify signature:

eds audit verify-record \
  --record-file lift_single_record.json \
  --public-key-hex 8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c

Expected output:

VALID

3.1) Tamper with a single record signature and confirm rejection

Modify one signature byte:

python3 - <<'PY'
import json

path = "lift_single_record.json"
with open(path, "r", encoding="utf-8") as f:
  record = json.load(f)

record["signature"][0] ^= 0x01

with open(path, "w", encoding="utf-8") as f:
  json.dump(record, f, indent=2)
print("tampered", path)
PY

Verify signature again:

eds audit verify-record \
  --record-file lift_single_record.json \
  --public-key-hex 8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c

Expected output:

INVALID

Server Commands

eds audit serve — HTTP ingest server

Requires the transport-http Cargo feature.

| Flag | Default | Description |
| --- | --- | --- |
| --addr | 0.0.0.0:8080 | Socket address to bind |
| --allowed-sources | 127.0.0.1 | Comma-separated CIDRs / IPs allowed to connect |
| --device ID=PUBKEY_HEX | (none) | Register a device; repeat for multiple devices |

eds audit serve \
  --addr 0.0.0.0:8080 \
  --allowed-sources 10.0.0.0/8 \
  --device lift-01=<PUBLIC_KEY_HEX>

Plain HTTP on port 8080. Use behind a TLS-terminating reverse proxy, or use eds audit serve-tls for built-in TLS.


eds audit serve-tls — HTTPS ingest server (TLS 1.2/1.3)

Requires the transport-tls Cargo feature.

| Flag | Default | Description |
| --- | --- | --- |
| --addr | 0.0.0.0:8443 | Socket address to bind |
| --allowed-sources | 127.0.0.1 | Comma-separated CIDRs / IPs allowed to connect |
| --device ID=PUBKEY_HEX | (none) | Register a device; repeat for multiple devices |
| --tls-cert | (required) | Path to PEM certificate chain (leaf first) |
| --tls-key | (required) | Path to PEM private key (PKCS #8 or PKCS #1 RSA) |

eds audit serve-tls \
  --addr 0.0.0.0:8443 \
  --allowed-sources 10.0.0.0/8 \
  --device lift-01=<PUBLIC_KEY_HEX> \
  --tls-cert /etc/edgesentry/server.crt \
  --tls-key  /etc/edgesentry/server.key

Uses rustls TLS 1.2/1.3. Network policy (IP allowlist) is enforced at TCP accept time, before the TLS handshake.


eds audit serve-mqtt — MQTT ingest subscriber

Requires the transport-mqtt Cargo feature. Optionally add transport-mqtt-tls for MQTTS.

| Flag | Default | Description |
| --- | --- | --- |
| --broker | localhost | MQTT broker host |
| --port | 1883 | MQTT broker port (use 8883 for MQTTS) |
| --topic | edgesentry/ingest | Topic to subscribe for ingest records |
| --client-id | eds-server | MQTT client identifier |
| --device ID=PUBKEY_HEX | (none) | Register a device; repeat for multiple devices |
| --tls-ca-cert | (none) | Path to PEM CA cert for MQTTS broker verification (transport-mqtt-tls only) |

# Plain MQTT (port 1883)
eds audit serve-mqtt \
  --broker broker.example.com \
  --port 1883 \
  --topic edgesentry/ingest \
  --device lift-01=<PUBLIC_KEY_HEX>

# MQTTS (port 8883, requires transport-mqtt-tls feature)
eds audit serve-mqtt \
  --broker broker.example.com \
  --port 8883 \
  --tls-ca-cert /etc/edgesentry/ca.crt \
  --device lift-01=<PUBLIC_KEY_HEX>

Responses are published on <topic>/response as JSON with status: "accepted" or status: "rejected".


Ingestion Demo (PostgreSQL + MinIO)

Requires the s3 and postgres Cargo features and a running PostgreSQL + MinIO instance (use docker compose -f docker-compose.local.yml up -d).

1) Generate a chain with payloads file

eds audit demo-lift-inspection \
  --device-id lift-01 \
  --out-file lift_inspection_records.json \
  --payloads-file lift_inspection_payloads.json

2) Ingest records through IngestService

eds audit demo-ingest \
  --records-file lift_inspection_records.json \
  --payloads-file lift_inspection_payloads.json \
  --device-id lift-01 \
  --pg-url postgresql://trace:trace@localhost:5433/trace_audit \
  --minio-endpoint http://localhost:9000 \
  --minio-bucket bucket \
  --minio-access-key minioadmin \
  --minio-secret-key minioadmin \
  --reset

--reset truncates audit_records and operation_logs before ingesting. Omit it to append to an existing run.

Pass --tampered-records-file <path> to also demonstrate rejection of a tampered chain through the same IngestService.

See Interactive Demo for the full guided walkthrough with PostgreSQL and MinIO.

Key Management

This page covers the full lifecycle of Ed25519 device keys used by EdgeSentry-RS: key generation, secure storage, public key registration, and rotation.

Relevant standards: Singapore CLS-04 / ETSI EN 303 645 §5.4 / JC-STAR STAR-1 R1.2.


1. Key Generation

Generate a fresh Ed25519 keypair with the eds CLI:

eds audit keygen

Example output:

{
  "private_key_hex": "ddca9848801c658d62a010c4d306d6430a0cdc2c383add1628859258e3acfb93",
  "public_key_hex": "4bb158f302c0ad9261c0acfa95e17144ae7249eb0973bbfaeae4501165887a77"
}

Save to a file:

eds audit keygen --out device-lift-01.key.json

Each device must have a unique keypair. Never reuse keys across devices.


2. Deriving the Public Key from an Existing Private Key

If you already have a private_key_hex and need to confirm the matching public key:

eds audit inspect-key --private-key-hex <64-hex-char-private-key>

Example:

eds audit inspect-key \
  --private-key-hex 0101010101010101010101010101010101010101010101010101010101010101

Output:

{
  "private_key_hex": "0101010101010101010101010101010101010101010101010101010101010101",
  "public_key_hex": "8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c"
}

3. Secure Private Key Storage

The private key must be kept secret on the device. Recommended practices:

| Environment | Recommended storage |
| --- | --- |
| Development / CI | Environment variable (DEVICE_PRIVATE_KEY_HEX) — never commit to version control |
| Production (software) | Encrypted secrets store (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) |
| Production (hardware) | Hardware Security Module (HSM) or Trusted Execution Environment (TEE) — see #54 for the planned HSM path |

File-based storage (development only):

chmod 600 device-lift-01.key.json

Never expose private_key_hex in logs, HTTP responses, or error messages.


4. Registering the Public Key (Cloud Side)

After generating a keypair, register the device’s public key in IntegrityPolicyGate before any records are ingested:

#![allow(unused)]
fn main() {
use edgesentry_rs::{IntegrityPolicyGate, parse_fixed_hex};
use ed25519_dalek::VerifyingKey;

let public_key_bytes = parse_fixed_hex::<32>(&public_key_hex)?;
let verifying_key = VerifyingKey::from_bytes(&public_key_bytes)?;

let mut gate = IntegrityPolicyGate::new();
gate.register_device("lift-01", verifying_key);
}

The device_id string passed to register_device must exactly match the device_id field in every AuditRecord signed by that device.

Any record from an unknown device_id is rejected with IngestError::UnknownDevice.
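The unknown-device rejection can be sketched with a std-only stand-in for the gate's registry (`Gate` here is illustrative; the real IntegrityPolicyGate stores ed25519_dalek::VerifyingKey values):

```rust
use std::collections::HashMap;

/// Illustrative stand-in for the gate's device registry: records
/// from a device_id with no registered key are rejected up front,
/// before any signature check runs. A 32-byte array stands in for
/// a real Ed25519 verifying key.
struct Gate {
    devices: HashMap<String, [u8; 32]>,
}

impl Gate {
    fn new() -> Self {
        Self { devices: HashMap::new() }
    }

    fn register_device(&mut self, device_id: &str, public_key: [u8; 32]) {
        self.devices.insert(device_id.to_string(), public_key);
    }

    /// Lookup by exact device_id; a miss maps to the
    /// IngestError::UnknownDevice case described above.
    fn key_for(&self, device_id: &str) -> Result<&[u8; 32], &'static str> {
        self.devices.get(device_id).ok_or("UnknownDevice")
    }
}

fn main() {
    let mut gate = Gate::new();
    gate.register_device("lift-01", [0u8; 32]);
    assert!(gate.key_for("lift-01").is_ok());
    assert_eq!(gate.key_for("lift-99"), Err("UnknownDevice"));
}
```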


5. Key Rotation

Rotate a device key when:

  • The private key may have been exposed
  • The device is being decommissioned and reprovisioned
  • Your security policy requires periodic rotation

Rotation procedure:

  1. Generate a new keypair on or for the new device configuration:

    eds audit keygen --out device-lift-01-v2.key.json
    
  2. Register the new public key (the gate does not yet support multiple keys per device_id — register the new key under a new device_id such as lift-01-v2 during the transition window).

  3. Update the device to sign new records with the new private key and the new device_id.

  4. Once all in-flight records signed with the old key have been ingested and verified, remove the old device registration from the policy gate.

  5. Securely delete or revoke the old private key from all storage locations.

Note: Multi-key-per-device support (allowing old and new keys simultaneously under the same device_id) is tracked in #57.


6. Software Update Publisher Keys

Software update verification uses a separate set of Ed25519 keys from device signing keys. A publisher key belongs to the entity that signs firmware or software packages; a device signing key belongs to the individual device that signs audit records. Never mix these roles.

6.1 Key generation and storage

Generate a publisher keypair the same way as a device keypair:

eds audit keygen --out publisher-acme-firmware.key.json

The private key must be kept in a high-security offline environment (HSM, air-gapped workstation, or a secrets manager with strict access control). It is used only at build time to sign a release artifact, never on the device itself.

The public key is embedded in the device firmware image at manufacture time and loaded into UpdateVerifier at runtime:

#![allow(unused)]
fn main() {
use edgesentry_rs::update::UpdateVerifier;
use ed25519_dalek::VerifyingKey;

let public_key_bytes: [u8; 32] = /* bytes baked into firmware */;
let verifying_key = VerifyingKey::from_bytes(&public_key_bytes)?;

let mut verifier = UpdateVerifier::new();
verifier.register_publisher("acme-firmware", verifying_key);
}

6.2 One publisher ID per key

Register each key under a distinct publisher_id. Avoid registering the same key under multiple IDs or multiple keys under the same ID unless your threat model explicitly requires it.

#![allow(unused)]
fn main() {
// Correct: one key per publisher
verifier.register_publisher("acme-firmware", firmware_key);
verifier.register_publisher("acme-config",   config_key);

// Avoid: same key shared across publishers — a signature from one
// package type could be accepted for the other
verifier.register_publisher("acme-firmware", shared_key); // ⚠
verifier.register_publisher("acme-config",   shared_key); // ⚠
}

6.3 Key confusion attacks

A key confusion attack occurs when a signature produced for one package type is submitted as a valid signature for another. UpdateVerifier prevents this because:

  1. The caller passes an explicit publisher_id to verify().
  2. The verifier looks up the key registered under that exact ID.
  3. A signature by acme-config’s key will not verify under acme-firmware’s key.

This only holds when each publisher has a unique key. If keys are shared across publishers (see §6.2), the isolation breaks.
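The lookup step can be modeled in a few lines of std-only Rust, where a `[u8; 32]` stands in for a verifying key and an equality check stands in for a real Ed25519 signature verification:

```rust
use std::collections::HashMap;

/// Illustrative model of the lookup: verification always starts from
/// the caller-supplied publisher_id, so a package signed under
/// "acme-config" cannot verify under "acme-firmware" unless the two
/// IDs share a key. Key equality stands in for signature checking.
fn verifies(
    registry: &HashMap<&str, [u8; 32]>,
    publisher_id: &str,
    signed_with: [u8; 32],
) -> bool {
    registry.get(publisher_id) == Some(&signed_with)
}

fn main() {
    let firmware_key = [1u8; 32];
    let config_key = [2u8; 32];
    let mut registry = HashMap::new();
    registry.insert("acme-firmware", firmware_key);
    registry.insert("acme-config", config_key);

    // Unique keys: config's signature is rejected under the firmware ID.
    assert!(verifies(&registry, "acme-firmware", firmware_key));
    assert!(!verifies(&registry, "acme-firmware", config_key));

    // Shared key (the §6.2 anti-pattern): the isolation collapses.
    registry.insert("acme-config", firmware_key);
    assert!(verifies(&registry, "acme-config", firmware_key));
    assert!(verifies(&registry, "acme-firmware", firmware_key));
}
```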

6.4 Publisher key rotation

Rotate a publisher key when the private key may have been exposed or your security policy requires periodic rotation.

  1. Generate a new keypair offline.
  2. Sign the next firmware release with the new private key.
  3. Distribute a firmware update that embeds the new public key and calls register_publisher with the new key. Include both old and new keys during the transition window so devices on either firmware version can verify updates.
  4. After all devices have moved to the new firmware, remove the old key registration.
  5. Securely destroy the old private key.

6.5 FFI (C/C++ devices)

For devices integrating via the C/C++ FFI bridge, publisher key verification will be exposed as eds_verify_update (tracked in #80). Until that function is available, C/C++ devices must call into Rust via a thin wrapper or handle publisher verification at the application layer.

The public key bytes to pass to eds_verify_update are the same 32-byte Ed25519 public key described above — provision them into the device at manufacture time, stored in a read-only flash region or secure element.


7. HSM Path (CLS Level 4)

For CLS Level 4 and high-assurance deployments, private keys should never exist as extractable byte arrays. Instead, signing operations should be performed inside an HSM or TEE, with the private key material never leaving the secure boundary.

The planned edgesentry-bridge C/C++ FFI layer (#53) and HSM integration (#54) will provide a signing interface that delegates the Ed25519 sign operation to an HSM-backed provider without exposing the raw key bytes to application code.

C/C++ FFI Bridge

edgesentry-bridge is a separate Rust crate that exposes Ed25519 signing and BLAKE3 hash-chain verification as a stable C ABI. C and C++ firmware or gateways can call the same security logic as the Rust library without a full rewrite.


Building the library

cargo build -p edgesentry-bridge --release

This produces:

| Platform | File |
| --- | --- |
| macOS | target/release/libedgesentry_bridge.dylib and .a |
| Linux | target/release/libedgesentry_bridge.so and .a |

The header crates/edgesentry-bridge/include/edgesentry_bridge.h is regenerated automatically by build.rs using cbindgen.


Linking from C/C++

macOS:

cc -o my_app main.c \
   -I path/to/edgesentry-bridge/include \
   -L path/to/target/release \
   -ledgesentry_bridge \
   -framework Security -framework CoreFoundation

Linux:

cc -o my_app main.c \
   -I path/to/edgesentry-bridge/include \
   -L path/to/target/release \
   -ledgesentry_bridge \
   -lpthread -ldl

A ready-made Makefile is provided in crates/edgesentry-bridge/examples/c_integration/.


API reference

Error codes

| Constant | Value | Meaning |
| --- | --- | --- |
| EDS_OK | 0 | Success |
| EDS_ERR_NULL_PTR | -1 | A required pointer was NULL |
| EDS_ERR_INVALID_UTF8 | -2 | String argument is not valid UTF-8 |
| EDS_ERR_INVALID_KEY | -3 | Key or hash buffer is invalid |
| EDS_ERR_STRING_TOO_LONG | -4 | String exceeds fixed buffer size |
| EDS_ERR_CHAIN_INVALID | -5 | Hash-chain verification failed |
| EDS_ERR_PANIC | -6 | Unexpected internal error |
| EDS_ERR_HASH_MISMATCH | -7 | Payload hash does not match expected value |
| EDS_ERR_BAD_SIGNATURE | -8 | Ed25519 signature is invalid |

After any call that returns a negative error code, call eds_last_error_message() to retrieve a human-readable description of the failure.

Record struct

typedef struct {
    uint64_t sequence;           /* monotonic record index (starts at 1) */
    uint64_t timestamp_ms;       /* Unix epoch in milliseconds           */
    uint8_t  payload_hash[32];   /* BLAKE3 hash of the raw payload        */
    uint8_t  signature[64];      /* Ed25519 signature over payload_hash   */
    uint8_t  prev_record_hash[32]; /* hash of preceding record (zero for first) */
    uint8_t  device_id[256];     /* null-terminated device identifier     */
    uint8_t  object_ref[512];    /* null-terminated storage reference     */
} EdsAuditRecord;

EdsAuditRecord is caller-allocated. Rust never calls malloc or returns a heap pointer — no _free function is needed.
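
On the Rust side, the bridge presumably declares a matching #[repr(C)] struct with the same field order; the sketch below is illustrative, not the crate's actual source:

```rust
// Hypothetical Rust-side mirror of the C EdsAuditRecord above. #[repr(C)]
// fixes C-compatible field order and layout; every buffer is an inline
// fixed-size array, so the struct never owns heap memory.
#[repr(C)]
pub struct EdsAuditRecord {
    pub sequence: u64,
    pub timestamp_ms: u64,
    pub payload_hash: [u8; 32],
    pub signature: [u8; 64],
    pub prev_record_hash: [u8; 32],
    pub device_id: [u8; 256],
    pub object_ref: [u8; 512],
}

fn main() {
    // 2 * 8 bytes + 32 + 64 + 32 + 256 + 512 = 912, with no padding needed.
    println!("size = {}", std::mem::size_of::<EdsAuditRecord>());
}
```

Because both sides agree on the layout, the caller can allocate the struct on its own stack and pass a plain pointer across the boundary.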

Functions

/* Generate an Ed25519 keypair via OS CSPRNG.
   private_key_out and public_key_out must each point to 32 bytes. */
int32_t eds_keygen(uint8_t *private_key_out, uint8_t *public_key_out);

/* Hash payload with BLAKE3, sign with Ed25519, fill *out.
   Pass NULL for prev_record_hash to use the zero hash (first record). */
int32_t eds_sign_record(const char    *device_id,
                        uint64_t       sequence,
                        uint64_t       timestamp_ms,
                        const uint8_t *payload,
                        size_t         payload_len,
                        const uint8_t *prev_record_hash,
                        const char    *object_ref,
                        const uint8_t *private_key,
                        EdsAuditRecord *out);

/* Compute the per-record hash (used as prev_record_hash for the next record).
   hash_out must point to 32 bytes. */
int32_t eds_record_hash(const EdsAuditRecord *record, uint8_t *hash_out);

/* Verify Ed25519 signature. Returns 1 valid, 0 invalid, negative on error. */
int32_t eds_verify_record(const EdsAuditRecord *record,
                          const uint8_t *public_key);

/* Verify the entire hash chain. Returns EDS_OK or EDS_ERR_CHAIN_INVALID. */
int32_t eds_verify_chain(const EdsAuditRecord *records, size_t count);
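
The chain rule that eds_verify_chain enforces can be sketched in plain Rust. This toy version uses std's DefaultHasher as a stand-in for BLAKE3 (it is not cryptographic) and u64 values instead of 32-byte digests; only the linking logic mirrors the bridge:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy record: u64 fields stand in for the real 32-byte hashes.
struct Record {
    sequence: u64,
    payload_hash: u64,
    prev_record_hash: u64,
}

// Stand-in for the per-record hash (eds_record_hash); NOT cryptographic.
fn record_hash(r: &Record) -> u64 {
    let mut h = DefaultHasher::new();
    (r.sequence, r.payload_hash, r.prev_record_hash).hash(&mut h);
    h.finish()
}

// Each record's prev_record_hash must equal the hash of the preceding
// record; the first record links to the zero hash.
fn verify_chain(records: &[Record]) -> bool {
    let mut expected_prev = 0u64;
    for r in records {
        if r.prev_record_hash != expected_prev {
            return false;
        }
        expected_prev = record_hash(r);
    }
    true
}

fn main() {
    let mut chain = Vec::new();
    let mut prev = 0u64;
    for seq in 1..=3 {
        let r = Record { sequence: seq, payload_hash: seq * 7, prev_record_hash: prev };
        prev = record_hash(&r);
        chain.push(r);
    }
    assert!(verify_chain(&chain));
    chain[1].payload_hash ^= 1; // tamper: breaks record 3's back-link
    assert!(!verify_chain(&chain));
}
```

Note that tampering with any field of record N changes its hash, so record N+1's stored prev_record_hash no longer matches and the whole suffix of the chain fails verification.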

/* Verify a software update before installation (CLS-03 / STAR-2 R2.2).
   Checks BLAKE3(payload) == payload_hash, then verifies the Ed25519
   publisher signature over payload_hash.
   payload_hash must point to 32 bytes; signature to 64 bytes;
   publisher_key to 32 bytes.
   Returns EDS_OK, EDS_ERR_HASH_MISMATCH, EDS_ERR_BAD_SIGNATURE, or
   EDS_ERR_INVALID_KEY / EDS_ERR_NULL_PTR on bad inputs. */
int32_t eds_verify_update(const uint8_t *payload,
                          size_t         payload_len,
                          const uint8_t *payload_hash,
                          const uint8_t *signature,
                          const uint8_t *publisher_key);
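
The two-step order matters: a corrupted download surfaces as a hash mismatch even when its signature bytes are also wrong. A minimal control-flow sketch, with the hash and signature checks abstracted into closures (the real bridge uses BLAKE3 and Ed25519):

```rust
#[derive(Debug, PartialEq)]
enum UpdateError {
    HashMismatch,  // maps to EDS_ERR_HASH_MISMATCH
    BadSignature,  // maps to EDS_ERR_BAD_SIGNATURE
}

// Mirrors eds_verify_update's check order: content integrity first,
// publisher authenticity second.
fn verify_update(
    payload: &[u8],
    expected_hash: u64,
    hash: impl Fn(&[u8]) -> u64,
    signature_valid: impl Fn(u64) -> bool,
) -> Result<(), UpdateError> {
    let actual = hash(payload);
    if actual != expected_hash {
        return Err(UpdateError::HashMismatch);
    }
    if !signature_valid(actual) {
        return Err(UpdateError::BadSignature);
    }
    Ok(())
}

fn main() {
    // Toy hash: byte sum (NOT cryptographic).
    let hash = |p: &[u8]| p.iter().map(|&b| b as u64).sum::<u64>();
    let good_sig = |_h: u64| true;
    assert_eq!(verify_update(b"fw-1.2.3", hash(b"fw-1.2.3"), hash, good_sig), Ok(()));
    assert_eq!(
        verify_update(b"corrupted", hash(b"fw-1.2.3"), hash, good_sig),
        Err(UpdateError::HashMismatch)
    );
}
```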

/* Return a thread-local human-readable description of the last error.
   The pointer is valid until the next eds_* call on this thread.
   Returns "" when no error has occurred.  Never returns NULL. */
const char *eds_last_error_message(void);
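
The thread-local convention behind eds_last_error_message can be modeled in safe Rust. This is a simplified sketch (returning an owned String rather than the *const char the C ABI exposes), assuming the bridge keeps the message in a thread_local! slot:

```rust
use std::cell::RefCell;

thread_local! {
    // One slot per thread, so concurrent callers never see each other's errors.
    static LAST_ERROR: RefCell<String> = RefCell::new(String::new());
}

// Hypothetically called by each eds_* function on failure.
fn set_last_error(msg: impl Into<String>) {
    LAST_ERROR.with(|e| *e.borrow_mut() = msg.into());
}

// Safe analogue of eds_last_error_message(): "" when no error has occurred.
fn last_error_message() -> String {
    LAST_ERROR.with(|e| e.borrow().clone())
}

fn main() {
    assert_eq!(last_error_message(), "");
    set_last_error("signature verification failed");
    assert_eq!(last_error_message(), "signature verification failed");
}
```

A thread-local slot is what makes the "valid until the next eds_* call on this thread" lifetime rule workable without any locking.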

Minimal C example

#include "edgesentry_bridge.h"
#include <stdio.h>
#include <string.h>
#include <assert.h>

int main(void) {
    uint8_t priv_key[32], pub_key[32];
    if (eds_keygen(priv_key, pub_key) != EDS_OK) {
        fprintf(stderr, "keygen failed: %s\n", eds_last_error_message());
        return 1;
    }

    const char *payload = "check=door,status=ok";
    EdsAuditRecord rec;
    memset(&rec, 0, sizeof(rec));

    int rc = eds_sign_record("lift-01", 1, 1700000000000ULL,
                             (const uint8_t *)payload, strlen(payload),
                             NULL,              /* zero hash — first record */
                             "lift-01/1.bin",
                             priv_key, &rec);
    if (rc != EDS_OK) {
        fprintf(stderr, "sign_record failed: %s\n", eds_last_error_message());
        return 1;
    }

    assert(eds_verify_record(&rec, pub_key) == 1);
    return 0;
}

See the full example in crates/edgesentry-bridge/examples/c_integration/main.c.


Memory safety conventions

| Rule | Detail |
|---|---|
| No heap allocation | EdsAuditRecord is caller-allocated; Rust never calls malloc |
| NULL-checked | Every pointer argument is checked; EDS_ERR_NULL_PTR returned on failure |
| Fixed-size strings | device_id max 255 chars; object_ref max 511 chars; longer inputs are rejected with EDS_ERR_STRING_TOO_LONG |
| Panic safety | std::panic::catch_unwind wraps every FFI function; a Rust panic returns EDS_ERR_PANIC instead of unwinding across the C boundary |
| Key sizes | private_key and public_key must point to exactly 32 bytes; hash buffers to 32 bytes; signature buffer to 64 bytes |
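
The panic-safety rule can be illustrated with a small guard, assuming each exported function wraps its body the same way (the helper name ffi_guard is hypothetical):

```rust
use std::panic::{self, UnwindSafe};

const EDS_OK: i32 = 0;
const EDS_ERR_PANIC: i32 = -6;

// Converts a Rust panic into an error code instead of letting it unwind
// across the C boundary, which would be undefined behavior.
fn ffi_guard<F: FnOnce() -> i32 + UnwindSafe>(body: F) -> i32 {
    panic::catch_unwind(body).unwrap_or(EDS_ERR_PANIC)
}

fn main() {
    panic::set_hook(Box::new(|_| {})); // keep the panic message off stderr
    assert_eq!(ffi_guard(|| EDS_OK), EDS_OK);
    assert_eq!(ffi_guard(|| panic!("bug in record encoding")), EDS_ERR_PANIC);
}
```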

HSM path

For CLS Level 4, the private key should never exist as an extractable byte array. The planned HSM integration (#54) will delegate the eds_sign_record operation to an HSM-backed provider without exposing key bytes to the caller.

Contributing

Consistency Check

After every change — whether to code, tests, scripts, or docs — run through these four checks to keep everything in sync:

  1. Code → Docs: If you add, remove, or rename a module, function, CLI command, or behavior, update all docs that reference it (concepts.md, architecture.md, cli.md, quickstart.md, demo.md, traceability.md).
  2. Docs → Code: If a doc describes a feature or command, verify it exists and works as described. Stale examples and wrong test target names cause CI failures.
  3. Scripts → Code: If you rename a test file or cargo feature, update every script and workflow that references it (e.g. scripts/integration_test.sh, .github/workflows/).
  4. Traceability: If you implement or change a compliance control, update the status in docs/src/traceability.md (✅ / ⚠️ / 🔲).

A quick grep before opening a PR:

# Find docs that mention a symbol you changed
grep -r "<old-name>" docs/ scripts/ .github/

Issue Labels

Every issue should carry one type label, one priority label, and one or more category labels.

Type labels

| Label | When to use |
|---|---|
| bug | Something is broken or behaves incorrectly |
| enhancement | New feature or improvement to existing behavior |
| documentation | Docs-only change — no production code affected |

Priority labels

| Label | Meaning | Examples |
|---|---|---|
| priority:P0 | Must-have — directly required to satisfy a target standard (CLS, JC-STAR, CRA). Work is blocked until resolved. | Broken signature verification, missing hash-chain link, failing integrity gate |
| priority:P1 | Good-to-have — strengthens compliance posture or developer experience but is not a hard blocker for standard conformance. | Key rotation tooling, CI hardening, traceability matrix, FFI bridge |
| priority:P2 | Best-effort — stretch goals, nice-to-haves, or anything that requires dedicated hardware. Pursue if capacity allows. | HSM integration, education white papers, reference architectures |

When in doubt, ask: “Does the standard explicitly require this?” If yes → P0. Otherwise, if it helps but is not mandated → P1. For stretch goals, nice additions, or hardware-dependent work → P2.

Category labels

| Label | When to use |
|---|---|
| core | Core security controls — signing, hashing, integrity gate, ingest pipeline |
| compliance-governance | Compliance evidence, traceability matrices, disclosure processes |
| devsecops | CI/CD pipelines, supply-chain security, static analysis, audit tooling |
| platform-operations | Infrastructure, deployment, operational readiness |
| hardware-needed | Requires physical hardware or hardware-backed infrastructure (always pair with priority:P2) |

Pull Request Conventions

When creating a pull request, always assign it to the user who authored the branch:

gh pr create --assignee "@me" --title "..." --body "..."

Mandatory: Run Tests After Every Code Change

After every code change, run:

cargo test --workspace

Do not consider a change complete until all tests pass.

Unit Tests

Prerequisites (macOS)

Install the Rust toolchain first:

brew install rustup-init
rustup-init -y
source "$HOME/.cargo/env"
rustup default stable

Install cargo-deny (required for OSS license checks):

cargo install cargo-deny
source "$HOME/.cargo/env"
cargo deny --version

Running Tests

Run all unit tests:

cargo test --workspace

Run tests for a specific crate:

cargo test -p edgesentry-rs

Run the edgesentry-rs crate with the S3-compatible backend feature enabled:

cargo test -p edgesentry-rs --features s3

Run S3 integration tests against a live MinIO instance (requires the env vars below to be set):

TEST_S3_ENDPOINT=http://localhost:9000 \
TEST_S3_ACCESS_KEY=minioadmin \
TEST_S3_SECRET_KEY=minioadmin \
TEST_S3_BUCKET=bucket \
cargo test -p edgesentry-rs --features s3 --test integration -- --nocapture

Tests are skipped automatically when any of the four TEST_S3_* variables is unset.
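
The skip-when-unset gate amounts to an all-or-nothing lookup of the four variables. A sketch of the pattern (the closure-based lookup is an assumption made for testability, not the crate's actual code):

```rust
/// Returns the S3 test configuration only when all four variables resolve;
/// an integration test calls this and returns early (skips) on None.
fn s3_test_config(
    get: impl Fn(&str) -> Option<String>,
) -> Option<(String, String, String, String)> {
    Some((
        get("TEST_S3_ENDPOINT")?,
        get("TEST_S3_ACCESS_KEY")?,
        get("TEST_S3_SECRET_KEY")?,
        get("TEST_S3_BUCKET")?,
    ))
}

fn main() {
    // In the real tests the lookup would be |k| std::env::var(k).ok().
    let all_set = |_: &str| Some("dummy".to_string());
    let none_set = |_: &str| -> Option<String> { None };
    assert!(s3_test_config(all_set).is_some());
    assert!(s3_test_config(none_set).is_none());
}
```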

Run unit tests + OSS license checks in one command:

./scripts/run_unit_and_license_check.sh

Static Analysis and OSS License Check

Use the following checks before release.

1) Static analysis (clippy)

cargo clippy --workspace --all-targets --all-features -- -D warnings

2) Dependency security advisory check (cargo-audit)

Install once:

cargo install cargo-audit

Run:

cargo audit

3) Commercial-use OSS license check (cargo-deny)

Install once:

cargo install cargo-deny

Run license check (policy in deny.toml):

cargo deny check licenses

Optional full dependency policy check:

cargo deny check advisories bans licenses sources

If this check fails, inspect violating crates and update dependencies or the policy only after legal/security review.


Avoiding Conflicts with Main

Conflicts occur when a feature branch diverges from main while main receives other merged PRs that touch the same files. The highest-conflict files in this repo are scripts/local_demo.sh, docs/src/demo.md, and .github/copilot-instructions.md.

Before starting work

git fetch origin
git checkout main && git pull origin main
git checkout -b <your-branch>

Keep your branch up to date — rebase onto main regularly, especially before opening a PR:

git fetch origin
git rebase origin/main

Resolving a conflict during rebase

  1. Identify conflicted files: git diff --name-only --diff-filter=U
  2. For each file, decide which side to keep:
    • Take your version: git checkout --theirs <file>
    • Take main’s version: git checkout --ours <file>
    • Merge manually: edit the file to remove <<<<<<< / ======= / >>>>>>> markers
  3. Stage the resolved file: git add <file>
  4. Continue: GIT_EDITOR=true git rebase --continue
  5. If a conflict recurs on the next commit, repeat from step 1.

After resolving, force-push the rebased branch:

git push --force-with-lease origin <your-branch>

Files most likely to conflict — coordinate before editing these:

| File | Why it conflicts often |
|---|---|
| scripts/local_demo.sh | Multiple PRs add steps or restructure the demo flow |
| docs/src/demo.md | Mirrors demo script changes |
| .github/copilot-instructions.md | Structure section updated whenever new modules or examples are added |
| crates/edgesentry-rs/examples/lift_inspection_flow.rs | Touched by both quickstart improvements and role-boundary work |

Build and Release

Build Release Artifacts

cargo build --workspace --release

Build a specific crate only:

cargo build -p edgesentry-rs --release

Publish to crates.io

  1. Validate quality gates first:
./scripts/run_unit_and_license_check.sh
cargo clippy --workspace --all-targets --all-features -- -D warnings
  2. Login once:
cargo login <CRATES_IO_TOKEN>
  3. Dry-run publish:
cargo publish --dry-run -p edgesentry-rs
  4. Publish:
cargo publish -p edgesentry-rs

GitHub Actions Release Automation (macOS / Windows / Linux)

This repository includes .github/workflows/release.yml.

  • Trigger: push a tag like v0.1.0
  • Quality gate: build, unit tests, license check, clippy
  • Publish edgesentry-rs to crates.io
  • Build eds binaries for Linux, macOS (x64 + arm64), and Windows
  • Upload packaged binaries to GitHub Release assets

Note: .github/workflows/ci.yml runs cargo publish --dry-run for edgesentry-rs.

Required GitHub secret:

  • CRATES_IO_TOKEN: crates.io API token used by cargo publish

Automatic Version Increment After Merge

This repository also includes .github/workflows/auto-version-tag.yml.

  • Trigger: when CI succeeds on main
  • Action: update workspace.package.version in Cargo.toml and create/push a vX.Y.Z tag
  • Then: release.yml is triggered by that tag and performs the full release pipeline

Version bump rules (Conventional Commits):

  • fix: -> patch bump (x.y.z -> x.y.(z+1))
  • feat: -> minor bump (x.y.z -> x.(y+1).0)
  • ! or BREAKING CHANGE -> major bump (x.y.z -> (x+1).0.0)
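
The bump rules reduce to a three-way precedence check (breaking beats feat beats fix). A sketch, assuming the workflow parses the commit subject and body along these lines (the actual auto-version-tag.yml logic may differ):

```rust
/// Conventional-Commits version bump: breaking > feat > fix > none.
fn bump(version: (u64, u64, u64), subject: &str, body: &str) -> (u64, u64, u64) {
    let (major, minor, patch) = version;
    // "feat!:" / "fix(scope)!:" mark breaking changes, as does a
    // "BREAKING CHANGE" footer in the body.
    let type_part = subject.split(':').next().unwrap_or("");
    if type_part.ends_with('!') || body.contains("BREAKING CHANGE") {
        (major + 1, 0, 0)
    } else if type_part.starts_with("feat") {
        (major, minor + 1, 0)
    } else if type_part.starts_with("fix") {
        (major, minor, patch + 1)
    } else {
        (major, minor, patch)
    }
}

fn main() {
    assert_eq!(bump((0, 3, 1), "fix: reject stale sequence", ""), (0, 3, 2));
    assert_eq!(bump((0, 3, 1), "feat(ingest): S3 backend", ""), (0, 4, 0));
    assert_eq!(bump((0, 3, 1), "feat!: new record layout", ""), (1, 0, 0));
}
```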

Operations Runbook

This page covers observability wiring, alert thresholds, and backup/restore procedures for a production EdgeSentry-RS deployment.


Observability

Structured logging with tracing

EdgeSentry-RS uses the tracing facade. No subscriber is bundled — deployers wire up the backend of their choice at application startup. When no subscriber is registered, the library's instrumentation overhead is negligible.

Recommended subscriber for production (JSON over stdout, ingested by Loki / CloudWatch):

# Cargo.toml of the host application
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }

// main.rs of the host application
use tracing_subscriber::{fmt, EnvFilter};

fn main() {
    fmt()
        .json()
        .with_env_filter(EnvFilter::from_default_env()) // RUST_LOG=edgesentry_rs=info
        .init();
    // ...
}

Set RUST_LOG=edgesentry_rs=info for production; edgesentry_rs=debug for incident investigation.

Structured log events emitted by the library

All events include the module path as target. Key events:

| Level | Target | Event | Key fields |
|---|---|---|---|
| DEBUG | edgesentry_rs::agent | signing record | device_id, sequence, payload_bytes |
| DEBUG | edgesentry_rs::ingest::storage | ingest started | device_id, sequence, object_ref, payload_bytes |
| WARN | edgesentry_rs::ingest::storage | payload hash mismatch — record rejected | device_id, sequence |
| WARN | edgesentry_rs::ingest::storage | integrity policy rejected record | device_id, sequence, reason |
| ERROR | edgesentry_rs::ingest::storage | raw data store write failed | device_id, sequence, error |
| ERROR | edgesentry_rs::ingest::storage | audit ledger append failed | device_id, sequence, error |
| ERROR | edgesentry_rs::ingest::storage | operation log write failed | device_id, sequence, error |
| INFO | edgesentry_rs::ingest::storage | record accepted | device_id, sequence, object_ref |
| DEBUG | edgesentry_rs::ingest::verify | signature verification failed | device_id, sequence |
| DEBUG | edgesentry_rs::ingest::verify | duplicate record rejected | device_id, sequence |
| DEBUG | edgesentry_rs::ingest::verify | sequence out of order | device_id, expected, actual |
| DEBUG | edgesentry_rs::ingest::verify | prev_record_hash mismatch — chain broken | device_id, sequence |
| DEBUG | edgesentry_rs::ingest::verify | record verified and accepted | device_id, sequence |

Use a log-to-metrics pipeline (e.g. Promtail + Loki, or Vector) to derive counters from structured log events:

| Metric | How to derive | Alert threshold |
|---|---|---|
| edgesentry_ingest_accepted_total | Count INFO "record accepted" events | — |
| edgesentry_ingest_rejected_total{reason} | Count WARN rejection events, label by reason field | > 10/min sustained → P1 alert |
| edgesentry_ingest_error_total{component} | Count ERROR storage failure events, label by component (raw_data_store / audit_ledger / operation_log) | Any occurrence → P0 alert |
| edgesentry_chain_break_total | Count DEBUG "prev_record_hash mismatch" events | Any occurrence → P0 alert |
| edgesentry_signature_fail_total | Count DEBUG "signature verification failed" events | > 5/min sustained → P1 alert |
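
As a toy illustration of the derivation step, a counter pass over structured log lines might look like this (matching on event-message substrings; a real pipeline such as Vector or Promtail would match on parsed JSON fields instead):

```rust
use std::collections::HashMap;

// Maps each log line to a metric name by its event message, mirroring the
// derivation rules in the table above (subset shown).
fn derive_counters(lines: &[&str]) -> HashMap<&'static str, u64> {
    let mut counters: HashMap<&'static str, u64> = HashMap::new();
    for line in lines {
        let metric = if line.contains("record accepted") {
            Some("edgesentry_ingest_accepted_total")
        } else if line.contains("payload hash mismatch")
            || line.contains("integrity policy rejected")
        {
            Some("edgesentry_ingest_rejected_total")
        } else if line.contains("prev_record_hash mismatch") {
            Some("edgesentry_chain_break_total")
        } else if line.contains("signature verification failed") {
            Some("edgesentry_signature_fail_total")
        } else {
            None
        };
        if let Some(name) = metric {
            *counters.entry(name).or_insert(0) += 1;
        }
    }
    counters
}

fn main() {
    let lines = [
        r#"{"level":"INFO","message":"record accepted","device_id":"lift-01"}"#,
        r#"{"level":"WARN","message":"payload hash mismatch — record rejected"}"#,
        r#"{"level":"INFO","message":"record accepted","device_id":"lift-02"}"#,
    ];
    let c = derive_counters(&lines);
    assert_eq!(c["edgesentry_ingest_accepted_total"], 2);
    assert_eq!(c["edgesentry_ingest_rejected_total"], 1);
}
```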

OpenTelemetry (tracing spans)

The IngestService::ingest method emits a tracing span. Wire it to an OTLP exporter for distributed tracing:

opentelemetry = "0.26"
opentelemetry-otlp = { version = "0.26", features = ["grpc-tonic"] }
tracing-opentelemetry = "0.27"

Alert Definitions

| Alert | Condition | Severity | Response |
|---|---|---|---|
| IngestStorageError | Any ERROR-level storage failure | P0 | Check DB/S3 connectivity; verify disk and credentials |
| ChainBreak | Any prev_record_hash mismatch event | P0 | Investigate tamper or replay; preserve logs before any restart |
| HighRejectionRate | Rejection rate > 10/min for 5 min | P1 | Check device firmware; look for misconfigured signing key rotation |
| SignatureFailureSurge | Signature failures > 5/min for 5 min | P1 | Possible key compromise or active spoofing attempt |
| AuditLedgerLag | Postgres operation_logs insert latency > 2 s p99 | P1 | Check DB query plan; autovacuum contention |

Recovery Objectives

| Objective | Target | Basis |
|---|---|---|
| RTO (recovery time) | < 30 minutes | Time to restore Postgres from pg_basebackup + WAL replay |
| RPO (recovery point) | < 5 minutes | Continuous WAL archiving at 5-minute intervals |

Backup Runbook

PostgreSQL — audit ledger and operation log

Prerequisites: WAL archiving enabled (archive_mode = on, archive_command shipping to S3 or equivalent).

1. Take a base backup

pg_basebackup \
  --host=<DB_HOST> \
  --username=<DB_USER> \
  --pgdata=/backup/pg_base_$(date +%Y%m%d_%H%M%S) \
  --format=tar \
  --gzip \
  --wal-method=stream \
  --checkpoint=fast \
  --progress

2. Verify the backup

tar -tzf /backup/pg_base_<timestamp>/base.tar.gz | head -20

3. Archive WAL continuously

Ensure the archive_command in postgresql.conf ships WAL segments to durable storage (e.g. S3):

archive_command = 'aws s3 cp %p s3://<BUCKET>/wal/%f'

4. Retention policy

| Backup type | Retention |
|---|---|
| Base backup | 30 days |
| WAL archive | 30 days |
| Logical dump (pg_dump) | 7 days (weekly) |

S3 / MinIO — raw payload store

Enable versioning and cross-region replication on the bucket:

# Enable versioning
aws s3api put-bucket-versioning \
  --bucket <BUCKET> \
  --versioning-configuration Status=Enabled

# Enable replication (requires a destination bucket and IAM role configured separately)
aws s3api put-bucket-replication \
  --bucket <BUCKET> \
  --replication-configuration file://replication.json

Minimum replication target: one additional region. For CLS Level 3 evidence integrity, ensure object lock or versioning is enabled so payloads cannot be silently overwritten.


Restore Runbook

PostgreSQL — point-in-time recovery (PITR)

# 1. Stop the Postgres service
systemctl stop postgresql

# 2. Restore base backup
tar -xzf /backup/pg_base_<timestamp>/base.tar.gz -C /var/lib/postgresql/data/

# 3. Configure recovery (PostgreSQL 12+ replaced recovery.conf with
#    recovery settings in postgresql.conf plus a recovery.signal file)
cat >> /var/lib/postgresql/data/postgresql.conf <<EOF
restore_command = 'aws s3 cp s3://<BUCKET>/wal/%f %p'
recovery_target_time = '<TARGET_TIMESTAMP>'
recovery_target_action = 'promote'
EOF
touch /var/lib/postgresql/data/recovery.signal

# 4. Start Postgres — it will replay WAL to the target time
systemctl start postgresql

# 5. Verify: query the last accepted sequence per device
psql -U <DB_USER> -d <DB_NAME> \
  -c "SELECT device_id, MAX(sequence) FROM audit_records GROUP BY device_id;"

Recovery verification checklist

  • Last record sequence per device matches pre-incident snapshot
  • Hash chain continuity verified: eds verify-chain <exported-records.json>
  • Operation log shows no unexpected gaps (check timestamps around recovery target)
  • Alert suppression lifted after verification completes

S3 / MinIO — object restore

# Restore a specific object version
aws s3api get-object \
  --bucket <BUCKET> \
  --key <OBJECT_KEY> \
  --version-id <VERSION_ID> \
  <OUTPUT_FILE>

Failure Drill Schedule

Run the following drills quarterly to verify runbook accuracy:

| Drill | Procedure | Pass criterion |
|---|---|---|
| DB failover | Stop primary Postgres; promote replica | Ingest resumes in < 30 min |
| DB restore | PITR to 1 hour ago on staging | Chain continuity verified in < 30 min |
| S3 object recovery | Restore a deleted test object | Object byte-identical to original |
| Alert fire | Inject a bad signature via test harness | P1 alert fires within 2 min |