edgesentry-audit
“Trust and verification for edge infrastructure.”
- Repository: github.com/edgesentry/edgesentry-rs
- Documentation: edgesentry.github.io/edgesentry-rs/audit/en/
Why
In recent years, labor shortages have become a serious challenge in infrastructure operations. Labor-intensive industries such as construction are increasingly adopting IoT devices for remote inspections.
At the same time, if device spoofing, device takeover, or inspection data tampering occurs, trust in the entire system is fundamentally undermined. This makes continuous verification of both device authenticity and data integrity essential.
Vision and Principles
EdgeSentry-Audit is an early-stage learning project — we are building this to deepen our understanding of IoT security techniques hands-on. The license is commercially compatible (MIT/Apache 2.0), but the implementation is just getting started and is not yet production-ready. Following the governance model of successful “in-process” systems like DuckDB, we keep the core intellectual property open and vendor-neutral, so it can grow into a public good over time.
Our goal is to serve as the Common Trust Layer for vendors in public infrastructure, maritime (MPA), and smart buildings (BCA), helping them meet the highest regulatory standards — including Singapore’s CLS Level 3/4, iM8, and Japan’s Unified Government Standards.
We believe the infrastructure of trust should not be owned by a single private entity:
- Open for All: A vendor-agnostic reference implementation that lowers the barrier for companies to achieve regulatory compliance.
- Cross-Industry Learning: Engineers collaborate across corporate boundaries to master the complexities of global IoT security standards.
- Sustainable Growth: The core remains a community-driven reference implementation; commercial services (advanced analytics, automated compliance reporting) are built on top of this stable foundation.
See the Roadmap for the phased compliance plan.
Initial Scope
For public-infrastructure IoT deployments, Singapore’s Cybersecurity Labelling Scheme (CLS) Level 3 and Level 4 introduce hardware-level security requirements. EdgeSentry-Audit supports these through hardware extension points: the security hardware itself lives on the device side, while this library provides the software integration layer. The initial scope covers tamper prevention and tamper-evident audit records, with those extension points designed in from the start.
How
Modeled after the “Simple, Portable, Fast” philosophy, EdgeSentry-Audit implements three pillars of trust in Rust, designed for high-performance embedding:
- Identity — Ed25519 digital signatures to guarantee the authenticity of both devices and data. Built with C/C++ FFI at its heart, allowing legacy industrial systems and robotics platforms to adopt secure identity without a full rewrite.
- Integrity — BLAKE3 hash chains to ensure data immutability. Provides a verifiable cryptographic record that can be validated locally or in the cloud, ensuring forensic readiness even in offline scenarios.
- Resilience — Store-and-forward offline buffering (OfflineBuffer with InMemoryBufferStore, and SQLite via the buffer-sqlite feature) is delivered in Phase 1, satisfying CLS-09. Intelligent data summarization for narrow-bandwidth environments (Phase 2, planned) will add priority queuing for limited links. See Roadmap.
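To make the Integrity pillar concrete, here is a minimal sketch of a tamper-evident hash chain: each record stores its payload hash plus a link to the previous record, so any edit, insertion, or deletion breaks verification from that point on. It uses std's DefaultHasher as a stand-in for BLAKE3 so it runs without external crates; the names (Record, append, verify_chain) are illustrative, not the library's API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest; the real library uses BLAKE3.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

struct Record {
    payload: Vec<u8>,
    payload_hash: u64,     // digest of the raw payload
    prev_record_hash: u64, // link to the previous record (0 for genesis)
}

// A record's chain hash covers both its payload hash and its back-link.
fn record_hash(r: &Record) -> u64 {
    digest(&[r.payload_hash.to_le_bytes(), r.prev_record_hash.to_le_bytes()].concat())
}

fn append(chain: &mut Vec<Record>, payload: &[u8]) {
    let prev = chain.last().map(record_hash).unwrap_or(0);
    chain.push(Record {
        payload: payload.to_vec(),
        payload_hash: digest(payload),
        prev_record_hash: prev,
    });
}

// Verify every payload hash and every back-link in order.
fn verify_chain(chain: &[Record]) -> bool {
    let mut prev = 0u64;
    for r in chain {
        if digest(&r.payload) != r.payload_hash || r.prev_record_hash != prev {
            return false;
        }
        prev = record_hash(r);
    }
    true
}

fn main() {
    let mut chain = Vec::new();
    append(&mut chain, b"inspection: door sensor OK");
    append(&mut chain, b"inspection: brake pads worn");
    assert!(verify_chain(&chain));

    // Tampering with an already-chained payload is detected.
    chain[0].payload = b"inspection: brake pads OK".to_vec();
    assert!(!verify_chain(&chain));
    println!("tampering detected");
}
```

The same shape generalises to signed records: the production design signs the payload hash, so forging a link would also require forging a signature.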
edgesentry-audit is the crate name. The Rust library is imported as edgesentry_audit (underscores). It includes all audit record types, hashing, signature verification, chain verification, ingestion-time verification, deduplication, sequence validation, persistence workflow, and the CLI.
License
This project is licensed under either of:
- Apache License, Version 2.0
- MIT License
at your option.
Roadmap
EdgeSentry-RS follows a phased approach: first establish the Singapore compliance baseline (CLS Level 2 → Level 3, SS 711:2025), then expand to Japan via GCLI mutual recognition (JC-STAR, Cyber Trust Mark), then achieve global convergence across EU, UK, and critical infrastructure markets. This mirrors the DuckDB model — build an embeddable OSS core that becomes a de facto standard through ecosystem adoption rather than lock-in.
Why Singapore First
Singapore’s CLS is directly derived from the European ETSI EN 303 645 standard. Japan’s JC-STAR similarly references ETSI EN 303 645 as its technical basis. This means the three regulatory regimes share a common foundation:
| Standard | Region | Based on |
|---|---|---|
| ETSI EN 303 645 | Europe (CRA) | Original |
| CLS Level 2/3/4 | Singapore | ETSI EN 303 645 |
| JC-STAR | Japan | ETSI EN 303 645 |
By implementing Singapore CLS compliance first, the majority of the technical work directly satisfies Japan’s JC-STAR and Europe’s CRA requirements. The Singapore gateway is not just a regional target — it is the fastest path to global compliance coverage.
GCLI and the Direct Japan-Singapore MoC
Japan signed the Global Cyber Labelling Initiative (GCLI) in 2025, joining 10 other countries including Singapore, UK, Finland, Germany, and Korea. GCLI establishes mutual recognition between national IoT security labels — a product certified under Singapore CLS is recognised as compliant with Japan’s JC-STAR without re-certification. This is the structural mechanism that makes the “Singapore first” strategy work as a Japan entry path.
In March 2026, Japan and Singapore reinforced this with a direct bilateral Memorandum of Cooperation (MoC) between METI/IPA (Japan) and CSA (Singapore), establishing direct mutual recognition of JC-STAR and CLS labels. The MoC takes effect on 1 June 2026. Under this arrangement a valid, current JC-STAR label is accepted as-is under CLS — no re-derivation of CLS compliance from JC-STAR data is required. Japan is the fifth country to achieve bilateral mutual recognition with Singapore CLS (after Finland, Germany, South Korea, and the UK).
Open question: The official level equivalence table mapping JC-STAR levels (STAR-1 through STAR-4) to CLS star levels (1–4) has not yet been published by CSA/METI. Monitor the CSA CLS page and METI/IPA JC-STAR page for this detail — it determines which JC-STAR level satisfies a given CLS target level.
Additional bilateral MRAs exist between Singapore CLS and Finland, Germany, and Korea. For Japanese customers already holding German or Korean IoT certification, these MRAs provide a fast-track CLS path.
SS 711:2025 Design Principles
Singapore’s national IoT standard SS 711:2025 (which replaces TR 64:2018 and underpins CLS Level 3 assessments) defines four security design principles. EdgeSentry-RS is designed around these:
| Principle | Requirement | Implementation |
|---|---|---|
| Secure by Default | Unique device identity, signed OTA | identity.rs (Ed25519), update.rs (signed update verification) |
| Rigour in Defence | STRIDE threat modelling, tamper detection | integrity.rs (BLAKE3 hash chain), STRIDE threat model artifacts |
| Accountability | Audit trail, operation logs | ingest/ (AuditLedger, OperationLog, IntegrityPolicyGate) |
| Resiliency | Deny-by-default networking, rate limiting | ingest/network_policy.rs (IP/CIDR allowlist) |
Implementation Mapping
For the detailed clause-by-clause mapping of CLS / ETSI EN 303 645 / JC-STAR requirements to source code, see the Compliance Traceability Matrix.
OSS scope
This repository implements the OSS audit layer: Ed25519 signing, BLAKE3 hash chain, ISO 19650 schema, and the eds verification CLI. All milestones in this document are open-source.
Commercial connectors (immugate WORM storage, CLS/JC-STAR compliance module, HSM key storage) are tracked in the commercial compliance layer.
Phase 1: The Singapore Gateway (Current – 6 Months)
Target: CLS Level 2 → Level 3, SS 711:2025, iM8
Deliver a software reference implementation that satisfies Singapore CLS Level 2 cyber hygiene requirements and advances to Level 3 with the SDL evidence artifacts (threat model, SBOM, binary analysis) that IMDA assessors require.
Milestone 1.1: Identity & Integrity Core ✅ Implemented
- edgesentry_rs::identity — Ed25519 device signature implementation
- edgesentry_rs::integrity — BLAKE3 hash chain tamper-detection protocol
- edgesentry_rs::ingest::NetworkPolicy — deny-by-default IP/CIDR allowlist (CLS-06)
Milestone 1.2: The C/C++ Bridge ✅ Implemented
- edgesentry-bridge — C-compatible FFI layer exposing Ed25519 signing, signature verification, and hash-chain validation to C/C++ projects
- Goal: inject Singapore-grade security into existing Japanese hardware (gateways, sensors) with minimal modification
- See C/C++ FFI Bridge for usage, linking instructions, and memory safety conventions
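The bridge pattern boils down to exposing a C-ABI symbol that C/C++ code can link against. The sketch below shows the shape with a hypothetical function; the symbol name eds_sketch_verify_payload and the DefaultHasher "digest" are invented stand-ins, not the actual edgesentry-bridge API.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in digest; the real bridge verifies BLAKE3 hashes and Ed25519 signatures.
fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

/// C ABI: returns 1 on match, 0 on mismatch, -1 on null input.
#[no_mangle]
pub extern "C" fn eds_sketch_verify_payload(ptr: *const u8, len: usize, expected_hash: u64) -> i32 {
    if ptr.is_null() {
        return -1;
    }
    // SAFETY: the caller guarantees `ptr` points to `len` readable bytes.
    let payload = unsafe { std::slice::from_raw_parts(ptr, len) };
    if digest(payload) == expected_hash { 1 } else { 0 }
}

fn main() {
    // Exercise the C-ABI function from Rust, as a C caller would.
    let data = b"sensor frame";
    let good = digest(data);
    assert_eq!(eds_sketch_verify_payload(data.as_ptr(), data.len(), good), 1);
    assert_eq!(eds_sketch_verify_payload(data.as_ptr(), data.len(), good ^ 1), 0);
    assert_eq!(eds_sketch_verify_payload(std::ptr::null(), 0, good), -1);
}
```

A C caller would declare something like `int32_t eds_sketch_verify_payload(const uint8_t*, size_t, uint64_t);` and link against the compiled cdylib; error codes rather than panics keep the boundary safe.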
Milestone 1.3: Compliance Mapping v1.0 ✅ Implemented
- Traceability matrix mapping Singapore CLS/iM8 clauses to source code: Compliance Traceability Matrix
Milestone 1.4: SBOM + Vendor Disclosure Checklist ✅ Implemented
IMDA’s IoT Cyber Security Guide requires a vendor disclosure checklist as CLS Level 3 assessment evidence. The five mandatory categories are: encryption support, identification and authentication, data protection, network protection, and lifecycle support (SBOM).
- CycloneDX JSON SBOM generated for all crates and published with each GitHub Release
- Vendor disclosure checklist responses documented for all five categories
- Responses mapped to implementation in the traceability matrix
- See SBOM and Vendor Disclosure and #92
Milestone 1.5: Transport Layer, Async Ingest & Offline Buffer ✅ Implemented
- async-ingest feature: AsyncIngestService<R, L, O> with &self signature for safe multi-task sharing via Arc — closed #115
- transport-http feature: axum-based POST /api/v1/ingest endpoint; source IP gated through NetworkPolicy before crypto verification; eds serve CLI — closed #116
- transport-tls feature: serve_tls() with rustls TLS 1.2/1.3; eds serve-tls --tls-cert / --tls-key CLI; satisfies CLS-05 HTTP channel confidentiality — closed #176
- transport-mqtt-tls feature: MqttTlsConfig with CA cert path, rustls-backed MQTTS via rumqttc; eds serve-mqtt --tls-ca-cert CLI; satisfies CLS-05 MQTT channel confidentiality — closed #180
- transport-mqtt feature: serve_mqtt() subscribes to a configurable topic, routes records through AsyncIngestService, publishes accept/reject to <topic>/response; eds serve-mqtt CLI — closed #146
- buffer module: OfflineBuffer<S> store-and-forward with pluggable BufferStore trait; InMemoryBufferStore default; SqliteBufferStore behind buffer-sqlite feature; satisfies CLS-09 resilience — closed #74
Milestone 1.6: STRIDE Threat Model + Binary Analysis Evidence ✅ Implemented
CLS Level 3 assessors expect recorded design artifacts, not just code. SS 711:2025 requires STRIDE-based threat modelling of all attack surfaces (API, communication, storage).
- STRIDE threat model covering: Spoofing (device identity), Tampering (audit records), Repudiation (operation logs), Information Disclosure (payload storage), Denial of Service (network policy), Elevation of Privilege (ingest gate) — see docs/src/threat_model.md
- Binary analysis evidence confirming no known CVEs in shipped crates (cargo-audit, cargo-deny)
- Threat model mitigations linked to traceability matrix entries — see docs/src/traceability.md (Rigour in Defence updated ✅)
- Japanese translation available at docs/ja/src/threat_model.md
- Closed: #93 via PR #143
Phase 2: Japan Adaptation via GCLI (6 – 12 Months)
Target: CLS Level 4, JC-STAR STAR-1/2, Cyber Trust Mark / ISO 27001
Milestone 2.0: Mutual Recognition Framework (GCLI + Japan-Singapore MoC) 🔲 Planned
Two complementary mechanisms enable Japan market entry without duplicate certification:
- GCLI — the multilateral framework (10+ countries) underpinning the overall Singapore-first strategy.
- Direct Japan-Singapore MoC (signed March 2026, effective 1 June 2026) — bilateral mutual recognition between JC-STAR and CLS. A valid JC-STAR label is accepted as-is under CLS; no re-mapping of certification data is required.
Deliverables for this milestone:
- Compliance pathway guide covering both the GCLI route and the direct MoC route for Japan-based customers
- JC-STAR label validation and attestation module (edgesentry_rs::compliance::jcstar) — see #121
- CLS ↔ JC-STAR level equivalence table (pending publication by CSA/METI; monitor CSA and METI/IPA pages)
- MRA fast-track guidance for customers holding Finnish, German, or Korean IoT certification
- See #94
Milestone 2.1: JC-STAR STAR-1/2 Alignment 🔲 Planned
- Self-checklist and implementation guidance based on Japan’s IoT Product Security Conformity Assessment criteria
- See #82
Milestone 2.2: Edge Intelligence 🔲 Planned
- edgesentry-summary — data summarisation logic for high-performance Japanese sensors over bandwidth-constrained links. See #83
- edgesentry-detector — local anomaly detection with signed audit evidence attached to results. See #84
Milestone 2.3: Cross-Border Education Program 🔲 Planned
- Joint technical white paper to help Japanese companies bid on Singapore public-infrastructure projects
- See #85
Milestone 2.4: Cyber Trust Mark / ISO 27001 Organisational Track 🔲 Planned
Singapore’s Cyber Trust Mark becomes mandatory for Critical Information Infrastructure (CII) operators from 2026–27. It is the organisational counterpart to CLS (which is product-level). B2B and government customers in Singapore will increasingly require vendors to support this track.
- Map EdgeSentry-RS implementation evidence to Cyber Trust Mark assessment categories
- ISO 27001 control alignment documentation
- See #95
Milestone 2.6: immugate WORM Storage Connector
Moved to the commercial compliance layer.
Milestone 2.7: ISO 19650 Information Container Schema 🔲 Planned
ISO 19650 defines the framework for managing information over the whole life cycle of a built asset using BIM. This milestone reframes each audit record as an ISO 19650 information container, enabling interoperability with third-party BIM tools and positioning the edgesentry-rs audit chain as a de facto standard for construction inspection traceability.
- edgesentry_rs::audit::iso19650 — information container payload schema (OSS)
- Structured BIM status transitions: WIP → Shared → Published, with signed state change records
- Conformant metadata fields (revision, suitability, classification) mapped to the existing hash-chain record format
- Interoperability documentation for third-party BIM tool integration
- This milestone is the audit-crate implementation of the ISO 19650 layer described in the Inspect roadmap
Milestone 2.5: CLS(MD) — Medical Device Variant 🔲 Planned
Singapore launched CLS for Medical Devices (CLS(MD)) in October 2024. If medical IoT is a target market, specific variant requirements apply.
- CLS(MD) gap analysis against current implementation
- Medical device–specific requirements identification
- See #96
Phase 3: Global Convergence — “The European Horizon” (12 – 24 Months)
Target: EU CRA, UK PSTI Act, IEC 62443-4-2 (CII/OT), CCoP 2.0
Milestone 3.1: EU CRA Compliance Research 🔲 Planned
- Full mapping to ETSI EN 303 645 as a passport for the European market
- The Singapore CLS foundation covers the majority of CRA requirements with minimal additional work
Milestone 3.2: UK PSTI Act Alignment 🔲 Planned
The UK Product Security and Telecommunications Infrastructure (PSTI) Act aligns with ETSI EN 303 645 and became effective January 2026. Given CLS compliance, this requires near-zero additional implementation.
- Gap analysis between CLS Level 3 and UK PSTI requirements
- PSTI compliance statement documentation
- See #97
Milestone 3.3: IEC 62443-4-2 + Hardware RoT 🔲 Planned
IEC 62443-4-2 governs component-level requirements for Critical Infrastructure (CII) and OT markets. It requires hardware Root of Trust (TPM/HSM), RBAC, and Privileged Access Management (PAM) — distinct from ETSI EN 303 645.
- IEC 62443-4-2 component requirement mapping
- HSM integration via edgesentry-bridge for hardware-backed key storage (CLS Level 4)
- RBAC/PAM design guidance for deployers
- See #54 and #98
Milestone 3.4: CCoP 2.0 / MTCS Tier 3 🔲 Planned
Singapore’s Cybersecurity Code of Practice 2.0 (CCoP 2.0) is the operational compliance requirement for CII sectors. MTCS Tier 3 applies if the platform has cloud or SaaS components targeting government contracts.
- CCoP 2.0 operational requirement mapping
- MTCS Tier 3 applicability assessment for cloud deployment scenarios
- See #99
Milestone 3.5: Formal Verification & Hardening 🔲 Planned
- Advanced memory safety and vulnerability hardening to withstand third-party binary analysis required for CLS Level 4
Milestone 3.6: Reference Architecture for AI Robotics 🔲 Planned
- Reference design for tamper-evident decision auditing in autonomous mobile robots (AMR) and inspection drones
Sustainable Ecosystem Strategy
Following the DuckDB model — a lightweight embeddable core that spreads via libraries rather than platforms:
- “In-Process” Security — Embed as a library inside existing C++ applications regardless of OS or hardware, just as DuckDB embeds inside Python and Java processes.
- Open Compliance — Open-source the “how to achieve security” knowledge, so no single vendor controls the compliance pathway; the standard becomes public infrastructure.
- Collaborative Learning — Provide a shared Rust codebase as a cross-company learning environment to develop the next generation of IoT security engineers.
Compliance Traceability Matrix
This page maps each Singapore CLS / iM8 clause and corresponding ETSI EN 303 645 provision to the source code that satisfies it. Japan JC-STAR cross-references and SS 711:2025 design principle alignment are included for each row.
Legend:
- ✅ Implemented
- ⚠️ Partial
- 🔲 Planned
- ➖ Not in scope
SS 711:2025 Design Principles Coverage
Singapore’s national IoT standard SS 711:2025 defines four principles. See the Roadmap for the full module mapping.
| Principle | SS 711:2025 Requirement | Status |
|---|---|---|
| Secure by Default | Unique device identity, signed OTA updates | ✅ identity.rs, update.rs |
| Rigour in Defence | STRIDE threat model, tamper detection | ✅ Hash chain (integrity.rs) + STRIDE threat model |
| Accountability | Audit trail, operation logs, RBAC design | ✅ ingest/ (AuditLedger, OperationLog) |
| Resiliency | Deny-by-default networking, DoS protection | ✅ ingest/network_policy.rs |
CLS Level 3 / ETSI EN 303 645 — Core Requirements
CLS-01 / §5.1 — No universal default passwords
| Item | Detail |
|---|---|
| JC-STAR | STAR-1 R3.1 |
| Requirement | Devices must not use universal default credentials |
| Status | ➖ Out of scope — this project implements software audit records, not device credential management |
CLS-02 / §5.2 — Implement a means to manage reports of vulnerabilities
| Item | Detail |
|---|---|
| JC-STAR | STAR-1 R4.1 |
| Requirement | A published, actionable vulnerability reporting channel with defined SLAs |
| Status | ✅ Implemented |
| Implementation | SECURITY.md — published disclosure policy with supported versions, private reporting via GitHub advisory, acknowledgement SLA (3 business days), patch SLA (30 days critical/high; 90 days medium/low), and defined in/out-of-scope |
| Implementation | GitHub private vulnerability reporting enabled — reporters use the Security Advisories form |
CLS-03 / §5.3 — Keep software updated
| Item | Detail |
|---|---|
| JC-STAR | STAR-2 R2.2 |
| Requirement | Software update packages must be signed and verified before installation |
| Status | ✅ Implemented |
| Implementation | UpdateVerifier::verify checks BLAKE3 payload hash then Ed25519 publisher signature before allowing installation; failed checks are logged as UpdateVerifyDecision::Rejected in UpdateVerificationLog (src/update.rs) |
| Tests | tests/unit/update_tests.rs — covers accepted path, tampered payload, invalid signature, unknown publisher, multi-publisher isolation |
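The hash-then-signature ordering described above can be sketched as follows. This is a toy model, not the UpdateVerifier implementation: digest() and pseudo_sign() stand in for BLAKE3 and Ed25519 (no real cryptography), and the names only mirror the concepts in the table.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

fn digest(bytes: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    bytes.hash(&mut h);
    h.finish()
}

// Toy stand-in for an Ed25519 signature over the payload hash.
fn pseudo_sign(publisher_key: u64, payload_hash: u64) -> u64 {
    digest(&[publisher_key.to_le_bytes(), payload_hash.to_le_bytes()].concat())
}

#[derive(Debug, PartialEq)]
enum Decision {
    Accepted,
    Rejected(&'static str),
}

fn verify_update(payload: &[u8], claimed_hash: u64, sig: u64, publisher_key: u64) -> Decision {
    // 1. Integrity first: recompute the payload hash and compare.
    if digest(payload) != claimed_hash {
        return Decision::Rejected("payload hash mismatch");
    }
    // 2. Then authenticity: signature over the hash by a known publisher.
    if pseudo_sign(publisher_key, claimed_hash) != sig {
        return Decision::Rejected("invalid publisher signature");
    }
    Decision::Accepted
}

fn main() {
    let key = 42u64;
    let payload = b"firmware v2.1";
    let hash = digest(payload);
    let sig = pseudo_sign(key, hash);

    assert_eq!(verify_update(payload, hash, sig, key), Decision::Accepted);
    // A tampered payload is rejected before any signature work happens.
    assert_eq!(
        verify_update(b"firmware v6.6", hash, sig, key),
        Decision::Rejected("payload hash mismatch")
    );
    // A signature from an unknown publisher key fails the second check.
    assert_eq!(
        verify_update(payload, hash, pseudo_sign(7, hash), key),
        Decision::Rejected("invalid publisher signature")
    );
}
```

Rejections map naturally onto an audit log entry, matching the UpdateVerifyDecision::Rejected flow the table describes.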
CLS-04 / §5.4 — Securely store sensitive security parameters
| Item | Detail |
|---|---|
| JC-STAR | STAR-1 R1.2 |
| Requirement | Private keys must be stored securely; a key registration process must exist |
| Status | ✅ Implemented |
| Implementation | Public key registry: IntegrityPolicyGate::register_device (src/ingest/policy.rs:20) |
| Implementation | Key generation CLI: eds keygen (src/lib.rs — generate_keypair) |
| Implementation | Key inspection CLI: eds inspect-key (src/lib.rs — inspect_key) |
| Implementation | Provisioning and rotation guidance: Key Management |
| Note | HSM-backed key storage (CLS Level 4) is planned in #54 |
CLS-05 / §5.5 — Communicate securely
| Item | Detail |
|---|---|
| JC-STAR | STAR-1 R1.1 |
| Requirement | Data must be transmitted with authenticity guarantees |
| Status | ✅ Implemented |
| Implementation — record authenticity | Every AuditRecord carries an Ed25519 signature over its BLAKE3 payload hash — build_signed_record (src/agent.rs), sign_payload_hash (src/identity.rs:12) |
| Implementation — channel confidentiality (HTTP) | transport-tls feature: serve_tls() with rustls TLS 1.2/1.3, IP allowlist enforced before handshake, eds serve-tls --tls-cert / --tls-key CLI — closed #176 (src/transport/tls.rs) |
| Implementation — channel confidentiality (MQTT) | transport-mqtt-tls feature: MqttTlsConfig with CA cert path, rustls ClientConfig via rumqttc::TlsConfiguration::Rustls, eds serve-mqtt --tls-ca-cert CLI — closed #180 (src/transport/mqtt.rs) |
CLS-06 / §5.6 — Minimise exposed attack surfaces
| Item | Detail |
|---|---|
| JC-STAR | STAR-1 R3.2 |
| Requirement | Only necessary interfaces and services should be exposed |
| Status | ✅ Implemented |
| Implementation — IP allowlist | NetworkPolicy provides deny-by-default IP/CIDR allowlist enforcement (src/ingest/network_policy.rs) |
| Implementation — HTTP transport | ingest_handler enforces NetworkPolicy::check(source_ip) before any crypto verification; returns 403 Forbidden for unlisted sources (src/transport/http.rs) |
| Implementation — MQTT transport | serve_mqtt exposes a single subscribe-only topic; no administrative interface; broker-level ACLs recommended (src/transport/mqtt.rs) |
| Note | Network-level controls (VPN, firewall rules) remain the deployer’s responsibility |
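A deny-by-default IPv4 allowlist of the kind described for CLS-06 can be sketched with std::net alone. The types below are simplified stand-ins (assuming prefix lengths of at most 32), not the NetworkPolicy API.

```rust
use std::net::Ipv4Addr;

struct Cidr {
    network: Ipv4Addr,
    prefix_len: u8, // assumed 0..=32
}

impl Cidr {
    fn contains(&self, ip: Ipv4Addr) -> bool {
        if self.prefix_len == 0 {
            return true; // 0.0.0.0/0 matches everything
        }
        let mask = u32::MAX << (32 - self.prefix_len as u32);
        (u32::from(ip) & mask) == (u32::from(self.network) & mask)
    }
}

struct AllowlistSketch {
    allow: Vec<Cidr>, // empty list means everything is denied
}

impl AllowlistSketch {
    // Deny by default: a source passes only if some CIDR explicitly allows it.
    fn check(&self, source: Ipv4Addr) -> bool {
        self.allow.iter().any(|c| c.contains(source))
    }
}

fn main() {
    let policy = AllowlistSketch {
        allow: vec![Cidr { network: Ipv4Addr::new(10, 0, 0, 0), prefix_len: 24 }],
    };
    assert!(policy.check(Ipv4Addr::new(10, 0, 0, 17)));  // inside 10.0.0.0/24
    assert!(!policy.check(Ipv4Addr::new(192, 0, 2, 1))); // denied by default
}
```

Running this check before any signature verification, as the table describes, means unapproved sources cost no cryptographic work at all.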
CLS-07 / §5.7 — Ensure software integrity
| Item | Detail |
|---|---|
| JC-STAR | STAR-1 R1.3 |
| Requirement | The device must verify the integrity of software and data |
| Status | ✅ Implemented |
| Implementation — payload hash | BLAKE3 hash over raw payload: compute_payload_hash (src/integrity.rs:12) |
| Implementation — hash chain | prev_record_hash links each record to its predecessor; insertion/deletion detected by verify_chain (src/integrity.rs:35) |
| Tests | tampered_lift_demo_chain_is_detected (src/lib.rs:338) |
CLS-08 / §5.8 — Ensure that personal data is secure
| Item | Detail |
|---|---|
| JC-STAR | STAR-2 R4.1 |
| Requirement | Personal data transmitted or stored must be protected |
| Status | ➖ Out of scope — audit records do not contain personal data in the current implementation |
CLS-09 / §5.9 — Make systems resilient to outages
| Item | Detail |
|---|---|
| JC-STAR | STAR-2 R3.2 |
| Requirement | The device should remain operational and recover gracefully |
| Status | ⚠️ Partial |
| Implementation | OfflineBuffer<S> accumulates signed records during connectivity loss and replays them in insertion order via flush when the link recovers. Duplicate records from replay are treated as already-accepted and do not cause failures (src/buffer/mod.rs) |
| Implementation | Pluggable BufferStore trait — volatile InMemoryBufferStore (default) and durable SqliteBufferStore behind the buffer-sqlite feature flag |
| Gap | Full HA (active–active failover, network-level redundancy) remains the deployer’s responsibility |
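The store-and-forward semantics above can be sketched with a simplified trait: records queue while the link is down and replay in insertion order on flush, with a failed delivery retained for the next attempt. This is a toy model of the behaviour, not the actual OfflineBuffer or BufferStore API.

```rust
use std::collections::VecDeque;

// Toy stand-in for the pluggable store behind the buffer.
trait Store {
    fn push_back(&mut self, record: String);
    fn push_front(&mut self, record: String);
    fn pop_front(&mut self) -> Option<String>;
}

struct MemStore(VecDeque<String>);

impl Store for MemStore {
    fn push_back(&mut self, record: String) { self.0.push_back(record); }
    fn push_front(&mut self, record: String) { self.0.push_front(record); }
    fn pop_front(&mut self) -> Option<String> { self.0.pop_front() }
}

struct BufferSketch<S: Store> {
    store: S,
}

impl<S: Store> BufferSketch<S> {
    // Replay buffered records in insertion order; on the first delivery
    // failure, put the record back at the front and stop, so nothing is
    // lost and ordering is preserved for the next flush.
    fn flush(&mut self, send: &mut dyn FnMut(&str) -> bool) -> usize {
        let mut sent = 0;
        while let Some(record) = self.store.pop_front() {
            if send(&record) {
                sent += 1;
            } else {
                self.store.push_front(record);
                break;
            }
        }
        sent
    }
}

fn main() {
    let mut buf = BufferSketch { store: MemStore(VecDeque::new()) };
    for r in ["seq=1", "seq=2", "seq=3"] {
        buf.store.push_back(r.to_string());
    }

    // Link down: nothing delivered, nothing lost.
    assert_eq!(buf.flush(&mut |_| false), 0);

    // Link restored: records replay in insertion order.
    let mut delivered = Vec::new();
    assert_eq!(buf.flush(&mut |r| { delivered.push(r.to_string()); true }), 3);
    assert_eq!(delivered, ["seq=1", "seq=2", "seq=3"]);
}
```

Replaying in order is what lets the ingest side treat duplicates as already-accepted rather than as errors, as noted in the table.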
CLS-10 / §5.10 — Examine system telemetry data
| Item | Detail |
|---|---|
| JC-STAR | STAR-2 R3.1 |
| Requirement | Security-relevant events must be logged and replay/reorder attacks must be detected |
| Status | ✅ Implemented |
| Implementation — sequence | Strict monotonic sequence per device; duplicates and out-of-order records rejected by IngestState::verify_and_accept (src/ingest/verify.rs:45) |
| Implementation — audit trail | Accept/reject decisions persisted via IngestService and AuditLedger (src/ingest/storage.rs) |
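The per-device sequence rule reduces to: track the highest accepted sequence per device and reject anything at or below it, which covers both replays and reordering. The sketch below is a toy model collapsing the duplicate and ordering checks into one gate; it is not the IngestState API.

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum Decision {
    Accepted,
    RejectedDuplicate,
    RejectedOutOfOrder,
}

#[derive(Default)]
struct SequenceGate {
    // Highest sequence number accepted so far, per device.
    last_seq: HashMap<String, u64>,
}

impl SequenceGate {
    fn verify_and_accept(&mut self, device_id: &str, seq: u64) -> Decision {
        match self.last_seq.get(device_id).copied() {
            Some(last) if seq == last => Decision::RejectedDuplicate,
            Some(last) if seq < last => Decision::RejectedOutOfOrder,
            _ => {
                // First record for the device, or a strictly newer sequence.
                self.last_seq.insert(device_id.to_string(), seq);
                Decision::Accepted
            }
        }
    }
}

fn main() {
    let mut gate = SequenceGate::default();
    assert_eq!(gate.verify_and_accept("lift-01", 1), Decision::Accepted);
    assert_eq!(gate.verify_and_accept("lift-01", 2), Decision::Accepted);
    assert_eq!(gate.verify_and_accept("lift-01", 2), Decision::RejectedDuplicate);  // replay
    assert_eq!(gate.verify_and_accept("lift-01", 1), Decision::RejectedOutOfOrder); // reorder
    assert_eq!(gate.verify_and_accept("lift-02", 1), Decision::Accepted);           // independent device
}
```

In production each decision would also be written to the audit trail, so accept/reject outcomes are themselves evidence.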
CLS-11 / §5.11 — Make it easy for users to delete user data
| Item | Detail |
|---|---|
| JC-STAR | — |
| Requirement | Users should be able to delete personal data |
| Status | ➖ Out of scope |
CLS Level 4 — Additional Requirements
CLS Level 4 — Hardware Security Module (HSM)
| Item | Detail |
|---|---|
| JC-STAR | STAR-2 R1.4 |
| Requirement | Private keys must be stored and used inside an HSM |
| Status | 🔲 Planned |
| Gap | HSM-backed key storage planned for Phase 3 (IEC 62443-4-2 / CII/OT). See #54 and #98 |
JC-STAR Additional Requirements
STAR-1 R2.1 — Replay and reorder prevention
| Item | Detail |
|---|---|
| CLS | CLS-10 |
| Requirement | Replay attacks must be detected and rejected |
| Status | ✅ Implemented |
| Implementation | seen HashSet in IngestState rejects duplicate (device_id, sequence) pairs (src/ingest/verify.rs:56) |
Coverage Summary
| Level | Total clauses | ✅ Implemented | ⚠️ Partial | 🔲 Planned | ➖ Out of scope |
|---|---|---|---|---|---|
| CLS Level 3 | 11 | 7 | 1 | 0 | 3 |
| CLS Level 4 | 1 | 0 | 0 | 1 | 0 |
| JC-STAR additions | 1 | 1 | 0 | 0 | 0 |
Note: “Out of scope” clauses cover device-level concerns (passwords, network interfaces, personal data) that are the responsibility of the deployer, not the audit-record library.
STRIDE Threat Model
This document is a formal threat-modelling artifact produced for Singapore CLS Level 3 assessment under SS 711:2025 Rigour in Defence and the IMDA IoT Cyber Security Guide threat-modelling checklist. It covers all attack surfaces of the EdgeSentry-RS system: API, communication channel, and storage.
Methodology: STRIDE (Microsoft)
Scope: edgesentry-rs library and edgesentry-bridge FFI crate — device-side signing, cloud-side ingest, HTTP transport, operation log, and audit ledger.
Assessor reference: SS 711:2025 §4.2 Rigour in Defence; IMDA IoT Cyber Security Guide §3 Threat Modelling Checklist
System Overview
┌─────────────────────────────────────────────────────────────────┐
│ Field Device (edge) │
│ ┌────────────────────────────────────────────────────────────┐ │
│ │ build_signed_record() │ │
│ │ payload → BLAKE3 hash → Ed25519 sign → AuditRecord │ │
│ └────────────────────────────────────────────────────────────┘ │
└────────────────────────────┬────────────────────────────────────┘
│ POST /api/v1/ingest (JSON over HTTPS)
▼
┌─────────────────────────────────────────────────────────────────┐
│ Cloud Ingest Layer │
│ ┌────────────────┐ ┌─────────────────┐ ┌─────────────────┐ │
│ │ NetworkPolicy │ │ IntegrityPolicy │ │ AsyncIngest │ │
│ │ IP/CIDR gate │→ │ Gate │→ │ Service │ │
│ │ (deny-default) │ │ (signature + │ │ (hash chain + │ │
│ └────────────────┘ │ chain verify) │ │ sequence) │ │
│ └─────────────────┘ └────────┬────────┘ │
│ │ │
│ ┌────────────────────────────────────────┤ │
│ ▼ ▼ ▼ │
│ ┌──────────────────┐ ┌─────────────────────┐ ┌──────────┐ │
│ │ Raw Data Store │ │ Audit Ledger │ │ Op. Log │ │
│ │ (S3 / memory) │ │ (Postgres / memory)│ │ │ │
│ └──────────────────┘ └─────────────────────┘ └──────────┘ │
└─────────────────────────────────────────────────────────────────┘
STRIDE Threat Analysis
S — Spoofing (Device Identity)
Threat: An attacker impersonates a legitimate field device by forging the device_id field or replaying records signed by a compromised key.
Attack surface: POST /api/v1/ingest — AuditRecord.device_id and AuditRecord.signature fields.
| Sub-threat | Description |
|---|---|
| S-1 | Attacker sends records with a valid device_id but self-generated Ed25519 key (unregistered) |
| S-2 | Attacker replays a previously captured, legitimately signed record |
| S-3 | Attacker sends records with a forged device_id that does not match the signing key |
Mitigations:
| ID | Mitigation | Code location |
|---|---|---|
| M-S-1 | Device public keys are pre-registered on the cloud side; any signature that does not verify against the registered key is rejected with IngestError::UnknownDevice | ingest/policy.rs IntegrityPolicyGate::enforce() |
| M-S-2 | Monotonic sequence numbers and prev_record_hash chain continuity are enforced; replayed records are detected as duplicate sequences | ingest/verify.rs check_sequence() |
| M-S-3 | Ed25519 signatures bind the payload hash to the private key; a forged device_id with the wrong key fails signature verification | identity.rs verify_payload_signature() |
Residual risk: If a device’s private key is physically extracted, records can be forged with valid signatures. Hardware-backed key storage (TPM/SE) is a device-layer control outside the scope of this library; it is noted in the Roadmap.
T — Tampering (Audit Records)
Threat: An attacker modifies an audit record or its raw payload in transit or at rest.
Attack surface: Wire format (JSON body), raw data store (S3 objects), audit ledger (database rows).
| Sub-threat | Description |
|---|---|
| T-1 | Attacker modifies raw_payload_hex in the HTTP request body |
| T-2 | Attacker modifies AuditRecord.payload_hash to match a different payload |
| T-3 | Attacker flips bytes in a stored S3 object after accepted ingest |
| T-4 | Attacker modifies prev_record_hash to break or redirect the chain |
Mitigations:
| ID | Mitigation | Code location |
|---|---|---|
| M-T-1 | On every ingest the cloud recomputes BLAKE3(raw_payload) and compares it to record.payload_hash; mismatch → PayloadHashMismatch rejection | ingest/storage.rs IngestService::ingest() |
| M-T-2 | payload_hash is covered by the Ed25519 signature; if the hash is changed the signature no longer verifies | identity.rs verify_payload_signature() |
| M-T-3 | Post-ingest tampering of stored objects is detectable by re-verifying the hash from the ledger against the object content; this is an operational control described in the Operations Runbook | |
| M-T-4 | prev_record_hash is validated against the previous accepted record’s hash(); a break in continuity rejects all subsequent records | ingest/verify.rs check_chain_link() |
Residual risk: Tampering of stored objects after acceptance is a storage-layer concern. Enabling S3 Object Lock (WORM) or database row-level checksums at the deployment layer eliminates this residual.
R — Repudiation (Operation Logs)
Threat: A device or operator denies that a specific ingest event occurred, or claims a record was never sent / was rejected without evidence.
Attack surface: OperationLog entries written during ingest; audit ledger append operations.
| Sub-threat | Description |
|---|---|
| R-1 | Device claims a record was never submitted |
| R-2 | Operator claims a record was rejected when it was accepted (or vice versa) |
| R-3 | Operation log entries are deleted or modified after the fact |
Mitigations:
| ID | Mitigation | Code location |
|---|---|---|
| M-R-1 | Every ingest attempt — accepted or rejected — writes an OperationLogEntry with device_id, sequence, decision, and message; the log is written before the ingest function returns | ingest/storage.rs log_acceptance() / log_rejection() |
| M-R-2 | IngestDecision::Accepted / Rejected is persisted to the operation log atomically with the decision; the record’s signed hash serves as cryptographic proof of submission | ingest/storage.rs OperationLogEntry |
| M-R-3 | Append-only operation logs (Postgres INSERT-only pattern; no DELETE/UPDATE on log rows) prevent after-the-fact modification | ingest/storage.rs PostgresOperationLog; enforcement at the DB-user permission level |
Residual risk: The library provides the operation log data; protecting that data from privileged insider deletion requires database-level controls (role separation, audit logging at the DB layer).
I — Information Disclosure (Payload Storage)
Threat: Sensitive inspection payload data is exposed to an unauthorised party.
Attack surface: HTTP request body (raw_payload_hex), raw data store (S3), audit ledger, operation log.
| Sub-threat | Description |
|---|---|
| I-1 | Eavesdropping on the HTTP transport channel |
| I-2 | Unauthorised read access to S3 objects or Postgres rows |
| I-3 | Payload bytes appear in error messages or logs |
Mitigations:
| ID | Mitigation | Code location |
|---|---|---|
| M-I-1 | The HTTP transport is designed to run behind TLS termination (load balancer / Nginx / Cloudflare); raw payload is hex-encoded in the JSON body and must be carried over HTTPS | transport/http.rs — TLS is a deployment-layer control; noted in Operations Runbook |
| M-I-2 | Raw payloads are stored by object_ref under the caller-specified key; access control is enforced by the storage layer (S3 bucket policy, Postgres GRANT); the library does not expose read APIs to unauthenticated callers | ingest/storage.rs RawDataStore::put() |
| M-I-3 | Error messages include device_id and sequence but never the raw payload bytes; tracing spans log payload_bytes length only | ingest/storage.rs #[instrument(skip(raw_payload))] |
Residual risk: Encryption at rest for S3 objects and Postgres rows is a deployment-layer control (S3 SSE-KMS, Postgres pgcrypto or TDE). TLS 1.3 for the ingest HTTP endpoint is addressed in the Roadmap (issue #73).
D — Denial of Service (Network Policy)
Threat: An attacker floods the ingest endpoint to exhaust resources and prevent legitimate devices from submitting records.
Attack surface: POST /api/v1/ingest HTTP endpoint; NetworkPolicy check; AsyncIngestService tokio task pool.
| Sub-threat | Description |
|---|---|
| D-1 | High-volume requests from untrusted IPs overwhelm the handler |
| D-2 | Large raw_payload_hex values exhaust memory |
| D-3 | Malformed JSON bodies consume parse time |
Mitigations:
| ID | Mitigation | Code location |
|---|---|---|
| M-D-1 | NetworkPolicy deny-by-default: all IPs and CIDR ranges are blocked unless explicitly allowlisted; unapproved source IPs receive 403 Forbidden before any cryptographic work is performed | ingest/network_policy.rs NetworkPolicy::check(); transport/http.rs handler |
| M-D-2 | Axum’s default request body size limit (2 MB) caps payload size; the raw_payload_hex field is bounded by the HTTP body limit | transport/http.rs — axum default body limit |
| M-D-3 | JSON deserialization errors return 400 Bad Request immediately; no downstream processing occurs | transport/http.rs — axum Json extractor |
Residual risk: Rate limiting per source IP and per device is not yet implemented in the library layer; it should be added at the reverse proxy or API gateway layer in production deployments. Issue #73 (TLS, P2) is the planned follow-up milestone.
E — Elevation of Privilege (Ingest Gate)
Threat: An attacker bypasses the ingest validation gate to write arbitrary records to the ledger or raw data store.
Attack surface: IntegrityPolicyGate, ingest_handler, and the service registration API (register_device).
| Sub-threat | Description |
|---|---|
| E-1 | Attacker calls ingest with a record for an unregistered device and succeeds |
| E-2 | Attacker submits a record with a valid sequence/chain for a device they do not control |
| E-3 | Attacker registers a malicious device by calling register_device directly |
Mitigations:
| ID | Mitigation | Code location |
|---|---|---|
| M-E-1 | IntegrityPolicyGate::enforce() is called unconditionally before any storage write; unknown devices fail with IngestError::UnknownDevice | ingest/policy.rs |
| M-E-2 | Signature verification uses the registered public key for device_id; a valid chain cannot be forged without the device’s private key | identity.rs verify_payload_signature() |
| M-E-3 | register_device is a privileged operation called only by the application layer at startup; the HTTP ingest handler does not expose device registration over the network | transport/http.rs — no registration endpoint; ingest/storage.rs AsyncIngestService::register_device() |
Residual risk: If the application layer that calls register_device is compromised, arbitrary devices can be registered. This is an operational security control: registration should be gated behind a separate privileged API with strong authentication.
Binary Analysis Evidence
cargo audit — Advisory Database Scan
Command and output captured at document generation time (advisory database commit: current):
cargo audit
Result: All detected advisories are pre-approved in deny.toml (see table below):
| Advisory | Crate | Version | Status | Reason |
|---|---|---|---|---|
| RUSTSEC-2026-0049 | rustls-webpki | 0.101.7 | Ignored (#125) | Pinned by aws-smithy-http-client legacy hyper-rustls 0.24 → rustls 0.21 chain; no 0.101.x patch exists. The 0.103.x instance in the tree is updated to 0.103.10. |
| RUSTSEC-2026-0049 | rustls-webpki | 0.102.8 | Ignored (#166) | Pinned by rumqttc 0.25 → rustls 0.22 chain; fix requires rumqttc to adopt rustls 0.23+. No CRL revocation calls in the codebase; unexploitable as-is. |
All remaining scanned crate dependencies: no known CVEs.
To reproduce:
cargo install cargo-audit --locked
cargo audit
cargo deny check — Policy Enforcement
Command:
cargo deny check
Result: advisories ok, bans ok, licenses ok, sources ok
The deny.toml policy enforces:
- Advisories: all vulnerabilities denied by default except explicitly ignored entries with documented reasons
- Bans: multiple crate versions warned; wildcard dependencies warned
- Licenses: only MIT, Apache-2.0, BSD-2-Clause, BSD-3-Clause, Unicode-3.0, CC0-1.0, Zlib permitted; one exception: `cbindgen` (MPL-2.0, build-only header generator — copyleft does not extend to generated artifacts or source)
- Sources: only `crates.io` and trusted git sources
To reproduce:
cargo install cargo-deny --locked
cargo deny check
Threat-to-Mitigation Traceability Summary
| STRIDE Category | Threat ID | Mitigation ID | Source File | Status |
|---|---|---|---|---|
| Spoofing | S-1 | M-S-1 | ingest/policy.rs | ✅ |
| Spoofing | S-2 | M-S-2 | ingest/verify.rs | ✅ |
| Spoofing | S-3 | M-S-3 | identity.rs | ✅ |
| Tampering | T-1 | M-T-1 | ingest/storage.rs | ✅ |
| Tampering | T-2 | M-T-2 | identity.rs | ✅ |
| Tampering | T-3 | M-T-3 | Operational control | ⚠️ Deployment |
| Tampering | T-4 | M-T-4 | ingest/verify.rs | ✅ |
| Repudiation | R-1 | M-R-1 | ingest/storage.rs | ✅ |
| Repudiation | R-2 | M-R-2 | ingest/storage.rs | ✅ |
| Repudiation | R-3 | M-R-3 | DB permission layer | ⚠️ Deployment |
| Information Disclosure | I-1 | M-I-1 | Deployment (TLS) | ⚠️ #73 |
| Information Disclosure | I-2 | M-I-2 | Storage access control | ⚠️ Deployment |
| Information Disclosure | I-3 | M-I-3 | ingest/storage.rs | ✅ |
| Denial of Service | D-1 | M-D-1 | ingest/network_policy.rs, transport/http.rs | ✅ |
| Denial of Service | D-2 | M-D-2 | transport/http.rs (axum body limit) | ✅ |
| Denial of Service | D-3 | M-D-3 | transport/http.rs | ✅ |
| Elevation of Privilege | E-1 | M-E-1 | ingest/policy.rs | ✅ |
| Elevation of Privilege | E-2 | M-E-2 | identity.rs | ✅ |
| Elevation of Privilege | E-3 | M-E-3 | transport/http.rs | ✅ |
Legend: ✅ Implemented in library code — ⚠️ Deployment-layer control (outside library scope)
SBOM and Vendor Disclosure Checklist
This page satisfies the IMDA IoT Cyber Security Guide lifecycle support evidence requirement for Singapore CLS Level 3 assessment. It covers the SBOM format, generation procedure, and vendor disclosure checklist responses for the five mandatory categories.
Software Bill of Materials (SBOM)
Format
EdgeSentry-RS publishes SBOMs in CycloneDX JSON format (spec version 1.3), generated from Cargo.lock at release time using cargo-cyclonedx.
Published artifacts
Each GitHub Release includes two SBOM files as release assets. Download them from the Releases page:
https://github.com/edgesentry/edgesentry-rs/releases/tag/v<version>
| File | Scope |
|---|---|
| `edgesentry-rs-<version>.cdx.json` | edgesentry-rs crate and all transitive dependencies |
| `edgesentry-bridge-<version>.cdx.json` | edgesentry-bridge C/C++ FFI crate and its dependencies |
For example, for v0.1.2:
https://github.com/edgesentry/edgesentry-rs/releases/download/v0.1.2/edgesentry-rs-0.1.2.cdx.json
https://github.com/edgesentry/edgesentry-rs/releases/download/v0.1.2/edgesentry-bridge-0.1.2.cdx.json
Generating the SBOM locally
cargo install cargo-cyclonedx --locked
cargo cyclonedx --format json --all
# Output: crates/edgesentry-rs/edgesentry-rs.cdx.json
# crates/edgesentry-bridge/edgesentry-bridge.cdx.json
Inspecting dependency counts
Run after generating to see the current component count (changes with every dependency update):
cargo cyclonedx --format json --all
python3 -c "
import json
for f in ['crates/edgesentry-rs/edgesentry-rs.cdx.json',
          'crates/edgesentry-bridge/edgesentry-bridge.cdx.json']:
    bom = json.load(open(f))
    print(f\"{f}: {len(bom.get('components', []))} components\")
"
Continuous supply-chain monitoring
- `cargo-audit` — run on every CI build and PR; checks all dependencies against the RustSec Advisory Database
- `cargo-deny` — enforces licence policy and bans on every CI build
- Dependabot — weekly automated dependency version update PRs
Vendor Disclosure Checklist
The IMDA IoT Cyber Security Guide requires responses across five categories. The table below documents EdgeSentry-RS’s position for each.
1. Encryption Support
| Item | Response |
|---|---|
| Algorithms used | Ed25519 (signing), BLAKE3 (hashing) |
| Key length | Ed25519: 256-bit; BLAKE3 output: 256-bit |
| Random number generation | OS CSPRNG via rand::OsRng — no custom RNG |
| Transport encryption | Record-level: Ed25519 signature over payload hash. Native TLS transport is provided: eds serve-tls --tls-cert / --tls-key (rustls TLS 1.2/1.3, HTTP) and eds serve-mqtt --tls-ca-cert (MQTT over TLS). See CLS-05 in the Traceability Matrix. |
| Key storage | Public-key registry in memory (IntegrityPolicyGate); private key files managed by the deployer. HSM-backed storage planned: #54 |
| Implementation | crates/edgesentry-rs/src/identity.rs, crates/edgesentry-rs/src/integrity.rs |
2. Identification and Authentication
| Item | Response |
|---|---|
| Device authentication method | Ed25519 asymmetric key pair: device signs each record with its private key; cloud verifies against the registered public key |
| Credential storage | Private key held exclusively on the device; public key registered on the cloud side via IntegrityPolicyGate::register_device |
| Default credentials | None — each device generates a unique keypair via eds keygen |
| Brute-force protection | Signature verification is a single constant-time operation; no credential-based login surface exists |
| Route identity enforcement | cert_identity parameter in IngestService::ingest — mismatch between TLS client certificate identity and record.device_id causes immediate rejection |
| Implementation | crates/edgesentry-rs/src/identity.rs, crates/edgesentry-rs/src/ingest/policy.rs |
3. Data Protection
| Item | Response |
|---|---|
| Data in transit | Every AuditRecord carries an Ed25519 signature over its BLAKE3 payload hash — authenticity guaranteed at the record level regardless of transport |
| Data at rest | Raw payloads stored via RawDataStore (S3/MinIO); audit records via AuditLedger (PostgreSQL). Encryption at rest is the deployer’s responsibility (S3 SSE, Postgres column encryption) |
| Personal data | AuditRecord contains no personal data fields by design — object_ref points to a storage key; the payload body is stored separately |
| Data minimisation | Audit metadata (payload_hash, signature, prev_record_hash) is separated from payload body — cloud stores only the hash chain; raw data stored independently via object_ref |
| Implementation | crates/edgesentry-rs/src/record.rs, crates/edgesentry-rs/src/ingest/storage.rs |
4. Network Protection
| Item | Response |
|---|---|
| Unnecessary ports/services | Library only — no network service is opened by edgesentry-rs. Transport is the deployer’s responsibility |
| Deny-by-default network policy | NetworkPolicy enforces an IP/CIDR allowlist; check(source_ip) is called before any cryptographic operation — all unlisted sources are rejected |
| DoS resilience | NetworkPolicy gate rejects unlisted sources before any cryptographic processing, limiting the attack surface. Full rate-limiting is a deployer concern |
| Implementation | crates/edgesentry-rs/src/ingest/network_policy.rs |
| CLS reference | CLS-06 / ETSI EN 303 645 §5.6 |
5. Lifecycle Support
| Item | Response |
|---|---|
| Vulnerability reporting | GitHub private vulnerability reporting enabled. See SECURITY.md — SLA: acknowledge 3 business days; patch 30 days (critical/high), 90 days (medium/low) |
| SBOM availability | CycloneDX JSON published with every GitHub Release (see above) |
| Dependency advisory scanning | cargo-audit on every CI build + PR against RustSec Advisory DB |
| End-of-life policy | edgesentry-rs v0.x: current version supported. Security updates are patch releases |
| Software update integrity | UpdateVerifier checks BLAKE3 payload hash and Ed25519 publisher signature before any update is applied — see CLS-03 |
| Supported versions | See SECURITY.md |
| CLS reference | CLS-02 / ETSI EN 303 645 §5.2 |
Traceability
This document satisfies Milestone 1.4 in the Roadmap. For the full clause-by-clause compliance mapping see the Compliance Traceability Matrix.
Concepts in edgesentry-rs
This document summarizes the core concepts used in this repository.
1. Tamper-evident design
The primary goal is not “perfect tamper prevention,” but “reliable tamper detection.”
- Compute a hash from the original payload
- Sign the hash with a device private key
- Link records through a hash chain
Together, these mechanisms detect tampering, spoofing, and record reordering.
2. AuditRecord
The basic unit of evidence is AuditRecord. Key fields:
- `device_id`: source device identity
- `sequence`: monotonically increasing sequence number
- `timestamp_ms`: event timestamp
- `payload_hash`: hash of raw payload data
- `signature`: signature over `payload_hash`
- `prev_record_hash`: hash of the previous audit record
- `object_ref`: reference to raw payload storage (for example, `s3://...`)
3. Hash and signature
3.1 Hash (integrity)
- Purpose: fingerprint of payload content
- Property: even a 1-byte payload change produces a different hash
3.2 Signature (authenticity)
- Purpose: prove the payload hash was produced by a trusted device key
- Verification: validate with the registered device public key
4. Hash chain continuity
Records are linked by prev_record_hash.
- First record: `prev_record_hash = zero_hash`
- Subsequent records: `prev_record_hash` must match the previous record’s `hash()`
This detects insertion, deletion, and substitution inside the chain.
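The linkage rule can be sketched in plain Rust. This is a minimal illustration, not the crate's implementation: `DefaultHasher` stands in for BLAKE3, and `Record` is a stripped-down stand-in for `AuditRecord`.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for the real BLAKE3 record hash; only the linkage matters here.
fn record_hash(payload: &[u8], prev: u64) -> u64 {
    let mut h = DefaultHasher::new();
    payload.hash(&mut h);
    prev.hash(&mut h);
    h.finish()
}

// Stripped-down stand-in for AuditRecord.
struct Record {
    payload: Vec<u8>,
    prev_record_hash: u64,
}

// The first record must chain from a zero hash; every later record must
// reference the hash of the previous accepted record.
fn verify_chain(records: &[Record]) -> bool {
    let mut expected_prev = 0u64; // zero_hash for the first record
    for r in records {
        if r.prev_record_hash != expected_prev {
            return false; // insertion, deletion, or substitution detected
        }
        expected_prev = record_hash(&r.payload, r.prev_record_hash);
    }
    true
}
```

Because each record's hash covers its predecessor's hash, changing any record in the middle invalidates every link after it.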
5. Sequence policy
`sequence` must increase per device as 1, 2, 3, …
- Duplicate sequence values are rejected
- Gaps or out-of-order sequences are rejected
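A per-device sequence gate following this policy can be sketched in a few lines; `SequenceGate` is an illustrative name, not the crate's API.

```rust
use std::collections::HashMap;

// Tracks the last accepted sequence per device and accepts only last + 1:
// duplicates, gaps, and out-of-order values are all rejected.
struct SequenceGate {
    last: HashMap<String, u64>,
}

impl SequenceGate {
    fn new() -> Self {
        Self { last: HashMap::new() }
    }

    fn check(&mut self, device_id: &str, sequence: u64) -> bool {
        let expected = self.last.get(device_id).copied().unwrap_or(0) + 1;
        if sequence != expected {
            return false; // duplicate, gap, or out of order
        }
        self.last.insert(device_id.to_string(), sequence);
        true
    }
}
```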
6. Software update integrity
Before a device applies any firmware or software update, the update package must pass two checks via edgesentry_rs::update::UpdateVerifier:
- Payload hash — `BLAKE3(raw_payload)` must match the hash embedded in the `SoftwareUpdate` manifest
- Publisher signature — the Ed25519 signature over that hash must verify against a registered trusted publisher key
Every attempt (accepted or rejected) is appended to UpdateVerificationLog for auditing. This satisfies CLS-03 / ETSI EN 303 645 §5.3 / JC-STAR STAR-2 R2.2.
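The two-check shape can be sketched as follows. This is a simplification with hypothetical names: `DefaultHasher` stands in for BLAKE3, and the `signature_ok` closure stands in for Ed25519 verification against the registered publisher key.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in for BLAKE3 in this sketch.
fn payload_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

// An update is applied only if BOTH checks pass: the recomputed payload
// hash matches the manifest, and the publisher signature over that hash
// verifies (stubbed here as a closure).
fn verify_update(
    raw_payload: &[u8],
    manifest_hash: u64,
    signature_ok: impl Fn(u64) -> bool,
) -> bool {
    let computed = payload_hash(raw_payload);
    computed == manifest_hash && signature_ok(computed)
}
```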
7. Network policy (deny-by-default)
edgesentry_rs::ingest::NetworkPolicy enforces a deny-by-default IP/CIDR allowlist for incoming connections. Callers call NetworkPolicy::check(source_ip) before passing a record to IngestService. Connections from unlisted addresses are rejected without reaching any cryptographic check.
Rules are additive: allow_ip(addr) for exact matches and allow_cidr("10.0.0.0/8") for CIDR blocks (IPv4 and IPv6). An empty policy denies everything.
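The deny-by-default behaviour can be illustrated with a small std-only allowlist. This sketch is IPv4-only for brevity and uses hypothetical names (`AllowList`), not the crate's `NetworkPolicy` implementation.

```rust
use std::net::Ipv4Addr;

// Deny-by-default: a source is accepted only if it matches an exact IP
// or falls inside an allowed CIDR block. An empty policy denies everything.
struct AllowList {
    exact: Vec<Ipv4Addr>,
    cidrs: Vec<(Ipv4Addr, u8)>, // (network, prefix length)
}

impl AllowList {
    fn new() -> Self {
        Self { exact: Vec::new(), cidrs: Vec::new() }
    }

    fn allow_ip(&mut self, ip: Ipv4Addr) {
        self.exact.push(ip);
    }

    fn allow_cidr(&mut self, net: Ipv4Addr, prefix: u8) {
        self.cidrs.push((net, prefix));
    }

    fn check(&self, src: Ipv4Addr) -> bool {
        if self.exact.contains(&src) {
            return true;
        }
        self.cidrs.iter().any(|&(net, p)| {
            // Build the network mask from the prefix length.
            let mask = if p == 0 { 0 } else { u32::MAX << (32 - p) };
            (u32::from(src) & mask) == (u32::from(net) & mask)
        })
    }
}
```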
8. Ingest-time verification
edgesentry_rs::ingest is responsible for completing trust checks before persistence.
The full check order when ingesting a record is:
1. Network gate — `NetworkPolicy::check(source_ip)` denies unlisted sources before any crypto runs
2. Payload hash — `IngestService` verifies the raw payload matches `record.payload_hash`
3. Route identity — `cert_identity` must match `record.device_id` when present
4. Signature — the payload hash must be signed by the registered device key
5. Sequence — must be strictly monotonic and non-duplicate per device
6. Previous-record hash — must chain from the last accepted record’s hash
Steps 3–6 are enforced by IntegrityPolicyGate; step 2 by IngestService before invoking the gate.
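For orientation, the six steps can be condensed into one short-circuiting function. The error variants below are illustrative only; they are not the crate's real `IngestError` type.

```rust
#[derive(Debug, PartialEq)]
enum GateError {
    NetworkDenied,
    HashMismatch,
    IdentityMismatch,
    BadSignature,
    BadSequence,
    BrokenChain,
}

// Pre-computed check outcomes, standing in for the real verifications.
#[derive(Clone, Copy)]
struct Checks {
    network_ok: bool,
    hash_ok: bool,
    identity_ok: bool,
    signature_ok: bool,
    sequence_ok: bool,
    chain_ok: bool,
}

// Each gate short-circuits: a record never reaches signature verification
// if the source IP or the payload hash has already failed.
fn ingest(c: &Checks) -> Result<(), GateError> {
    if !c.network_ok   { return Err(GateError::NetworkDenied); }
    if !c.hash_ok      { return Err(GateError::HashMismatch); }
    if !c.identity_ok  { return Err(GateError::IdentityMismatch); }
    if !c.signature_ok { return Err(GateError::BadSignature); }
    if !c.sequence_ok  { return Err(GateError::BadSequence); }
    if !c.chain_ok     { return Err(GateError::BrokenChain); }
    Ok(())
}
```

The ordering matters operationally: the cheap network check runs first, so flood traffic from unlisted sources never triggers cryptographic work.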
9. Storage model
On accepted ingest, the system stores:
- Raw data (payload body)
- Audit ledger (audit record stream)
- Operation log (accept/reject decisions)
This separation keeps evidence metadata and payload storage independently manageable.
10. Demo modes
10.1 Library example (no DB/MinIO required)
- Run: `cargo run -p edgesentry-rs --example lift_inspection_flow`
- Uses in-memory stores
- Fast path to verify signing, ingest verification, and tamper rejection
10.2 Interactive local demo (DB/MinIO required)
- Run: `bash scripts/local_demo.sh`
- End-to-end flow with PostgreSQL + MinIO + CLI
- Shows persisted audit records and operation logs
11. Trust boundary
- Device side: signs facts and emits compact audit metadata
- Cloud side: enforces strict verification rules before accepting data
This split keeps edge and cloud responsibilities clear and auditable.
12. Quality and release concepts
- Static analysis: `clippy`
- OSS license policy validation: `cargo-deny`
- Advisory scanning: `cargo-audit` (CVE checks against the RustSec advisory DB)
- Release readiness: CI + release workflows
- Tag-driven release: `vX.Y.Z`
See Contributing and Build and Release for executable procedures.
13. STRIDE threat model
SS 711:2025 and the IMDA IoT Cyber Security Guide require recorded STRIDE-based threat model artifacts for CLS Level 3 assessment. The six threat categories map to EdgeSentry-RS attack surfaces as follows:
| Threat | Attack surface | Mitigation |
|---|---|---|
| Spoofing | Device identity | Ed25519 signature — only the registered public key can verify a record |
| Tampering | Audit records, payload storage | BLAKE3 hash chain — any modification breaks chain continuity |
| Repudiation | Ingest decisions | OperationLog records every accept/reject decision with reason |
| Information Disclosure | Raw payload storage | object_ref separation keeps payload body out of the audit metadata stream |
| Denial of Service | Ingest endpoint | NetworkPolicy deny-by-default rejects unlisted sources before any crypto runs |
| Elevation of Privilege | Ingest gate | IntegrityPolicyGate verifies device registration and signature before accepting data |
Producing the formal design artifact for CLS Level 3 assessment is tracked in #93.
14. SBOM (Software Bill of Materials)
A Software Bill of Materials lists all software components and their versions used in a product. The IMDA IoT Cyber Security Guide requires SBOM availability as part of the lifecycle support category in the vendor disclosure checklist — a mandatory CLS Level 3 evidence artifact.
For Rust projects, SBOM is generated from Cargo.lock using tools such as cargo-sbom or cargo-cyclonedx, producing a machine-readable inventory of all crates and their transitive dependencies.
Generating and publishing the SBOM alongside the vendor disclosure checklist is tracked in #92.
Architecture
Device Side vs Cloud Side
This system assumes a public-infrastructure IoT deployment where field devices (for example, lift inspection devices) send inspection evidence to cloud services.
Device side (resource-constrained edge)
The device-side responsibility is implemented by edgesentry_rs::build_signed_record and related functions.
- Generate inspection event payloads (door check, vibration check, emergency brake check)
- Compute `payload_hash` (BLAKE3)
- Sign the hash using an Ed25519 private key
- Link each event to the previous record hash (`prev_record_hash`) so records form a chain
- Send only compact audit metadata plus an object reference (`object_ref`) to keep edge-side cost low
Cloud side (verification and trust enforcement)
The cloud-side responsibility is implemented by edgesentry_rs::ingest and related modules.
- Gate incoming connections to approved IP addresses and CIDR ranges (`NetworkPolicy::check`) — deny-by-default
- Verify that the device is known (`device_id` -> public key)
- Verify signature validity for each incoming record
- Enforce sequence monotonicity and reject duplicates
- Enforce hash-chain continuity (`prev_record_hash` must match the previous record hash)
- Reject tampered, replayed, or reordered data before persistence
Shared trust logic
All hashing and verification rules live in the same edgesentry-rs crate, keeping logic identical across edge and cloud usage.
Resource-Constrained Device Design
The device-side design is intentionally lightweight so it can be adapted to Cortex-M class environments.
- Small cryptographic footprint: records store fixed-size hashes (`[u8; 32]`) and signatures (`[u8; 64]`)
- Minimal compute path: hash and sign only; no heavy server-side validation logic on device
- Compact wire-format readiness: record structure is deterministic and serializable (`serde` + `postcard` support in core)
- Offload heavy work to the cloud: duplicate detection, sequence policy checks, and full-chain verification are cloud concerns
- Tamper-evident by construction: a one-byte modification breaks signature checks or chain continuity
Concrete Design Flow
1. Device creates event payload `D`.
2. Device computes `H = hash(D)` and signs `H` → signature `S`.
3. Device emits `AuditRecord { device_id, sequence, timestamp_ms, payload_hash=H, signature=S, prev_record_hash, object_ref }`.
4. Cloud verifies the signature with the registered public key.
5. Cloud verifies the sequence and previous-hash link.
6. If any check fails, ingest is rejected; otherwise the record is accepted.
In short, the edge signs facts, and the cloud enforces continuity and authenticity.
Notarization Metadata Schema
For AI inference results to serve as legally admissible evidence (BCA/CONQUAS inspection reports, MPA ship certificates, MLIT near-visual-inspection equivalence), the audit record payload must capture five categories of provenance metadata in addition to the cryptographic chain. This is the target schema for the notarization connector.
| Category | Fields | Purpose |
|---|---|---|
| Sensor | sensor_id, calibration_ts, firmware_version, sampling_rate | Prove the measuring instrument was calibrated and operating within spec at capture time |
| AI model | model_uuid, model_arch, weight_sha256, prompt_version | Enable third-party reproduction of the same inference output from the same input (AI Verify Outcome 3.1 / 3.5) |
| Compute environment | device_type, os_version, dependency_hashes, hw_temp_c | Full runtime reproducibility; hardware temperature flags thermal throttling that could affect inference timing |
| Context | ntp_ts, gps_lat_lon (or indoor position), input_data_hash | Bind the record to a specific physical location and moment; input_data_hash prevents payload substitution |
| Inference process | confidence_score, preprocessing_algo, guardrail_actions | Support human-in-the-loop triage (AI Verify Outcome 4.5); low-confidence records can be routed for manual review |
These fields are stored in the payload object alongside the domain-specific detection data. The payload_hash in AuditRecord covers the entire payload, so any metadata field change invalidates the signature.
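One possible Rust shape for the five categories is sketched below. The field names follow the table; the types are illustrative assumptions, not a published schema.

```rust
// Hypothetical payload-side metadata struct; every field participates in
// payload_hash, so changing any of them invalidates the record signature.
#[derive(Debug, Default)]
struct NotarizationMetadata {
    // Sensor
    sensor_id: String,
    calibration_ts: u64,
    firmware_version: String,
    sampling_rate: f64,
    // AI model
    model_uuid: String,
    model_arch: String,
    weight_sha256: [u8; 32],
    prompt_version: String,
    // Compute environment
    device_type: String,
    os_version: String,
    dependency_hashes: Vec<[u8; 32]>,
    hw_temp_c: f32,
    // Context
    ntp_ts: u64,
    gps_lat_lon: Option<(f64, f64)>, // None for indoor positioning
    input_data_hash: [u8; 32],
    // Inference process
    confidence_score: f32,
    preprocessing_algo: String,
    guardrail_actions: Vec<String>,
}
```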
ALCOA+ alignment: The five categories map directly to the ALCOA+ data integrity framework required for regulatory submissions — Attributable (sensor/model identity), Legible (structured JSON), Contemporaneous (ntp_ts), Original (input_data_hash), Accurate (weight_sha256, calibration_ts), plus Complete, Consistent, Enduring, and Available (covered by the WORM storage connector).
Ingest Service: Sync and Async Paths
edgesentry-rs provides two orchestration service types for cloud-side ingest, selectable by feature flag:
| Type | Feature flag | Thread model | Suitable for |
|---|---|---|---|
| `IngestService` | (always available) | Blocking / sync | CLI tools, embedded runtimes |
| `AsyncIngestService` | `async-ingest` | async/await (tokio) | HTTP servers, async pipelines |
Sync path (IngestService)
The synchronous service is the default and requires no additional features. S3 writes (when s3 feature is active) are performed by block_on-ing inside an embedded tokio::runtime::Runtime. This is appropriate for single-threaded tools and embedded environments.
#![allow(unused)]
fn main() {
let mut svc = IngestService::new(policy, raw_store, ledger, op_log);
svc.register_device("lift-01", verifying_key);
svc.ingest(record, payload, None)?;
}
Async path (AsyncIngestService)
Enable with features = ["async-ingest"]. All storage calls use .await so the calling thread is never blocked, enabling high-concurrency pipelines. The policy gate is wrapped in a tokio::sync::Mutex so the service can be shared across tasks via Arc.
#![allow(unused)]
fn main() {
let svc = Arc::new(AsyncIngestService::new(policy, raw_store, ledger, op_log));
svc.register_device("lift-01", verifying_key).await;
svc.ingest(record, payload, None).await?;
}
When s3 and async-ingest are both active, S3CompatibleRawDataStore implements AsyncRawDataStore by calling the AWS SDK future directly — no embedded runtime needed.
Feature flag summary
| Flag | What it adds |
|---|---|
| `async-ingest` | `AsyncRawDataStore`, `AsyncAuditLedger`, `AsyncOperationLogStore` traits; `AsyncIngestService`; in-memory async stores; tokio (sync + macros) |
| `s3` | `S3CompatibleRawDataStore` (sync); when combined with `async-ingest`, also implements `AsyncRawDataStore` |
| `postgres` | `PostgresAuditLedger`, `PostgresOperationLog` (sync) |
| `transport-http` | `transport::http::serve()` — axum-based `POST /api/v1/ingest` server; `eds serve` CLI subcommand |
| `transport-mqtt` | `transport::mqtt::serve_mqtt()` — async rumqttc event loop; subscribes to a topic, routes records through `AsyncIngestService`, publishes accept/reject responses |
Transport Layer
The transport module provides network-facing ingest endpoints built on top of AsyncIngestService.
HTTP (transport-http feature)
Enable with features = ["transport-http"]. This brings in axum 0.8 and exposes a single POST /api/v1/ingest endpoint.
Request / Response
| Field | Type | Description |
|---|---|---|
| `record` | `AuditRecord` (JSON) | The signed audit record from the device |
| `raw_payload_hex` | String | Hex-encoded raw payload bytes |
| Status | Meaning |
|---|---|
| `202 Accepted` | Record passed all checks and was stored |
| `400 Bad Request` | `raw_payload_hex` is not valid hex |
| `403 Forbidden` | Client IP is not in the `NetworkPolicy` allowlist |
| `422 Unprocessable Entity` | Record failed signature, hash, or chain verification |
Usage
#![allow(unused)]
fn main() {
use edgesentry_rs::{
AsyncIngestService, AsyncInMemoryRawDataStore, AsyncInMemoryAuditLedger,
AsyncInMemoryOperationLog, IntegrityPolicyGate, NetworkPolicy,
};
use edgesentry_rs::transport::http::serve;
let mut policy = IntegrityPolicyGate::new();
policy.register_device("lift-01", verifying_key);
let mut network_policy = NetworkPolicy::new();
network_policy.allow_cidr("10.0.0.0/8").unwrap();
let service = AsyncIngestService::new(
policy,
AsyncInMemoryRawDataStore::default(),
AsyncInMemoryAuditLedger::default(),
AsyncInMemoryOperationLog::default(),
);
let addr = "0.0.0.0:8080".parse().unwrap();
serve(service, network_policy, addr).await?;
}
CLI
eds serve \
--addr 0.0.0.0:8080 \
--allowed-sources 10.0.0.0/8,127.0.0.1 \
--device lift-01=<pubkey_hex>
MQTT (transport-mqtt feature)
Enable with features = ["transport-mqtt"]. This brings in rumqttc and exposes serve_mqtt() — a fully async event loop that connects to an MQTT broker, subscribes to a configurable ingest topic, and routes every incoming message through AsyncIngestService.
The message format is the same JSON envelope used by the HTTP transport:
{ "record": { "device_id": "...", "sequence": 1, ... }, "raw_payload_hex": "deadbeef..." }
Accept / reject outcomes are published on <topic>/response:
{ "device_id": "...", "sequence": 1, "status": "accepted" }
{ "device_id": "...", "sequence": 1, "status": "rejected", "error": "..." }
Usage
#![allow(unused)]
fn main() {
use edgesentry_rs::transport::mqtt::{MqttIngestConfig, serve_mqtt};
use edgesentry_rs::{
AsyncIngestService, AsyncInMemoryRawDataStore, AsyncInMemoryAuditLedger,
AsyncInMemoryOperationLog, IntegrityPolicyGate,
};
let service = AsyncIngestService::new(
IntegrityPolicyGate::new(),
AsyncInMemoryRawDataStore::default(),
AsyncInMemoryAuditLedger::default(),
AsyncInMemoryOperationLog::default(),
);
let config = MqttIngestConfig::new("mqtt.example.com", "devices/+/ingest", "edgesentry-cloud");
serve_mqtt(config, service).await?;
}
serve_mqtt runs until the broker connection is lost, returning MqttServeError::EventLoop. Wrap the call in a retry loop for automatic reconnection.
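A retry loop of that kind boils down to the generic pattern below; `attempt` stands in for the `serve_mqtt` call, and a real deployment would also sleep with backoff between tries.

```rust
// Retries `attempt` until it returns Ok (clean shutdown) or the retry
// budget is exhausted. In production, `attempt` would await serve_mqtt
// and the loop would sleep with backoff before reconnecting.
fn run_with_retry<F>(mut attempt: F, max_tries: u32) -> Result<(), String>
where
    F: FnMut() -> Result<(), String>,
{
    let mut last_err = String::from("never attempted");
    for _ in 0..max_tries {
        match attempt() {
            Ok(()) => return Ok(()),  // clean shutdown: stop retrying
            Err(e) => last_err = e,   // connection lost: try again
        }
    }
    Err(last_err)
}
```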
Key behaviors
| Behavior | Detail |
|---|---|
| Malformed JSON | Message is logged and discarded; event loop continues |
| Invalid hex payload | Message is logged and discarded; event loop continues |
| Ingest rejection | Response published on <topic>/response with "status": "rejected" |
| Response publish failure | Logged as a warning; does not stop the event loop |
Library Usage Example
Run the end-to-end lift inspection example implemented directly with library APIs:
Prerequisites:
- Rust toolchain (`cargo`)
- PostgreSQL / MinIO are not required for this example (it uses in-memory stores)
cargo run -p edgesentry-rs --example lift_inspection_flow
Scenario covered by the sample:
- Register one lift device public key in `IntegrityPolicyGate`
- Generate three signed inspection records with `build_signed_record`
- Ingest all records via `IngestService` (accepted path)
- Tamper one record (`payload_hash`) and confirm rejection
- Print stored audit records and operation logs
What it demonstrates:
- Record signing with `edgesentry_rs::build_signed_record`
- Ingestion verification with `edgesentry_rs::ingest::IngestService`
- Tampering rejection (modified `payload_hash`)
- Audit records and operation-log output
Source:
crates/edgesentry-rs/examples/lift_inspection_flow.rs
Three-Role Distributed Demo
For a more realistic view of the edge-to-cloud flow, three separate examples can be run in sequence. Each example owns exactly one role:
| Example | Role | External deps |
|---|---|---|
| `edge_device` | Signs records, writes `/tmp/eds_*.json` | None |
| `edge_gateway` | Routes records, no crypto verification | None |
| `cloud_backend` | `NetworkPolicy` + `IngestService` + storage | None (in-memory) or PostgreSQL + MinIO (`--features s3,postgres`) |
Run in order:
cargo run -p edgesentry-rs --example edge_device
cargo run -p edgesentry-rs --example edge_gateway
cargo run -p edgesentry-rs --example cloud_backend
Each example reads the output files of the previous one from /tmp/. The full sequence with real backends (requires Docker — see Interactive Demo):
cargo run -p edgesentry-rs --example edge_device
cargo run -p edgesentry-rs --example edge_gateway
cargo run -p edgesentry-rs --features s3,postgres --example cloud_backend
What the sequence demonstrates:
- `edge_device` — device-side signing with `build_signed_record`; a tampered copy is written for the rejection demo
- `edge_gateway` — the gateway receives records but does NOT verify signatures (routing-only responsibility)
- `cloud_backend` — `NetworkPolicy::check` runs before every `IngestService::ingest`; accepted and rejected records are both visible
Sources:
- `crates/edgesentry-rs/examples/edge_device.rs`
- `crates/edgesentry-rs/examples/edge_gateway.rs`
- `crates/edgesentry-rs/examples/cloud_backend.rs`
S3 / MinIO Switching
edgesentry-rs supports a switchable S3-compatible raw-data backend behind the s3 feature.
- `S3Backend::AwsS3`: use AWS S3 (default AWS credential chain, or an optional static key)
- `S3Backend::Minio`: use MinIO (custom endpoint + static access key/secret)
The ingest layer is coded against a common raw-data storage abstraction, while concrete configuration selects AWS S3 or MinIO without changing ingest business logic.
Use these types from edgesentry_rs:
- `S3ObjectStoreConfig::for_aws_s3(...)`
- `S3ObjectStoreConfig::for_minio(...)`
- `S3CompatibleRawDataStore::new(config)`
Build and test with the S3 feature enabled:
cargo test -p edgesentry-rs --features s3
To run the S3 integration tests against a live MinIO instance, set the environment variables and run the dedicated test file:
TEST_S3_ENDPOINT=http://localhost:9000 \
TEST_S3_ACCESS_KEY=minioadmin \
TEST_S3_SECRET_KEY=minioadmin \
TEST_S3_BUCKET=bucket \
cargo test -p edgesentry-rs --features s3 --test integration -- --nocapture
Tests skip automatically when any of the four TEST_S3_* variables are unset.
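The skip behaviour follows a common pattern: read every required variable up front and bail out early if any is missing. A minimal sketch (the helper name `s3_test_config` is illustrative, not the crate's actual test code):

```rust
use std::env;

// Returns Some(config) only when every TEST_S3_* variable is set;
// otherwise None, and the caller skips the integration test.
fn s3_test_config() -> Option<(String, String, String, String)> {
    let endpoint = env::var("TEST_S3_ENDPOINT").ok()?;
    let access = env::var("TEST_S3_ACCESS_KEY").ok()?;
    let secret = env::var("TEST_S3_SECRET_KEY").ok()?;
    let bucket = env::var("TEST_S3_BUCKET").ok()?;
    Some((endpoint, access, secret, bucket))
}

fn main() {
    match s3_test_config() {
        Some((endpoint, _, _, bucket)) => {
            println!("running S3 tests against {endpoint}, bucket {bucket}");
        }
        None => println!("TEST_S3_* not fully set, skipping S3 integration tests"),
    }
}
```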
Interactive Local Demo
Note: unlike the library-only example, this demo requires PostgreSQL and MinIO.
Three-role model
EdgeSentry-RS is designed around three distinct roles. Understanding which role each step belongs to is key to reading the demo output correctly.
| Role | Responsibility | In this demo |
|---|---|---|
| Edge device | Signs inspection records with an Ed25519 private key and emits them toward the cloud | examples/edge_device.rs |
| Edge gateway | Forwards signed records from the device to the cloud over HTTPS / MQTT; does not verify content | examples/edge_gateway.rs — HTTP transport is out of scope; files on disk simulate the transport |
| Cloud backend | Enforces NetworkPolicy (CLS-06), runs IntegrityPolicyGate (route identity → signature → sequence → hash-chain), and persists accepted records | examples/cloud_backend.rs with --features s3,postgres |
What this demo does
The script starts Docker services and then runs the three role examples in sequence:
| Step | Role | What happens |
|---|---|---|
| 1–3 | Infrastructure | Start PostgreSQL + MinIO via Docker Compose; wait for health checks |
| 4 | Edge device | edge_device — sign 3 records, write /tmp/eds_*.json |
| 5 | Edge gateway | edge_gateway — read device output, forward unchanged to /tmp/eds_fwd_*.json |
| 6 | Cloud backend | cloud_backend — NetworkPolicy check → IngestService → PostgreSQL + MinIO; also shows tamper rejection |
| 7 | Cloud backend | Query persisted audit records and operation log from PostgreSQL |
| 8 | Infrastructure | Stop Docker services |
Prerequisites:
- Docker / Docker Compose
- Rust toolchain (`cargo`)
Run end-to-end demo:
bash scripts/local_demo.sh
The script pauses after each step and waits for Enter (or OK) before proceeding.
At the end of the flow, it runs a shutdown step (docker compose -f docker-compose.local.yml down).
Running individual role examples
Each example can also be run standalone without Docker (using in-memory storage for the cloud backend):
# Step 1: edge device signs records
cargo run -p edgesentry-rs --example edge_device
# Step 2: edge gateway forwards records
cargo run -p edgesentry-rs --example edge_gateway
# Step 3a: cloud backend (in-memory — no Docker required)
cargo run -p edgesentry-rs --example cloud_backend
# Step 3b: cloud backend (PostgreSQL + MinIO — requires Docker)
cargo run -p edgesentry-rs --features s3,postgres --example cloud_backend
Each example reads the output files of the previous one from /tmp/. Run them in order.
Manual inspection
Connect to PostgreSQL after step 6:
docker exec -it edgesentry-rs-postgres psql -U trace -d trace_audit
Inside psql:
SELECT id, device_id, sequence, object_ref, ingested_at FROM audit_records ORDER BY sequence;
SELECT id, decision, device_id, sequence, message, created_at FROM operation_logs ORDER BY id;
MinIO endpoints:
- API: http://localhost:9000
- Console: http://localhost:9001
- Default credentials: minioadmin / minioadmin
- Bucket created by setup container: bucket
Manually stop the local backend (only if you abort the script midway):
docker compose -f docker-compose.local.yml down
Next steps
Ready to move beyond the local demo? See the Production Deployment Guide for TLS certificate management, PostgreSQL tuning, S3/MinIO lifecycle rules, systemd service units, and horizontal scaling.
Production Deployment Guide
This guide covers moving from the local Docker Compose demo to a production-grade deployment of eds serve (HTTP/TLS) and eds serve-mqtt. For the local quickstart, see Interactive Demo. For observability, alerting, and backup/restore procedures, see Operations Runbook.
Prerequisites
| Component | Minimum version | Notes |
|---|---|---|
| edgesentry-rs binary | current main | Built with --features transport-http,transport-tls for HTTPS; add transport-mqtt for MQTT |
| PostgreSQL | 14 | Audit ledger and operation log |
| S3-compatible store | — | AWS S3, MinIO ≥ RELEASE.2023, or Cloudflare R2 |
| (Optional) MQTT broker | Mosquitto ≥ 2.0 | Required only for eds serve-mqtt |
1 — TLS Certificate Management
1.1 Provisioning with Let’s Encrypt (recommended)
# Install certbot
apt install certbot
# Issue a certificate for the ingest endpoint
certbot certonly --standalone \
-d ingest.example.com \
--agree-tos --non-interactive \
-m ops@example.com
# Certificates are written to:
# /etc/letsencrypt/live/ingest.example.com/fullchain.pem (cert + chain)
# /etc/letsencrypt/live/ingest.example.com/privkey.pem (private key)
1.2 Starting eds serve-tls with TLS
eds serve-tls \
--addr 0.0.0.0:8443 \
--tls-cert /etc/letsencrypt/live/ingest.example.com/fullchain.pem \
--tls-key /etc/letsencrypt/live/ingest.example.com/privkey.pem \
--allowed-sources 10.0.0.0/8 \
--device lift-01=<PUBLIC_KEY_HEX>
eds serve-tls enforces TLS 1.2 minimum and TLS 1.3 preferred via rustls. No extra configuration is needed.
1.3 Certificate rotation (zero-downtime)
eds serve-tls reads the certificate files at startup only. For rotation without downtime:
# 1. Renew the certificate
certbot renew --quiet
# 2. Restart the process so it re-reads the certificate files
systemctl restart edgesentry
# — or, without systemd —
kill -TERM $(pidof eds)
# Process exits cleanly; supervisor / systemd restarts it and picks up the new cert
Add a cron/systemd timer to automate renewal:
# /etc/systemd/system/certbot.timer
[Timer]
OnCalendar=weekly
Persistent=true
[Install]
WantedBy=timers.target
systemctl enable --now certbot.timer
1.4 Self-signed certificates (internal / air-gapped deployments)
# Generate a 10-year self-signed certificate
openssl req -x509 -newkey rsa:4096 -sha256 -days 3650 \
-nodes -keyout server.key -out server.crt \
-subj "/CN=ingest.internal" \
-addext "subjectAltName=IP:10.0.1.5,DNS:ingest.internal"
Distribute server.crt to all edge devices as the trusted CA.
2 — PostgreSQL: Schema, Indexes, and Connection Sizing
2.1 Schema migration
The schema is in db/init/001_schema.sql. Apply it against your production database:
psql "$DATABASE_URL" -f db/init/001_schema.sql
The schema is idempotent (CREATE TABLE IF NOT EXISTS) and safe to re-run.
2.2 Recommended indexes
The base schema ships with a UNIQUE (device_id, sequence) constraint which doubles as a B-tree index and rejects replay attacks at the database level. Add the following indexes for common query patterns:
-- Fast lookup of the latest record per device (chain-head queries)
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_audit_device_seq
ON audit_records (device_id, sequence DESC);
-- Time-range queries for compliance reporting
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_audit_ingested_at
ON audit_records (ingested_at);
-- Operation log filtering by decision type
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_oplog_decision_device
ON operation_logs (decision, device_id, created_at DESC);
CONCURRENTLY means these can be created without locking the table in production.
2.3 Connection pool sizing
PostgresAuditLedger and PostgresOperationLog each open one synchronous connection via the postgres crate. For multi-node deployments (see §5) each eds process holds two connections. Set max_connections in postgresql.conf to accommodate:
max_connections = 2 × <number of eds instances> + 10 # headroom for psql, monitoring
For high ingest rates (> 500 records/s), replace the sync backends with an async connection pool (e.g. sqlx + PgPool) as a custom AsyncAuditLedger implementation.
2.4 Partitioning for long-term retention
Partition audit_records by ingested_at when the table is expected to exceed 100 M rows:
-- Convert to range-partitioned table (run once, before data accumulates)
CREATE TABLE audit_records_new (LIKE audit_records INCLUDING ALL)
PARTITION BY RANGE (ingested_at);
CREATE TABLE audit_records_2026_q1
PARTITION OF audit_records_new
FOR VALUES FROM ('2026-01-01') TO ('2026-04-01');
-- Attach, swap, drop
ALTER TABLE audit_records RENAME TO audit_records_old;
ALTER TABLE audit_records_new RENAME TO audit_records;
DROP TABLE audit_records_old;
3 — Object Storage: Bucket Policy and Lifecycle Rules
3.1 AWS S3 — bucket policy (least privilege)
Create a dedicated IAM role for the ingest service with write-only access:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "IngestWriteOnly",
"Effect": "Allow",
"Action": ["s3:PutObject"],
"Resource": "arn:aws:s3:::edgesentry-audit/*"
},
{
"Sid": "ListBucket",
"Effect": "Allow",
"Action": ["s3:ListBucket"],
"Resource": "arn:aws:s3:::edgesentry-audit"
}
]
}
Attach a separate read-only role to compliance auditors.
3.2 Lifecycle rules (retention + cost management)
{
"Rules": [
{
"Id": "TransitionToIA",
"Status": "Enabled",
"Filter": { "Prefix": "" },
"Transitions": [
{ "Days": 90, "StorageClass": "STANDARD_IA" },
{ "Days": 365, "StorageClass": "GLACIER_IR" }
]
},
{
"Id": "ExpireOldObjects",
"Status": "Enabled",
"Filter": { "Prefix": "" },
"Expiration": { "Days": 2555 }
}
]
}
Apply via CLI:
aws s3api put-bucket-lifecycle-configuration \
--bucket edgesentry-audit \
--lifecycle-configuration file://lifecycle.json
3.3 MinIO (on-premises)
# Create bucket with object locking (immutability for compliance)
mc mb --with-lock minio/edgesentry-audit
# Set lifecycle: transition to cheaper tier after 90 days
mc ilm import minio/edgesentry-audit <<EOF
{
"Rules": [{
"ID": "expire-3-years",
"Status": "Enabled",
"Expiration": { "Days": 1095 }
}]
}
EOF
# Server-side encryption at rest
mc encrypt set sse-s3 minio/edgesentry-audit
4 — Process Management
4.1 systemd service unit (HTTP + TLS)
# /etc/systemd/system/edgesentry.service
[Unit]
Description=EdgeSentry-RS ingest server
After=network-online.target postgresql.service
Wants=network-online.target
[Service]
Type=exec
User=edgesentry
Group=edgesentry
ExecStart=/usr/local/bin/eds serve-tls \
--addr 0.0.0.0:8443 \
--tls-cert /etc/edgesentry/server.crt \
--tls-key /etc/edgesentry/server.key \
--allowed-sources 10.0.0.0/8 \
--device lift-01=<PUBLIC_KEY_HEX>
Restart=on-failure
RestartSec=5
Environment=RUST_LOG=edgesentry_rs=info
# Security hardening
NoNewPrivileges=true
ProtectSystem=strict
ProtectHome=true
ReadWritePaths=/var/log/edgesentry
PrivateTmp=true
CapabilityBoundingSet=
[Install]
WantedBy=multi-user.target
# Install and start
install -m 755 target/release/eds /usr/local/bin/eds
useradd --system --no-create-home edgesentry
mkdir -p /var/log/edgesentry && chown edgesentry:edgesentry /var/log/edgesentry
systemctl daemon-reload
systemctl enable --now edgesentry
systemctl status edgesentry
4.2 systemd service unit (MQTT)
# /etc/systemd/system/edgesentry-mqtt.service
[Unit]
Description=EdgeSentry-RS MQTT ingest subscriber
After=network-online.target mosquitto.service
Wants=network-online.target
[Service]
Type=exec
User=edgesentry
Group=edgesentry
ExecStart=/usr/local/bin/eds serve-mqtt \
--broker 10.0.1.10 \
--port 1883 \
--topic edgesentry/ingest \
--client-id eds-prod-1 \
--device lift-01=<PUBLIC_KEY_HEX>
Restart=on-failure
RestartSec=10
Environment=RUST_LOG=edgesentry_rs=info
[Install]
WantedBy=multi-user.target
4.3 Health check
eds serve does not expose a /health endpoint itself — wire a TCP check in your load balancer or monitoring agent:
# Confirm the TLS port is accepting connections
openssl s_client -connect ingest.example.com:8443 -verify_return_error </dev/null
echo $? # 0 = healthy
For Kubernetes, use a tcpSocket liveness probe:
livenessProbe:
tcpSocket:
port: 8443
initialDelaySeconds: 5
periodSeconds: 15
5 — Horizontal Scaling
5.1 Architecture
┌─────────────────┐
Edge devices ──TLS──►│ Load balancer │
│ (e.g. nginx / │
│ AWS ALB) │
└────────┬────────┘
│ Round-robin
┌──────────────┼──────────────┐
▼ ▼ ▼
┌────────────┐ ┌────────────┐ ┌────────────┐
│ eds serve │ │ eds serve │ │ eds serve │
│ node 1 │ │ node 2 │ │ node 3 │
└──────┬─────┘ └──────┬─────┘ └──────┬─────┘
└──────────────┼──────────────┘
│
┌──────────────┼──────────────┐
▼ ▼ ▼
┌─────────┐ ┌──────────┐ ┌─────────┐
│Postgres │ │ S3 / │ │ MinIO │
│(primary)│ │ bucket │ │ cluster │
└─────────┘ └──────────┘ └─────────┘
5.2 Key properties
- `IngestState` is per-process. Each `eds serve` node maintains its own in-memory sequence/hash-chain state. The `UNIQUE (device_id, sequence)` constraint in PostgreSQL is the cross-node replay fence — a duplicate insert raises a unique-violation error that `PostgresAuditLedger` surfaces as a store error, causing the ingest to be rejected and logged.
- No sticky sessions required. Sequence enforcement happens at the DB level; any node can handle any device’s request.
- S3/MinIO writes are stateless. All nodes write to the same bucket; object keys are derived from `object_ref`, which is set by the edge device and globally unique by convention (e.g. `<device_id>/<sequence>.bin`).
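The interaction of per-node state and the shared database fence can be modelled in miniature. In this sketch a `HashSet` stands in for the `UNIQUE (device_id, sequence)` constraint, and the key helper follows the `<device_id>/<sequence>.bin` convention above; none of these names are the crate's real API:

```rust
use std::collections::HashSet;

// Stand-in for the shared PostgreSQL table: UNIQUE (device_id, sequence).
struct SharedLedger { seen: HashSet<(String, u64)> }

impl SharedLedger {
    // Returns Err on a duplicate insert, mirroring the unique-violation
    // error that the database raises for a replayed record.
    fn insert(&mut self, device_id: &str, sequence: u64) -> Result<(), String> {
        if !self.seen.insert((device_id.to_string(), sequence)) {
            return Err(format!("duplicate (device_id, sequence) = ({device_id}, {sequence})"));
        }
        Ok(())
    }
}

// Object key convention: globally unique by construction.
fn object_key(device_id: &str, sequence: u64) -> String {
    format!("{device_id}/{sequence}.bin")
}

fn main() {
    let mut ledger = SharedLedger { seen: HashSet::new() };
    // Node 1 ingests sequence 1 for lift-01: accepted.
    assert!(ledger.insert("lift-01", 1).is_ok());
    // Node 2 replays the same record: rejected by the shared fence,
    // even though its own in-memory state never saw sequence 1.
    assert!(ledger.insert("lift-01", 1).is_err());
    println!("object key: {}", object_key("lift-01", 1));
}
```

This is why no sticky sessions are needed: the rejection happens at the shared ledger, regardless of which node handled the request.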
5.3 nginx TLS termination + upstream proxy
upstream edgesentry_nodes {
least_conn;
server 10.0.1.11:8080;
server 10.0.1.12:8080;
server 10.0.1.13:8080;
}
server {
listen 443 ssl;
server_name ingest.example.com;
ssl_certificate /etc/letsencrypt/live/ingest.example.com/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/ingest.example.com/privkey.pem;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers HIGH:!aNULL:!MD5;
location /api/v1/ingest {
proxy_pass http://edgesentry_nodes;
proxy_set_header X-Forwarded-For $remote_addr;
proxy_read_timeout 10s;
}
}
Run eds serve on each node (plain HTTP on a private port) and let nginx handle TLS termination. Pass --allowed-sources with the nginx upstream IP range. Use eds serve-tls instead if you prefer built-in TLS without a reverse proxy.
Note: When TLS is terminated at the load balancer, `eds serve` sees the LB’s IP rather than the device’s IP. Set `--allowed-sources` to the LB’s internal address range, and rely on the LB’s own allowlist for per-device source control.
5.4 PostgreSQL read replica for reporting
Write path (ingest): primary only. Read path (compliance queries, chain verification): direct to read replica.
# Read replica connection for compliance tooling
psql "postgres://audit_ro:pass@pg-replica:5432/audit?sslmode=require"
6 — Observability
Structured logging and tracing are handled by the tracing facade. See the Operations Runbook — Observability section for the full setup including JSON log format, structured event fields emitted by the library, Prometheus metric derivation, and OpenTelemetry span configuration.
Quick-start: JSON logs to stdout (for Loki / CloudWatch)
# Cargo.toml of your binary wrapper
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }
# Run eds with JSON logs
RUST_LOG=edgesentry_rs=info eds serve ... 2>&1 | \
promtail --stdin --client.url http://loki:3100/loki/api/v1/push
Key log fields to alert on
| Field | Value | Alert condition |
|---|---|---|
| `message` | "MQTT record rejected" / "record rejected" | Rejection rate > 1 % over 5 min |
| `reason` | "invalid signature" | Any occurrence — possible tamper attempt |
| `reason` | "unknown device" | Sustained — unregistered device probing |
| `message` | "MQTT event loop error" | Any — broker connectivity lost |
See Operations Runbook — Alert Definitions for Prometheus alerting rules.
See Also
- Interactive Demo — local Docker Compose quickstart
- Key Management — device key provisioning and rotation
- Operations Runbook — observability, backup, restore, failure drills
- CLI Reference — full flag reference for `eds serve`, `eds serve-mqtt`, and all subcommands
CLI Reference
eds is the unified EdgeSentry CLI. All audit commands live under the eds audit subcommand; scan inspection commands live under eds inspect.
- `eds audit <command>` — tamper-evident audit record operations
- `eds inspect <command>` — 3D scan vs. IFC deviation and AI detection pipeline
Installation
For end users — Homebrew (macOS / Linux)
brew install edgesentry/tap/eds
For end users — pre-built binary
Download the latest release from the GitHub Releases page.
| Platform | File |
|---|---|
| Linux (x86-64) | eds-{version}-x86_64-unknown-linux-gnu.tar.gz |
| macOS (Apple Silicon) | eds-{version}-aarch64-apple-darwin.tar.gz |
| Windows (x86-64) | eds-{version}-x86_64-pc-windows-msvc.zip |
Extract and place the eds binary on your PATH:
# Linux / macOS
tar -xzf eds-{version}-{target}.tar.gz
sudo mv eds /usr/local/bin/
eds --help
# Windows (PowerShell)
Expand-Archive eds-{version}-x86_64-pc-windows-msvc.zip
# Move eds.exe to a directory in your PATH
eds --help
For developers — install from source
Requires Rust (stable toolchain).
cargo install --git https://github.com/edgesentry/edgesentry-rs --locked --bin eds
To include optional transport features at install time:
cargo install --git https://github.com/edgesentry/edgesentry-rs --locked --bin eds \
--features transport-http,transport-tls
Verify the installation:
eds --version
eds --help
Device Provisioning
Generate a fresh Ed25519 keypair for a new device:
eds audit keygen
Save directly to a file:
eds audit keygen --out device-lift-01.key.json
Derive the public key from an existing private key:
eds audit inspect-key \
--private-key-hex 0101010101010101010101010101010101010101010101010101010101010101
See Key Management for the full provisioning and rotation workflow.
CLI Usage
Show help:
eds --help
eds audit --help
Create a signed record and save it to record1.json:
eds audit sign-record \
--device-id lift-01 \
--sequence 1 \
--timestamp-ms 1700000000000 \
--payload "door-open" \
--object-ref "s3://bucket/lift-01/1.bin" \
--private-key-hex 0101010101010101010101010101010101010101010101010101010101010101 \
--out record1.json
Verify one record signature:
eds audit verify-record \
--record-file record1.json \
--public-key-hex 8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c
Verify a whole chain from a JSON array file:
eds audit verify-chain --records-file records.json
Lift Inspection Scenario (CLI End-to-End)
This scenario simulates a remote lift inspection with three checks:
- Door open/close cycle check
- Vibration check
- Emergency brake response check
1) Generate a full signed chain for one inspection session
eds audit demo-lift-inspection \
--device-id lift-01 \
--out-file lift_inspection_records.json
Expected output:
DEMO_CREATED:lift_inspection_records.json
CHAIN_VALID
2) Verify chain integrity from file
eds audit verify-chain --records-file lift_inspection_records.json
Expected output:
CHAIN_VALID
2.1) Tamper with the chain file and confirm detection
Modify the first record hash value in-place:
python3 - <<'PY'
import json
path = "lift_inspection_records.json"
with open(path, "r", encoding="utf-8") as f:
records = json.load(f)
records[0]["payload_hash"][0] ^= 0x01
with open(path, "w", encoding="utf-8") as f:
json.dump(records, f, indent=2)
print("tampered", path)
PY
Run chain verification again:
eds audit verify-chain --records-file lift_inspection_records.json
Expected result: command exits with a non-zero code and prints an error such as chain verification failed: invalid previous hash ....
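What `verify-chain` checks can be sketched with a simplified stand-in: each record carries the hash of its predecessor, so flipping a single byte breaks every later link. This illustration uses std's `DefaultHasher` and `u64` values purely as stand-ins for BLAKE3 and its 32-byte digests:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Record {
    sequence: u64,
    payload_hash: u64,     // stand-in for the 32-byte BLAKE3 payload hash
    prev_record_hash: u64, // 0 for the first record
}

// Hash of a record's contents, used as the next record's prev link.
fn record_hash(r: &Record) -> u64 {
    let mut h = DefaultHasher::new();
    r.sequence.hash(&mut h);
    r.payload_hash.hash(&mut h);
    r.prev_record_hash.hash(&mut h);
    h.finish()
}

fn verify_chain(records: &[Record]) -> Result<(), String> {
    let mut prev = 0u64;
    for r in records {
        if r.prev_record_hash != prev {
            return Err(format!("invalid previous hash at sequence {}", r.sequence));
        }
        prev = record_hash(r);
    }
    Ok(())
}

fn build_chain(payload_hashes: &[u64]) -> Vec<Record> {
    let mut prev = 0u64;
    let mut out = Vec::new();
    for (i, &p) in payload_hashes.iter().enumerate() {
        let r = Record { sequence: i as u64 + 1, payload_hash: p, prev_record_hash: prev };
        prev = record_hash(&r);
        out.push(r);
    }
    out
}

fn main() {
    let mut chain = build_chain(&[11, 22, 33]);
    assert!(verify_chain(&chain).is_ok());
    chain[0].payload_hash ^= 0x01;          // tamper, as in the Python snippet above
    assert!(verify_chain(&chain).is_err()); // record 2's prev link no longer matches
}
```

In the real library the tampered `payload_hash` would also fail the per-record Ed25519 signature check; the chain check is the second, independent line of defence.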
3) Create and verify a single signed inspection event
Generate one signed event:
eds audit sign-record \
--device-id lift-01 \
--sequence 1 \
--timestamp-ms 1700000000000 \
--payload "scenario=lift-inspection,check=door,status=ok" \
--object-ref "s3://bucket/lift-01/door-check-1.bin" \
--private-key-hex 0101010101010101010101010101010101010101010101010101010101010101 \
--out lift_single_record.json
Verify signature:
eds audit verify-record \
--record-file lift_single_record.json \
--public-key-hex 8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c
Expected output:
VALID
3.1) Tamper with a single record signature and confirm rejection
Modify one signature byte:
python3 - <<'PY'
import json
path = "lift_single_record.json"
with open(path, "r", encoding="utf-8") as f:
record = json.load(f)
record["signature"][0] ^= 0x01
with open(path, "w", encoding="utf-8") as f:
json.dump(record, f, indent=2)
print("tampered", path)
PY
Verify signature again:
eds audit verify-record \
--record-file lift_single_record.json \
--public-key-hex 8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c
Expected output:
INVALID
Server Commands
eds audit serve — HTTP ingest server
Requires the transport-http Cargo feature.
| Flag | Default | Description |
|---|---|---|
| `--addr` | `0.0.0.0:8080` | Socket address to bind |
| `--allowed-sources` | `127.0.0.1` | Comma-separated CIDRs / IPs allowed to connect |
| `--device ID=PUBKEY_HEX` | (none) | Register a device; repeat for multiple devices |
eds audit serve \
--addr 0.0.0.0:8080 \
--allowed-sources 10.0.0.0/8 \
--device lift-01=<PUBLIC_KEY_HEX>
Plain HTTP on port 8080. Use behind a TLS-terminating reverse proxy, or use eds audit serve-tls for built-in TLS.
eds audit serve-tls — HTTPS ingest server (TLS 1.2/1.3)
Requires the transport-tls Cargo feature.
| Flag | Default | Description |
|---|---|---|
| `--addr` | `0.0.0.0:8443` | Socket address to bind |
| `--allowed-sources` | `127.0.0.1` | Comma-separated CIDRs / IPs allowed to connect |
| `--device ID=PUBKEY_HEX` | (none) | Register a device; repeat for multiple devices |
| `--tls-cert` | (required) | Path to PEM certificate chain (leaf first) |
| `--tls-key` | (required) | Path to PEM private key (PKCS #8 or PKCS #1 RSA) |
eds audit serve-tls \
--addr 0.0.0.0:8443 \
--allowed-sources 10.0.0.0/8 \
--device lift-01=<PUBLIC_KEY_HEX> \
--tls-cert /etc/edgesentry/server.crt \
--tls-key /etc/edgesentry/server.key
Uses rustls TLS 1.2/1.3. Network policy (IP allowlist) is enforced at TCP accept time, before the TLS handshake.
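The `--allowed-sources` semantics can be sketched as a per-connection prefix match on the peer address taken at accept time. This is a simplified IPv4-only illustration; the crate's actual parser and IPv6 handling may differ:

```rust
use std::net::Ipv4Addr;

// Returns true when `addr` falls inside `cidr` (e.g. "10.0.0.0/8").
// A bare address such as "127.0.0.1" is treated as a /32.
fn cidr_contains(cidr: &str, addr: Ipv4Addr) -> bool {
    let (net, bits) = cidr.split_once('/').unwrap_or((cidr, "32"));
    let net: Ipv4Addr = net.parse().expect("bad network address");
    let bits: u32 = bits.parse().expect("bad prefix length");
    let mask = if bits == 0 { 0 } else { u32::MAX << (32 - bits) };
    (u32::from(addr) & mask) == (u32::from(net) & mask)
}

// The comma-separated flag value becomes a list; any match admits the peer.
fn allowed(sources: &[&str], addr: Ipv4Addr) -> bool {
    sources.iter().any(|c| cidr_contains(c, addr))
}

fn main() {
    let sources = ["10.0.0.0/8", "127.0.0.1"];
    assert!(allowed(&sources, Ipv4Addr::new(10, 1, 2, 3)));
    assert!(allowed(&sources, Ipv4Addr::new(127, 0, 0, 1)));
    assert!(!allowed(&sources, Ipv4Addr::new(192, 168, 0, 1)));
    println!("allowlist checks passed");
}
```

Because the check runs at TCP accept time, a disallowed peer is dropped before any TLS handshake bytes are processed.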
eds audit serve-mqtt — MQTT ingest subscriber
Requires the transport-mqtt Cargo feature. Optionally add transport-mqtt-tls for MQTTS.
| Flag | Default | Description |
|---|---|---|
| `--broker` | `localhost` | MQTT broker host |
| `--port` | `1883` | MQTT broker port (use 8883 for MQTTS) |
| `--topic` | `edgesentry/ingest` | Topic to subscribe for ingest records |
| `--client-id` | `eds-server` | MQTT client identifier |
| `--device ID=PUBKEY_HEX` | (none) | Register a device; repeat for multiple devices |
| `--tls-ca-cert` | (none) | Path to PEM CA cert for MQTTS broker verification (transport-mqtt-tls only) |
# Plain MQTT (port 1883)
eds audit serve-mqtt \
--broker broker.example.com \
--port 1883 \
--topic edgesentry/ingest \
--device lift-01=<PUBLIC_KEY_HEX>
# MQTTS (port 8883, requires transport-mqtt-tls feature)
eds audit serve-mqtt \
--broker broker.example.com \
--port 8883 \
--tls-ca-cert /etc/edgesentry/ca.crt \
--device lift-01=<PUBLIC_KEY_HEX>
Responses are published on <topic>/response as JSON with status: "accepted" or status: "rejected".
Ingestion Demo (PostgreSQL + MinIO)
Requires the s3 and postgres Cargo features and a running PostgreSQL + MinIO instance (use docker compose -f docker-compose.local.yml up -d).
1) Generate a chain with payloads file
eds audit demo-lift-inspection \
--device-id lift-01 \
--out-file lift_inspection_records.json \
--payloads-file lift_inspection_payloads.json
2) Ingest records through IngestService
eds audit demo-ingest \
--records-file lift_inspection_records.json \
--payloads-file lift_inspection_payloads.json \
--device-id lift-01 \
--pg-url postgresql://trace:trace@localhost:5433/trace_audit \
--minio-endpoint http://localhost:9000 \
--minio-bucket bucket \
--minio-access-key minioadmin \
--minio-secret-key minioadmin \
--reset
--reset truncates audit_records and operation_logs before ingesting. Omit it to append to an existing run.
Pass --tampered-records-file <path> to also demonstrate rejection of a tampered chain through the same IngestService.
See Interactive Demo for the full guided walkthrough with PostgreSQL and MinIO.
Key Management
This page covers the full lifecycle of Ed25519 device keys used by EdgeSentry-RS: key generation, secure storage, public key registration, and rotation.
Relevant standards: Singapore CLS-04 / ETSI EN 303 645 §5.4 / JC-STAR STAR-1 R1.2.
1. Key Generation
Generate a fresh Ed25519 keypair with the eds CLI:
eds audit keygen
Example output:
{
"private_key_hex": "ddca9848801c658d62a010c4d306d6430a0cdc2c383add1628859258e3acfb93",
"public_key_hex": "4bb158f302c0ad9261c0acfa95e17144ae7249eb0973bbfaeae4501165887a77"
}
Save to a file:
eds audit keygen --out device-lift-01.key.json
Each device must have a unique keypair. Never reuse keys across devices.
2. Deriving the Public Key from an Existing Private Key
If you already have a private_key_hex and need to confirm the matching public key:
eds audit inspect-key --private-key-hex <64-hex-char-private-key>
Example:
eds audit inspect-key \
--private-key-hex 0101010101010101010101010101010101010101010101010101010101010101
Output:
{
"private_key_hex": "0101010101010101010101010101010101010101010101010101010101010101",
"public_key_hex": "8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c"
}
3. Secure Private Key Storage
The private key must be kept secret on the device. Recommended practices:
| Environment | Recommended storage |
|---|---|
| Development / CI | Environment variable (DEVICE_PRIVATE_KEY_HEX) — never commit to version control |
| Production (software) | Encrypted secrets store (e.g., HashiCorp Vault, AWS Secrets Manager, Azure Key Vault) |
| Production (hardware) | Hardware Security Module (HSM) or Trusted Execution Environment (TEE) — see #54 for the planned HSM path |
File-based storage (development only):
chmod 600 device-lift-01.key.json
Never expose private_key_hex in logs, HTTP responses, or error messages.
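One defensive pattern for this rule is a wrapper type whose `Debug` impl redacts the secret, so a stray `{:?}` in a log line cannot leak the hex. This is an illustrative pattern, not a type provided by edgesentry-rs:

```rust
use std::fmt;

// Wrapper that refuses to reveal the secret through Debug formatting.
struct PrivateKeyHex(String);

impl fmt::Debug for PrivateKeyHex {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        f.write_str("PrivateKeyHex(<redacted>)")
    }
}

impl PrivateKeyHex {
    // Explicit, greppable accessor for the one place signing needs the bytes.
    fn expose(&self) -> &str {
        &self.0
    }
}

fn main() {
    let key = PrivateKeyHex("0101010101010101".to_string());
    // Logging the struct cannot leak the secret:
    println!("{:?}", key);
    assert!(!format!("{:?}", key).contains("0101"));
}
```

Keeping the only escape hatch behind an `expose()` method makes every intentional use of the raw key easy to audit.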
4. Registering the Public Key (Cloud Side)
After generating a keypair, register the device’s public key in IntegrityPolicyGate
before any records are ingested:
#![allow(unused)]
use edgesentry_rs::{IntegrityPolicyGate, parse_fixed_hex};
use ed25519_dalek::VerifyingKey;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Example public key from the keygen section above
    let public_key_hex = "8a88e3dd7409f195fd52db2d3cba5d72ca6709bf1d94121bf3748801b40f6f5c";
    let public_key_bytes = parse_fixed_hex::<32>(public_key_hex)?;
    let verifying_key = VerifyingKey::from_bytes(&public_key_bytes)?;
    let mut gate = IntegrityPolicyGate::new();
    gate.register_device("lift-01", verifying_key);
    Ok(())
}
The device_id string passed to register_device must exactly match the
device_id field in every AuditRecord signed by that device.
Any record from an unknown device_id is rejected with IngestError::UnknownDevice.
5. Key Rotation
Rotate a device key when:
- The private key may have been exposed
- The device is being decommissioned and reprovisioned
- Your security policy requires periodic rotation
Rotation procedure:
1. Generate a new keypair for the new device configuration:
   `eds audit keygen --out device-lift-01-v2.key.json`
2. Register the new public key. Multiple keys per `device_id` are not yet supported, so register under a new `device_id` (such as `lift-01-v2`) during the transition window.
3. Update the device to sign new records with the new private key and the new `device_id`.
4. Once all in-flight records signed with the old key have been ingested and verified, remove the old device registration from the policy gate.
5. Securely delete or revoke the old private key from all storage locations.
Note: Multi-key-per-device support (allowing old and new keys simultaneously under the same `device_id`) is tracked in #57.
6. Software Update Publisher Keys
Software update verification uses a separate set of Ed25519 keys from device signing keys. A publisher key belongs to the entity that signs firmware or software packages; a device signing key belongs to the individual device that signs audit records. Never mix these roles.
6.1 Key generation and storage
Generate a publisher keypair the same way as a device keypair:
eds audit keygen --out publisher-acme-firmware.key.json
The private key must be kept in a high-security offline environment (HSM, air-gapped workstation, or a secrets manager with strict access control). It is used only at build time to sign a release artifact, never on the device itself.
The public key is embedded in the device firmware image at manufacture time and loaded into UpdateVerifier at runtime:
#![allow(unused)]
use edgesentry_rs::update::UpdateVerifier;
use ed25519_dalek::VerifyingKey;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let public_key_bytes: [u8; 32] = /* bytes baked into firmware */;
    let verifying_key = VerifyingKey::from_bytes(&public_key_bytes)?;
    let mut verifier = UpdateVerifier::new();
    verifier.register_publisher("acme-firmware", verifying_key);
    Ok(())
}
6.2 One publisher ID per key
Register each key under a distinct publisher_id. Avoid registering the same key under multiple IDs or multiple keys under the same ID unless your threat model explicitly requires it.
#![allow(unused)]
fn main() {
// Correct: one key per publisher
verifier.register_publisher("acme-firmware", firmware_key);
verifier.register_publisher("acme-config", config_key);
// Avoid: same key shared across publishers — a signature from one
// package type could be accepted for the other
verifier.register_publisher("acme-firmware", shared_key); // ⚠
verifier.register_publisher("acme-config", shared_key); // ⚠
}
6.3 Key confusion attacks
A key confusion attack occurs when a signature produced for one package type is submitted as a valid signature for another. UpdateVerifier prevents this because:
- The caller passes an explicit `publisher_id` to `verify()`.
- The verifier looks up the key registered under that exact ID.
- A signature by `acme-config`’s key will not verify under `acme-firmware`’s key.
This only holds when each publisher has a unique key. If keys are shared across publishers (see §6.2), the isolation breaks.
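The lookup-by-exact-ID argument can be demonstrated with a toy verifier. Here a `HashMap` stands in for `UpdateVerifier` and plain strings stand in for Ed25519 keys; the method names mirror the real ones for readability only:

```rust
use std::collections::HashMap;

// Stand-in verifier: publisher_id -> registered key.
struct Verifier { keys: HashMap<String, String> }

impl Verifier {
    fn register_publisher(&mut self, id: &str, key: &str) {
        self.keys.insert(id.to_string(), key.to_string());
    }
    // Lookup happens under the *caller-supplied* publisher_id only.
    fn verify(&self, publisher_id: &str, signed_with: &str) -> bool {
        self.keys.get(publisher_id).map(|k| k == signed_with).unwrap_or(false)
    }
}

fn main() {
    let mut v = Verifier { keys: HashMap::new() };
    v.register_publisher("acme-firmware", "key-A");
    v.register_publisher("acme-config", "key-B");

    // A config-key signature does not verify as firmware: isolation holds.
    assert!(!v.verify("acme-firmware", "key-B"));

    // If both publishers shared one key, the same check would pass:
    // the key-confusion scenario from section 6.2.
    let mut shared = Verifier { keys: HashMap::new() };
    shared.register_publisher("acme-firmware", "key-S");
    shared.register_publisher("acme-config", "key-S");
    assert!(shared.verify("acme-firmware", "key-S"));
}
```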
6.4 Publisher key rotation
Rotate a publisher key when the private key may have been exposed or your security policy requires periodic rotation.
- Generate a new keypair offline.
- Sign the next firmware release with the new private key.
- Distribute a firmware update that embeds the new public key and calls `register_publisher` with the new key. Include both old and new keys during the transition window so devices on either firmware version can verify updates.
- After all devices have moved to the new firmware, remove the old key registration.
- Securely destroy the old private key.
6.5 FFI (C/C++ devices)
For devices integrating via the C/C++ FFI bridge, publisher key verification will be exposed as eds_verify_update (tracked in #80). Until that function is available, C/C++ devices must call into Rust via a thin wrapper or handle publisher verification at the application layer.
The public key bytes to pass to eds_verify_update are the same 32-byte Ed25519 public key described above — provision them into the device at manufacture time, stored in a read-only flash region or secure element.
7. HSM Path (CLS Level 4)
For CLS Level 4 and high-assurance deployments, private keys should never exist as extractable byte arrays. Instead, signing operations should be performed inside an HSM or TEE, with the private key material never leaving the secure boundary.
The planned edgesentry-bridge C/C++ FFI layer (#53) and HSM integration (#54)
will provide a signing interface that delegates the Ed25519 sign operation to an
HSM-backed provider without exposing the raw key bytes to application code.
C/C++ FFI Bridge
edgesentry-bridge is a separate Rust crate that exposes Ed25519 signing and
BLAKE3 hash-chain verification as a stable C ABI. C and C++ firmware or
gateways can call the same security logic as the Rust library without a full
rewrite.
Building the library
cargo build -p edgesentry-bridge --release
This produces:
| Platform | File |
|---|---|
| macOS | target/release/libedgesentry_bridge.dylib and .a |
| Linux | target/release/libedgesentry_bridge.so and .a |
The header crates/edgesentry-bridge/include/edgesentry_bridge.h is
regenerated automatically by build.rs using cbindgen.
Linking from C/C++
macOS:
cc -o my_app main.c \
-I path/to/edgesentry-bridge/include \
-L path/to/target/release \
-ledgesentry_bridge \
-framework Security -framework CoreFoundation
Linux:
cc -o my_app main.c \
-I path/to/edgesentry-bridge/include \
-L path/to/target/release \
-ledgesentry_bridge \
-lpthread -ldl
A ready-made Makefile is provided in
crates/edgesentry-bridge/examples/c_integration/.
API reference
Error codes
| Constant | Value | Meaning |
|---|---|---|
| EDS_OK | 0 | Success |
| EDS_ERR_NULL_PTR | -1 | A required pointer was NULL |
| EDS_ERR_INVALID_UTF8 | -2 | String argument is not valid UTF-8 |
| EDS_ERR_INVALID_KEY | -3 | Key or hash buffer is invalid |
| EDS_ERR_STRING_TOO_LONG | -4 | String exceeds fixed buffer size |
| EDS_ERR_CHAIN_INVALID | -5 | Hash-chain verification failed |
| EDS_ERR_PANIC | -6 | Unexpected internal error |
| EDS_ERR_HASH_MISMATCH | -7 | Payload hash does not match expected value |
| EDS_ERR_BAD_SIGNATURE | -8 | Ed25519 signature is invalid |
After any call that returns a negative error code, call eds_last_error_message() to retrieve a human-readable description of the failure.
Record struct
typedef struct {
uint64_t sequence; /* monotonic record index (starts at 1) */
uint64_t timestamp_ms; /* Unix epoch in milliseconds */
uint8_t payload_hash[32]; /* BLAKE3 hash of the raw payload */
uint8_t signature[64]; /* Ed25519 signature over payload_hash */
uint8_t prev_record_hash[32]; /* hash of preceding record (zero for first) */
uint8_t device_id[256]; /* null-terminated device identifier */
uint8_t object_ref[512]; /* null-terminated storage reference */
} EdsAuditRecord;
EdsAuditRecord is caller-allocated. Rust never calls malloc or
returns a heap pointer — no _free function is needed.
Functions
/* Generate an Ed25519 keypair via OS CSPRNG.
private_key_out and public_key_out must each point to 32 bytes. */
int32_t eds_keygen(uint8_t *private_key_out, uint8_t *public_key_out);
/* Hash payload with BLAKE3, sign with Ed25519, fill *out.
Pass NULL for prev_record_hash to use the zero hash (first record). */
int32_t eds_sign_record(const char *device_id,
uint64_t sequence,
uint64_t timestamp_ms,
const uint8_t *payload,
size_t payload_len,
const uint8_t *prev_record_hash,
const char *object_ref,
const uint8_t *private_key,
EdsAuditRecord *out);
/* Compute the per-record hash (used as prev_record_hash for the next record).
hash_out must point to 32 bytes. */
int32_t eds_record_hash(const EdsAuditRecord *record, uint8_t *hash_out);
/* Verify Ed25519 signature. Returns 1 if valid, 0 if invalid, negative on error. */
int32_t eds_verify_record(const EdsAuditRecord *record,
const uint8_t *public_key);
/* Verify the entire hash chain. Returns EDS_OK or EDS_ERR_CHAIN_INVALID. */
int32_t eds_verify_chain(const EdsAuditRecord *records, size_t count);
/* Verify a software update before installation (CLS-03 / STAR-2 R2.2).
Checks BLAKE3(payload) == payload_hash, then verifies the Ed25519
publisher signature over payload_hash.
payload_hash must point to 32 bytes; signature to 64 bytes;
publisher_key to 32 bytes.
Returns EDS_OK, EDS_ERR_HASH_MISMATCH, EDS_ERR_BAD_SIGNATURE, or
EDS_ERR_INVALID_KEY / EDS_ERR_NULL_PTR on bad inputs. */
int32_t eds_verify_update(const uint8_t *payload,
size_t payload_len,
const uint8_t *payload_hash,
const uint8_t *signature,
const uint8_t *publisher_key);
/* Return a thread-local human-readable description of the last error.
The pointer is valid until the next eds_* call on this thread.
Returns "" when no error has occurred. Never returns NULL. */
const char *eds_last_error_message(void);
Minimal C example
#include "edgesentry_bridge.h"
#include <stdio.h>
#include <string.h>
#include <assert.h>
int main(void) {
uint8_t priv_key[32], pub_key[32];
if (eds_keygen(priv_key, pub_key) != EDS_OK) {
fprintf(stderr, "keygen failed: %s\n", eds_last_error_message());
return 1;
}
const char *payload = "check=door,status=ok";
EdsAuditRecord rec;
memset(&rec, 0, sizeof(rec));
int rc = eds_sign_record("lift-01", 1, 1700000000000ULL,
(const uint8_t *)payload, strlen(payload),
NULL, /* zero hash — first record */
"lift-01/1.bin",
priv_key, &rec);
if (rc != EDS_OK) {
fprintf(stderr, "sign_record failed: %s\n", eds_last_error_message());
return 1;
}
assert(eds_verify_record(&rec, pub_key) == 1);
return 0;
}
See the full example in
crates/edgesentry-bridge/examples/c_integration/main.c.
Memory safety conventions
| Rule | Detail |
|---|---|
| No heap allocation | EdsAuditRecord is caller-allocated; Rust never calls malloc |
| NULL-checked | Every pointer argument is checked; EDS_ERR_NULL_PTR returned on failure |
| Fixed-size strings | device_id max 255 chars; object_ref max 511 chars; longer inputs are rejected with EDS_ERR_STRING_TOO_LONG |
| Panic safety | std::panic::catch_unwind wraps every FFI function; a Rust panic returns EDS_ERR_PANIC instead of unwinding across the C boundary |
| Key sizes | private_key and public_key must point to exactly 32 bytes; hash buffers to 32 bytes; signature buffer to 64 bytes |
HSM path
For CLS Level 4, the private key should never exist as an extractable byte
array. The planned HSM integration (#54)
will delegate the eds_sign_record operation to an HSM-backed provider
without exposing key bytes to the caller.
Contributing
Consistency Check
After every change — whether to code, tests, scripts, or docs — check that all three layers stay in sync:
- Code → Docs: If you add, remove, or rename a module, function, CLI command, or behavior, update all docs that reference it (concepts.md, architecture.md, cli.md, quickstart.md, demo.md, traceability.md).
- Docs → Code: If a doc describes a feature or command, verify it exists and works as described. Stale examples and wrong test target names cause CI failures.
- Scripts → Code: If you rename a test file or cargo feature, update every script and workflow that references it (e.g. scripts/integration_test.sh, .github/workflows/).
- Traceability: If you implement or change a compliance control, update the status in docs/src/traceability.md (✅ / ⚠️ / 🔲).
A quick grep before opening a PR:
# Find docs that mention a symbol you changed
grep -r "<old-name>" docs/ scripts/ .github/
Issue Labels
Every issue should carry one type label, one priority label, and one or more category labels.
Type labels
| Label | When to use |
|---|---|
| bug | Something is broken or behaves incorrectly |
| enhancement | New feature or improvement to existing behavior |
| documentation | Docs-only change — no production code affected |
Priority labels
| Label | Meaning | Examples |
|---|---|---|
| priority:P0 | Must-have — directly required to satisfy a target standard (CLS, JC-STAR, CRA). Work is blocked until resolved. | Broken signature verification, missing hash-chain link, failing integrity gate |
| priority:P1 | Good-to-have — strengthens compliance posture or developer experience but is not a hard blocker for standard conformance. | Key rotation tooling, CI hardening, traceability matrix, FFI bridge |
| priority:P2 | Best-effort — stretch goals, nice-to-haves, or anything that requires dedicated hardware. Pursue if capacity allows. | HSM integration, education white papers, reference architectures |
When in doubt, ask: “Does the standard explicitly require this?” If yes → P0. Otherwise, if it helps but is not mandated → P1. For stretch goals, nice additions, or hardware-dependent work → P2.
Category labels
| Label | When to use |
|---|---|
| core | Core security controls — signing, hashing, integrity gate, ingest pipeline |
| compliance-governance | Compliance evidence, traceability matrices, disclosure processes |
| devsecops | CI/CD pipelines, supply-chain security, static analysis, audit tooling |
| platform-operations | Infrastructure, deployment, operational readiness |
| hardware-needed | Requires physical hardware or hardware-backed infrastructure (always pair with priority:P2) |
Pull Request Conventions
When creating a pull request, always assign it to the user who authored the branch:
gh pr create --assignee "@me" --title "..." --body "..."
Mandatory: Run Tests After Every Code Change
After every code change, run:
cargo test --workspace
Do not consider a change complete until all tests pass.
Unit Tests
Prerequisites (macOS)
Install the Rust toolchain first:
brew install rustup-init
rustup-init -y
source "$HOME/.cargo/env"
rustup default stable
Install cargo-deny (required for OSS license checks):
cargo install cargo-deny
source "$HOME/.cargo/env"
cargo deny --version
Running Tests
Run all unit tests:
cargo test --workspace
Run tests for a specific crate:
cargo test -p edgesentry-rs
Run the edgesentry-rs crate with the S3-compatible backend feature enabled:
cargo test -p edgesentry-rs --features s3
Run S3 integration tests against a live MinIO instance (requires the env vars below to be set):
TEST_S3_ENDPOINT=http://localhost:9000 \
TEST_S3_ACCESS_KEY=minioadmin \
TEST_S3_SECRET_KEY=minioadmin \
TEST_S3_BUCKET=bucket \
cargo test -p edgesentry-rs --features s3 --test integration -- --nocapture
Tests skip automatically when any of the four TEST_S3_* variables are unset.
Run unit tests + OSS license checks in one command:
./scripts/run_unit_and_license_check.sh
Static Analysis and OSS License Check
Use the following checks before release.
1) Static analysis (clippy)
cargo clippy --workspace --all-targets --all-features -- -D warnings
2) Dependency security advisory check (cargo-audit)
Install once:
cargo install cargo-audit
Run:
cargo audit
3) Commercial-use OSS license check (cargo-deny)
Install once:
cargo install cargo-deny
Run license check (policy in deny.toml):
cargo deny check licenses
Optional full dependency policy check:
cargo deny check advisories bans licenses sources
If this check fails, inspect the violating crates and update the dependencies; amend the policy only after legal/security review.
Avoiding Conflicts with Main
Conflicts occur when a feature branch diverges from main while main receives other merged PRs that touch the same files. The highest-conflict files in this repo are scripts/local_demo.sh, docs/src/demo.md, and .github/copilot-instructions.md.
Before starting work
git fetch origin
git checkout main && git pull origin main
git checkout -b <your-branch>
Keep your branch up to date — rebase onto main regularly, especially before opening a PR:
git fetch origin
git rebase origin/main
Resolving a conflict during rebase
- Identify conflicted files:
git diff --name-only --diff-filter=U
- For each file, decide which side to keep. (During a rebase, --theirs refers to your branch and --ours to main, because rebase replays your commits on top of main.)
  - Take your version: git checkout --theirs <file>
  - Take main's version: git checkout --ours <file>
  - Merge manually: edit the file to remove <<<<<<< / ======= / >>>>>>> markers
- Stage the resolved file:
git add <file>
- Continue:
GIT_EDITOR=true git rebase --continue
- If a conflict recurs on the next commit, repeat from step 1.
After resolving, force-push the rebased branch:
git push --force-with-lease origin <your-branch>
Files most likely to conflict — coordinate before editing these:
| File | Why it conflicts often |
|---|---|
| scripts/local_demo.sh | Multiple PRs add steps or restructure the demo flow |
| docs/src/demo.md | Mirrors demo script changes |
| .github/copilot-instructions.md | Structure section updated whenever new modules or examples are added |
| crates/edgesentry-rs/examples/lift_inspection_flow.rs | Touched by both quickstart improvements and role-boundary work |
Build and Release
Build Release Artifacts
cargo build --workspace --release
Build a specific crate only:
cargo build -p edgesentry-rs --release
Publish to crates.io
- Validate quality gates first:
./scripts/run_unit_and_license_check.sh
cargo clippy --workspace --all-targets --all-features -- -D warnings
- Login once:
cargo login <CRATES_IO_TOKEN>
- Dry-run publish:
cargo publish --dry-run -p edgesentry-rs
- Publish:
cargo publish -p edgesentry-rs
GitHub Actions Release Automation (macOS / Windows / Linux)
This repository includes .github/workflows/release.yml.
- Trigger: push a tag like v0.1.0
- Quality gate: build, unit tests, license check, clippy
- Publish edgesentry-rs to crates.io
- Build eds binaries for Linux, macOS (x64 + arm64), and Windows
- Upload packaged binaries to GitHub Release assets
Note: .github/workflows/ci.yml runs cargo publish --dry-run for edgesentry-rs.
Required GitHub secret:
- CRATES_IO_TOKEN: crates.io API token used by cargo publish
Automatic Version Increment After Merge
This repository also includes .github/workflows/auto-version-tag.yml.
- Trigger: when CI succeeds on main
- Action: update workspace.package.version in Cargo.toml and create/push a vX.Y.Z tag
- Then: release.yml is triggered by that tag and performs the full release pipeline
Version bump rules (Conventional Commits):
- fix: -> patch bump (x.y.z -> x.y.(z+1))
- feat: -> minor bump (x.y.z -> x.(y+1).0)
- ! or BREAKING CHANGE -> major bump (x.y.z -> (x+1).0.0)
Operations Runbook
This page covers observability wiring, alert thresholds, and backup/restore procedures for a production EdgeSentry-RS deployment.
Observability
Structured logging with tracing
EdgeSentry-RS uses the tracing facade. No subscriber is bundled — deployers wire up the backend of their choice at application startup. The library's instrumentation adds negligible overhead when no subscriber is registered.
Recommended subscriber for production (JSON over stdout, ingested by Loki / CloudWatch):
# Cargo.toml of the host application
tracing-subscriber = { version = "0.3", features = ["env-filter", "json"] }
use tracing_subscriber::{fmt, EnvFilter};
fn main() {
fmt()
.json()
.with_env_filter(EnvFilter::from_default_env()) // RUST_LOG=edgesentry_rs=info
.init();
// ...
}
Set RUST_LOG=edgesentry_rs=info for production; edgesentry_rs=debug for incident investigation.
Structured log events emitted by the library
All events include the module path as target. Key events:
| Level | Target | Event | Key fields |
|---|---|---|---|
| DEBUG | edgesentry_rs::agent | signing record | device_id, sequence, payload_bytes |
| DEBUG | edgesentry_rs::ingest::storage | ingest started | device_id, sequence, object_ref, payload_bytes |
| WARN | edgesentry_rs::ingest::storage | payload hash mismatch — record rejected | device_id, sequence |
| WARN | edgesentry_rs::ingest::storage | integrity policy rejected record | device_id, sequence, reason |
| ERROR | edgesentry_rs::ingest::storage | raw data store write failed | device_id, sequence, error |
| ERROR | edgesentry_rs::ingest::storage | audit ledger append failed | device_id, sequence, error |
| ERROR | edgesentry_rs::ingest::storage | operation log write failed | device_id, sequence, error |
| INFO | edgesentry_rs::ingest::storage | record accepted | device_id, sequence, object_ref |
| DEBUG | edgesentry_rs::ingest::verify | signature verification failed | device_id, sequence |
| DEBUG | edgesentry_rs::ingest::verify | duplicate record rejected | device_id, sequence |
| DEBUG | edgesentry_rs::ingest::verify | sequence out of order | device_id, expected, actual |
| DEBUG | edgesentry_rs::ingest::verify | prev_record_hash mismatch — chain broken | device_id, sequence |
| DEBUG | edgesentry_rs::ingest::verify | record verified and accepted | device_id, sequence |
Recommended Prometheus metrics (derived from logs)
Use a log-to-metrics pipeline (e.g. Promtail + Loki, or Vector) to derive counters from structured log events:
| Metric | How to derive | Alert threshold |
|---|---|---|
| edgesentry_ingest_accepted_total | Count INFO "record accepted" events | — |
| edgesentry_ingest_rejected_total{reason} | Count WARN rejection events, label by reason field | > 10/min sustained → P1 alert |
| edgesentry_ingest_error_total{component} | Count ERROR storage failure events, label by component (raw_data_store / audit_ledger / operation_log) | Any occurrence → P0 alert |
| edgesentry_chain_break_total | Count DEBUG "prev_record_hash mismatch" events | Any occurrence → P0 alert |
| edgesentry_signature_fail_total | Count DEBUG "signature verification failed" events | > 5/min sustained → P1 alert |
OpenTelemetry (tracing spans)
The IngestService::ingest method emits a tracing span. Wire it to an OTLP exporter for distributed tracing:
opentelemetry = "0.26"
opentelemetry-otlp = { version = "0.26", features = ["grpc-tonic"] }
tracing-opentelemetry = "0.27"
Alert Definitions
| Alert | Condition | Severity | Response |
|---|---|---|---|
| IngestStorageError | Any ERROR-level storage failure | P0 | Check DB/S3 connectivity; verify disk and credentials |
| ChainBreak | Any prev_record_hash mismatch event | P0 | Investigate tampering or replay; preserve logs before any restart |
| HighRejectionRate | Rejection rate > 10/min for 5 min | P1 | Check device firmware; look for misconfigured signing key rotation |
| SignatureFailureSurge | Signature failures > 5/min for 5 min | P1 | Possible key compromise or active spoofing attempt |
| AuditLedgerLag | Postgres operation_logs insert latency > 2 s p99 | P1 | Check DB query plan; autovacuum contention |
Recovery Objectives
| Objective | Target | Basis |
|---|---|---|
| RTO (recovery time) | < 30 minutes | Time to restore Postgres from pg_basebackup + WAL replay |
| RPO (recovery point) | < 5 minutes | Continuous WAL archiving at 5-minute intervals |
Backup Runbook
PostgreSQL — audit ledger and operation log
Prerequisites: WAL archiving enabled (archive_mode = on, archive_command shipping to S3 or equivalent).
1. Take a base backup
pg_basebackup \
--host=<DB_HOST> \
--username=<DB_USER> \
--pgdata=/backup/pg_base_$(date +%Y%m%d_%H%M%S) \
--format=tar \
--gzip \
--wal-method=stream \
--checkpoint=fast \
--progress
2. Verify the backup (list the tar contents; pg_restore only reads pg_dump archives, not base backups)
tar -tzf /backup/pg_base_<timestamp>/base.tar.gz | head -20
3. Archive WAL continuously
Ensure the archive_command in postgresql.conf ships WAL segments to durable storage (e.g. S3):
archive_command = 'aws s3 cp %p s3://<BUCKET>/wal/%f'
4. Retention policy
| Backup type | Retention |
|---|---|
| Base backup | 30 days |
| WAL archive | 30 days |
| Logical dump (pg_dump) | 7 days (weekly) |
S3 / MinIO — raw payload store
Enable versioning and cross-region replication on the bucket:
# Enable versioning
aws s3api put-bucket-versioning \
--bucket <BUCKET> \
--versioning-configuration Status=Enabled
# Enable replication (requires a destination bucket and IAM role configured separately)
aws s3api put-bucket-replication \
--bucket <BUCKET> \
--replication-configuration file://replication.json
Minimum replication target: one additional region. For CLS Level 3 evidence integrity, ensure object lock or versioning is enabled so payloads cannot be silently overwritten.
Restore Runbook
PostgreSQL — point-in-time recovery (PITR)
# 1. Stop the Postgres service
systemctl stop postgresql
# 2. Restore base backup
tar -xzf /backup/pg_base_<timestamp>/base.tar.gz -C /var/lib/postgresql/data/
# 3. Create recovery config
#    (PostgreSQL 12+: recovery.conf is no longer read; append these settings
#    to postgresql.auto.conf and create an empty recovery.signal file.
#    On PostgreSQL 11 and earlier, write them to recovery.conf instead.)
cat >> /var/lib/postgresql/data/postgresql.auto.conf <<EOF
restore_command = 'aws s3 cp s3://<BUCKET>/wal/%f %p'
recovery_target_time = '<TARGET_TIMESTAMP>'
recovery_target_action = 'promote'
EOF
touch /var/lib/postgresql/data/recovery.signal
# 4. Start Postgres — it will replay WAL to the target time
systemctl start postgresql
# 5. Verify: query the last accepted sequence per device
psql -U <DB_USER> -d <DB_NAME> \
-c "SELECT device_id, MAX(sequence) FROM audit_records GROUP BY device_id;"
Recovery verification checklist
- Last record sequence per device matches pre-incident snapshot
- Hash chain continuity verified: eds verify-chain <exported-records.json>
- Operation log shows no unexpected gaps (check timestamps around recovery target)
- Alert suppression lifted after verification completes
S3 / MinIO — object restore
# Restore a specific object version
aws s3api get-object \
--bucket <BUCKET> \
--key <OBJECT_KEY> \
--version-id <VERSION_ID> \
<OUTPUT_FILE>
Failure Drill Schedule
Run the following drills quarterly to verify runbook accuracy:
| Drill | Procedure | Pass criterion |
|---|---|---|
| DB failover | Stop primary Postgres; promote replica | Ingest resumes in < 30 min |
| DB restore | PITR to 1 hour ago on staging | Chain continuity verified in < 30 min |
| S3 object recovery | Restore a deleted test object | Object byte-identical to original |
| Alert fire | Inject a bad signature via test harness | P1 alert fires within 2 min |