
EdgeSentry-Inspect

Real-time digital twin audit platform for infrastructure inspection.

What it does

EdgeSentry-Inspect detects construction and structural deviations at the field edge by fusing 3D point clouds with BIM design data — no cloud round-trip required during inspection.

3D sensor (LiDAR/ToF)
    │  point cloud
    ▼
trilink-core::project          ← 3D → 2D depth map / height map
    │  depth map (image)
    ▼
Vision AI inference            ← anomaly detection on local GPU
    │  bounding boxes + class
    ▼
trilink-core::unproject        ← 2D detections → 3D world coords
    │  world-space anomaly points
    ▼
Scan-vs-BIM engine             ← compare against IFC design geometry
    │  deviation heatmap + report
    ▼
Field display (tablet / AR)    ← inspector sees deviation on site
    │
    ▼  (upload report only — not the raw point cloud)
Cloud audit store              ← immutable evidence + digital twin update

Why edge-first

The field PC handles everything from scan to deviation report. Only the report (JSON + PNG heatmap) is uploaded. This makes a 30-minute on-site inspection feasible even without a reliable cloud connection.

Built on

  • trilink-core — point cloud projection and spatial fusion (Rust)
  • edgesentry-rs — cryptographically verifiable audit records (optional, for high-assurance contexts)

License

MIT OR Apache-2.0

Why EdgeSentry-Inspect?

This document explains the problem EdgeSentry-Inspect addresses, the pain points of current inspection practice, and how it differs from existing solutions on the market.


The problem

Infrastructure inspection — whether on a construction site or a ship hull — is one of the last major engineering workflows that still relies heavily on manual measurement: a person with a spirit level, a tape measure, and a clipboard.

For construction handover inspections, a three-person team typically spends 45–60 minutes per residential unit verifying wall flatness, floor levelness, ceiling height, and opening dimensions. For a 320-unit building, that is 6–8 weeks of inspection time. Disputes about marginal non-conformances are common, because measurements are taken by hand and are not spatially repeatable.

For maritime hull surveys, 30–40 surveyors work for 3–5 days to cover a single vessel. Results are recorded on paper sketches. There is no digital record that can be compared against the survey from three years ago. Every classification renewal starts from scratch.

Neither workflow can meet the demands of modern regulatory programmes, which increasingly require automated, auditable, and spatially precise inspection records.


Pain points

Speed: Manual measurement cannot meet the 30-minute inspection window required for automated regulatory compliance in construction. A 4-hour autonomous robot hull survey is impossible without a fully offline pipeline.

Precision: Human measurements with a tape measure and spirit level carry ±5–10 mm variability. For structural elements where the tolerance is 10 mm, that variability is the entire tolerance budget. Results are not repeatable between inspectors.

Cost: Large teams, long timelines, and repeated work to resolve disputes all accumulate cost. The majority of that cost is labour — not capital equipment — which makes it a recurring operational expense that does not scale down.

No spatial context: A manual report says “Column C4 is 12 mm out of tolerance.” It does not say which face, at what height, over what area. Without spatial context, the contractor cannot confirm the finding or plan a targeted remediation.

No historical comparison: For maritime assets, there is no automated way to compare the current inspection against the one from the previous survey cycle. Structural degradation trends are invisible until a failure occurs.

Connectivity constraints: Construction sites and vessels often have limited or no internet connectivity during the inspection window. Cloud-only platforms cannot return a verdict until the data has been uploaded and processed remotely — which may take hours or be impossible entirely.


Existing solutions and their gaps

Category | Examples | Gap
General 3D scanning software | Faro Scene, Leica Cyclone | No AI anomaly detection; no BIM deviation comparison; results require offline post-processing on a workstation; no edge pipeline for real-time field use
Cloud-based point-cloud platforms | Matterport, Autodesk ReCap 360 | Upload required before any results; unusable with poor or zero connectivity; raw point cloud must leave the site
BIM-to-scan alignment tools | Trimble Connect, Autodesk Construction Cloud | Designed for desktop workflows, not edge deployment; require cloud round-trip; no integrated AI inference
General-purpose AI inspection | Various computer-vision SaaS platforms | Output is images with labels, not millimetre-level spatial deviation measurements; not integrated with BIM geometry
Traditional maritime survey | IACS paper-based procedures | No digital output; no comparison against prior surveys; not automated or scalable

The common thread across all existing solutions is that they treat the scan, the AI analysis, and the BIM comparison as three separate steps performed in three separate tools, with a cloud upload between each. This is incompatible with the real-world constraints of field inspection: time pressure, connectivity limits, and the need for an on-site verdict.


How EdgeSentry-Inspect is different

Edge-first pipeline: All computation — 3D projection, AI inference, BIM deviation, heatmap, report — runs on the field PC or robot. There is no cloud round-trip before the verdict. The system works with zero internet connectivity.

Integrated flow: The pipeline is a single continuous flow: point cloud → AI inference → deviation against BIM design → heatmap → JSON report. There are no hand-off steps between disconnected tools.

Spatial precision: Every anomaly is located in millimetres relative to the approved BIM design geometry. The report includes world-space coordinates, deviation magnitude, and AI classification — not just a photograph.

Open and hardware-independent: Built on open components: trilink-core (Rust), standard IFC files, any AI inference endpoint that accepts images. No proprietary sensor, cloud, or license required.

Maritime-ready: The pipeline handles offline buffering natively. The deviation log accumulates on the robot during a mission with zero connectivity, then syncs after docking. The report payload (1–6 MB) is sized for VDES terrestrial bandwidth, the IMO-standardised maritime data link used in port approaches and coastal waters.
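
The offline buffering described above can be sketched in a few lines. This is an illustrative sketch only, not the real edgesentry-inspect API: the names `DeviationRecord` and `OfflineBuffer` are assumptions. Records always succeed locally; the queue drains only when a link is available.

```rust
use std::collections::VecDeque;

// Hypothetical record type; the real deviation log carries more fields.
#[derive(Debug, Clone, PartialEq)]
struct DeviationRecord {
    capture_ts_us: u64,
    deviation_mm: f32,
}

struct OfflineBuffer {
    queue: VecDeque<DeviationRecord>,
}

impl OfflineBuffer {
    fn new() -> Self {
        Self { queue: VecDeque::new() }
    }

    // Recording always succeeds locally, regardless of connectivity.
    fn record(&mut self, rec: DeviationRecord) {
        self.queue.push_back(rec);
    }

    // Drain everything once a link is available (e.g. after docking).
    // Returns the number of records handed to the uplink.
    fn sync(&mut self, link_up: bool, uplink: &mut Vec<DeviationRecord>) -> usize {
        if !link_up {
            return 0;
        }
        let n = self.queue.len();
        uplink.extend(self.queue.drain(..));
        n
    }
}

fn main() {
    let mut buf = OfflineBuffer::new();
    let mut uplink = Vec::new();
    buf.record(DeviationRecord { capture_ts_us: 1, deviation_mm: 6.2 });
    buf.record(DeviationRecord { capture_ts_us: 2, deviation_mm: 3.1 });
    assert_eq!(buf.sync(false, &mut uplink), 0); // no link during the mission
    assert_eq!(buf.sync(true, &mut uplink), 2);  // drained after docking
}
```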

Optional cryptographic audit: For high-assurance contexts — regulatory submissions, legally binding structural sign-off, maritime class certification — the deviation report can be signed with Ed25519 and hash-chained using edgesentry-rs. This produces an audit record that can be verified independently of EdgeSentry-Inspect infrastructure, with cryptographic proof that the report was not altered after the fact.

EdgeSentry-Inspect — Requirements

For deep-dive step-by-step flows, case studies, and implementation order, see scenarios.md.

Use cases

UC-1: Construction site inspection

  • Trigger: Inspector arrives on site with a 3D sensor device
  • Constraint: Full scan and verdict for one unit within 30 minutes
  • Output: Pass / fail verdict per element; deviation heatmap; deviation report
  • Regulatory target: CONQUAS automated inspection criteria
  • Data flow: Scan → edge PC → verdict displayed on tablet; report uploaded to common data environment

The 30-minute constraint makes a cloud round-trip infeasible. All computation from point cloud to deviation report must complete on the field PC.

UC-2: Maritime structure inspection

  • Trigger: Autonomous robot completes a hull or confined-space scan mission
  • Constraint: Intermittent or zero connectivity during the mission
  • Output: Structural-change flags (real-time, edge); full deviation report (post-mission, cloud)
  • Regulatory target: Maritime Digital Twin integration
  • Data flow: Robot scans → edge pipeline → flag emitted if anomaly exceeds threshold → report synced to central system after mission

KPIs

KPI | Target | Rationale
Inspection time reduction | ≥ 50% vs manual | Replaces manual measurement per element
Labour reduction | ≥ 80% vs manual | Automated deviation computation and reporting
Productivity improvement | ≥ 20% overall | Minimum threshold required by regulatory programmes
Deviation detection accuracy | ≤ 5 mm error | Structural tolerance for concrete and steel elements
Time to verdict (UC-1) | ≤ 30 min per unit | Hard constraint from CONQUAS automated inspection programme
Report upload latency (UC-2) | Best-effort; no hard limit during mission | Mission-critical flag delivered immediately; report synced after

Non-functional requirements

  • Offline operation: Edge pipeline must work with zero internet connectivity
  • Immutability: Uploaded reports must be stored in append-only, tamper-evident storage (Object Lock WORM)
  • Auditability: Every deviation report must carry a timestamp, sensor serial, and IFC model reference
  • No raw point cloud upload: Only the deviation report is uploaded — reduces bandwidth and avoids data sovereignty issues
  • Hardware independence: Edge pipeline must run on a standard field PC with a consumer GPU; no proprietary cloud hardware

Accuracy requirements by scenario

UC-1: Construction site inspection

Parameter | Target | Driver
Deviation detection threshold | 10 mm | Structural concrete tolerance
Position accuracy of anomaly location | ≤ 10 mm | Consistent with deviation threshold
False positive rate | < 5% | Inspector must trust the system; too many false flags cause rejection
Coverage report | ≥ 80% of design surface scanned | Partial scans produce misleading compliant_pct if coverage is not reported

UC-2: Maritime structure inspection

Parameter | Target | Driver
Deviation detection threshold | 5 mm | Hull deformation tolerance; structural safety standard
Position accuracy of anomaly location | ≤ 5 mm | Consistent with deviation threshold
Mission duration without connectivity | Up to 4 hours | Confined hull inspection mission length
Sync latency after docking | < 5 minutes | Control centre needs updated state promptly after mission

EdgeSentry-Inspect — Scenario Analysis

This document walks through the two deployment scenarios in depth: what happens step by step, where the hard problems are, concrete case studies, and the recommended order to implement them.

For the high-level requirements and KPIs behind these scenarios, see requirements.md. For the system architecture that supports both scenarios, see architecture.md.


Scenario 1: Construction Site Inspection (CONQUAS-style)

Context

A site inspector arrives at a partially completed building with a 3D sensor device (handheld or mounted on a small rover). They need to verify that concrete work, rebar placement, wall surfaces, and structural elements conform to the approved BIM design within the specified tolerance (typically 10 mm for concrete). The entire inspection of one unit must be completed and a pass/fail verdict produced before the inspector leaves — the constraint is 30 minutes.

The inspector cannot wait for a cloud round-trip. The field PC must handle everything from scan to verdict.

Step-by-step flow

Step 1 — Load the design

The inspector selects the IFC file for this unit on the field PC. edgesentry-inspect::ifc loads the reference geometry into a design point cloud. This happens once per session and is cached for the rest of the inspection.

Step 2 — Scan the space

The inspector walks the room with the 3D sensor. The sensor streams a continuous point cloud (PointCloud) to the field PC. trilink-core::PoseBuffer records the sensor pose at each capture timestamp.

Step 3 — Project to depth map

For each sweep, trilink-core::project_to_depth_map converts the 3D point cloud to a 2D depth map (DepthMap). Simultaneously, trilink-core::project_to_height_map produces a top-down HeightMap of the floor area. These two images are streamed to the AI inference service.
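
The core of the projection step can be sketched as a z-buffer: each point lands in a pixel, and the smallest depth per pixel wins. This is a simplified orthographic, top-down sketch, not the real trilink-core::project_to_depth_map signature; it assumes non-negative coordinates and a fixed grid cell size.

```rust
#[derive(Clone, Copy)]
struct Point3D {
    x: f32,
    y: f32,
    z: f32,
}

// Orthographic top-down sketch: (x, y) select the pixel, z is the depth.
// Unfilled pixels stay at infinity.
fn project_to_depth_map(points: &[Point3D], w: usize, h: usize, cell_m: f32) -> Vec<f32> {
    let mut depth = vec![f32::INFINITY; w * h];
    for p in points {
        let u = (p.x / cell_m) as usize;
        let v = (p.y / cell_m) as usize;
        if u < w && v < h {
            let idx = v * w + u;
            if p.z < depth[idx] {
                depth[idx] = p.z; // z-buffer: the nearest point wins
            }
        }
    }
    depth
}

fn main() {
    // Two points fall into the same pixel; the nearer one (z = 1.0) wins.
    let pts = [
        Point3D { x: 0.1, y: 0.1, z: 2.0 },
        Point3D { x: 0.1, y: 0.1, z: 1.0 },
    ];
    let d = project_to_depth_map(&pts, 4, 4, 0.5);
    assert_eq!(d[0], 1.0);
}
```

The same z-buffer behaviour is what makes occlusion handling correct later in the flow: points behind foreground objects never reach the depth map.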

Step 4 — AI inference (local GPU)

The inference service runs on the field PC’s GPU. It receives the depth map and height map as images and returns a Vec<Detection>: anomaly bounding boxes with class labels (e.g. rebar_missing, surface_void, misalignment) and confidence scores.

Step 5 — Restore 3D coordinates

For each detection, trilink-core::unproject maps the bounding box centre back to a world-space Point3D using the depth at that pixel and the pose recorded at capture time.
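
The geometry of this step is the standard pinhole unprojection: a pixel plus the depth at that pixel gives a 3D point in the camera frame. The sketch below assumes pinhole intrinsics and stops at camera coordinates; the real trilink-core::unproject would additionally apply the pose recorded at capture time to reach world coordinates.

```rust
// Assumed pinhole intrinsics: focal lengths and principal point in pixels.
struct Intrinsics {
    fx: f32,
    fy: f32,
    cx: f32,
    cy: f32,
}

// Map a pixel (u, v) with depth (metres) back to a camera-frame 3D point.
fn unproject(u: f32, v: f32, depth_m: f32, k: &Intrinsics) -> (f32, f32, f32) {
    let x = (u - k.cx) * depth_m / k.fx;
    let y = (v - k.cy) * depth_m / k.fy;
    (x, y, depth_m)
}

fn main() {
    // Identity-like intrinsics make the round trip easy to see.
    let k = Intrinsics { fx: 1.0, fy: 1.0, cx: 0.0, cy: 0.0 };
    assert_eq!(unproject(2.0, 3.0, 1.0, &k), (2.0, 3.0, 1.0));
}
```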

Step 6 — Compute deviation

edgesentry-inspect::deviation runs a k-d tree nearest-neighbour search: for every scan point, it finds the nearest design point and records the distance in millimetres. Points beyond the configured threshold (default 10 mm) are flagged.
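
A minimal sketch of this step, using brute-force nearest-neighbour search instead of the k-d tree (O(n·m) rather than O(n log m), but the same answer). Inputs are in metres, output in millimetres, and points beyond the threshold are flagged by index.

```rust
// For every scan point, distance (mm) to the nearest design point.
fn deviation_mm(scan: &[[f32; 3]], design: &[[f32; 3]]) -> Vec<f32> {
    scan.iter()
        .map(|s| {
            design
                .iter()
                .map(|d| {
                    let dx = s[0] - d[0];
                    let dy = s[1] - d[1];
                    let dz = s[2] - d[2];
                    (dx * dx + dy * dy + dz * dz).sqrt() * 1000.0
                })
                .fold(f32::INFINITY, f32::min)
        })
        .collect()
}

// Indices of scan points beyond the configured threshold (default 10 mm).
fn flagged(devs: &[f32], threshold_mm: f32) -> Vec<usize> {
    devs.iter()
        .enumerate()
        .filter(|(_, d)| **d > threshold_mm)
        .map(|(i, _)| i)
        .collect()
}

fn main() {
    // One compliant point, one 20 mm off the design surface.
    let devs = deviation_mm(&[[0.0, 0.0, 0.0], [0.02, 0.0, 0.0]], &[[0.0, 0.0, 0.0]]);
    assert!((devs[1] - 20.0).abs() < 0.01);
    assert_eq!(flagged(&devs, 10.0), vec![1]);
}
```

The production engine swaps the inner loop for a k-d tree query (the kiddo crate), which is what makes the step feasible for hundred-thousand-point scans within the 30-minute budget.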

Step 7 — Generate heatmap and report

edgesentry-inspect::heatmap projects the flagged points back to 2D with colour-coded deviation (green / yellow / red). edgesentry-inspect::report writes the JSON deviation report containing compliant_pct, max_deviation_mm, mean_deviation_mm, and the anomaly list.

Step 8 — Inspector reviews on site

The heatmap and report appear on the inspector’s tablet. The inspector sees exactly which elements failed and by how much. The pass/fail verdict is shown before they leave the room. Total elapsed time from scan start to verdict: target under 30 minutes.

Step 9 — Upload audit evidence

edgesentry-inspect::sync uploads the report JSON and heatmap PNG to the cloud audit store (S3 Object Lock WORM). The raw point cloud is not uploaded — only the report. The upload happens in the background and does not block the on-site verdict.
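
The non-blocking upload can be sketched with a background worker fed over a channel, so the on-site verdict is never blocked by the network. The worker body is a placeholder; the real sync module and its S3 client are assumed, not shown.

```rust
use std::sync::mpsc;
use std::thread;

// Spawn a background uploader. Returns the sender for queueing reports and
// a handle whose result is the number of reports handled.
fn spawn_uploader() -> (mpsc::Sender<String>, thread::JoinHandle<usize>) {
    let (tx, rx) = mpsc::channel::<String>();
    let handle = thread::spawn(move || {
        let mut uploaded = 0;
        for report_json in rx {
            // Placeholder for the S3 Object Lock (WORM) upload.
            let _ = report_json;
            uploaded += 1;
        }
        uploaded // channel closed: all queued reports handled
    });
    (tx, handle)
}

fn main() {
    let (tx, handle) = spawn_uploader();
    // Verdict is already on the tablet; the upload proceeds in the background.
    tx.send(String::from("{\"compliant_pct\": 94.2}")).unwrap();
    drop(tx);
    assert_eq!(handle.join().unwrap(), 1);
}
```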

What makes this scenario difficult

  • 30-minute hard constraint: Every processing step must run on the field PC without cloud round-trips. The projection and deviation steps must complete in seconds, not minutes.
  • IFC geometry fidelity: IFC files for large buildings can be complex. The ifc.rs loader must extract only the relevant geometry for the current unit without loading the entire building model.
  • Partial scans: An inspector may not scan every surface perfectly. The deviation engine must report coverage (what percentage of the design surface was actually scanned) alongside deviation.
  • Occlusions: Scaffolding, equipment, and workers occlude the scene. Points behind foreground objects must not be mis-attributed to the design surface behind them. The Z-buffer in project_to_depth_map handles this correctly.
  • Alignment (registration): The scan point cloud and the IFC design cloud live in different coordinate systems until aligned. The scanner’s SLAM map origin must be registered to the IFC global coordinate system before deviation can be computed. If done manually — by an operator identifying matching landmarks in the scan and the IFC model — the result depends on operator skill and introduces inconsistency between inspectors. The practical solution is fiducial markers (e.g. ArUco or AprilTag targets) placed at IFC-known coordinates before the inspection begins. The SLAM system detects the markers automatically and computes the registration without operator judgement. Manual alignment with 3 control points typically takes 5–15 minutes per unit; fiducial-assisted alignment reduces this to under 1 minute and removes operator variability.

For detailed accuracy requirements for this scenario, see requirements.md.


Scenario 2: Maritime Structure Inspection

Context

An autonomous robot (wheeled, crawling, or swimming) conducts a routine inspection of a ship hull, dock structure, or confined-space area. The robot operates independently for the duration of a mission (30 minutes to several hours). Connectivity during the mission ranges from poor to zero — the robot cannot rely on a cloud connection for any decision that needs to happen in real time. Structural changes (new corrosion, deformation, missing fasteners) must be detected and flagged during the mission so the robot can revisit a flagged area or alert the control centre immediately.

The design reference in this scenario may be a previous scan (change detection) rather than an IFC file (deviation from design). Both modes are supported.

Step-by-step flow

Step 1 — Load the reference model

Before mission start, either (a) load an IFC hull design file, or (b) load a baseline point cloud from a previous inspection as the reference for change detection.

Step 2 — Robot begins mission

The robot navigates autonomously along a pre-planned inspection route. The 3D sensor streams a point cloud continuously. trilink-core::PoseBuffer records the sensor pose at each sweep timestamp.

Step 3 — Project and infer (continuous loop, on-board)

For each sweep, trilink-core::project_to_depth_map, AI inference, and trilink-core::unproject run in sequence on the robot’s on-board processor. The target latency per sweep is under 2 seconds, so the robot can slow or stop near anomalies in real time.

Step 4 — Deviation / change detection

Scan points are compared against the reference model using edgesentry-inspect::deviation. The maritime threshold is 5 mm — hull deformation tolerance is tighter than construction concrete.

Step 5a — No anomaly: continue mission

The robot continues on the planned route. Scan data accumulates in the local deviation log.

Step 5b — Anomaly exceeds 2× threshold: immediate flag

edgesentry-inspect::sync emits a structural-change flag to the local message queue, or directly to the control centre via radio if connectivity is available at that moment. The robot can optionally slow, stop, or re-scan the flagged area.
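
The three-way decision in Steps 5a/5b can be sketched as a small classifier. The names (`Action`, `classify`) are illustrative, not the real edgesentry-inspect API: under the threshold the robot continues, over the threshold the deviation is logged locally, and over 2× the threshold an immediate flag is raised.

```rust
#[derive(Debug, PartialEq)]
enum Action {
    Continue,      // within tolerance: keep scanning
    LogOnly,       // over threshold: record in the local deviation log
    ImmediateFlag, // over 2x threshold: emit flag to queue / radio now
}

fn classify(deviation_mm: f32, threshold_mm: f32) -> Action {
    if deviation_mm > 2.0 * threshold_mm {
        Action::ImmediateFlag
    } else if deviation_mm > threshold_mm {
        Action::LogOnly
    } else {
        Action::Continue
    }
}

fn main() {
    // Maritime threshold is 5 mm; 12 mm is past the 2x flag boundary.
    assert_eq!(classify(12.0, 5.0), Action::ImmediateFlag);
    assert_eq!(classify(6.0, 5.0), Action::LogOnly);
    assert_eq!(classify(3.0, 5.0), Action::Continue);
}
```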

Step 6 — Mission complete, robot docks

The robot returns to its docking station on the vessel or at the facility and connects to the vessel’s local network. edgesentry-inspect::sync uploads the full deviation report and heatmap PNG to the cloud digital twin store. The digital twin is updated with the new as-inspected geometry.

The outbound link depends on the operational context when the robot docks:

Context | Link | Bandwidth | Feasibility for report (~1–6 MB)
Drydock or berth | Shore-side Ethernet / Wi-Fi | 10–1000 Mbps | Instant
At sea, within ~35 nm of shore | VDES terrestrial (VHF, ITU-R M.2092) | up to 307 kbps | ~30 sec – 3 min
At sea, beyond VHF range | S-VDES (satellite) or VSAT / Starlink Maritime | 100 kbps – 100 Mbps | seconds to minutes
Fallback | AIS messaging (legacy, pre-VDES) | ~10 kbps | marginal; JSON only, no PNG

VDES is the IMO-standardised next-generation maritime data exchange system and the natural fit for ship-to-shore report delivery within port approaches and coastal waters. It is also the communications layer underpinning national port authority digital twin strategies, making it the recommended option for integration with those platforms. The raw point cloud is not uploaded — only the report. The 1–6 MB payload is well within VDES bandwidth even at the lower end of coastal range.
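
A quick back-of-the-envelope check on the bandwidth figures above: transfer time is payload in kilobits divided by the link rate.

```rust
// Transfer time in seconds for a payload (decimal megabytes) over a link
// rate in kilobits per second: MB -> kilobits, then divide by the rate.
fn transfer_secs(payload_mb: f64, link_kbps: f64) -> f64 {
    (payload_mb * 8.0 * 1000.0) / link_kbps
}

fn main() {
    // Worst case in the table: 6 MB at the 307 kbps VDES terrestrial rate.
    let t = transfer_secs(6.0, 307.0);
    assert!(t > 150.0 && t < 160.0); // roughly 2.6 minutes
}
```

At 307 kbps a 6 MB report takes about 156 seconds, and a 1 MB report about 26 seconds, consistent with the "~30 sec – 3 min" range in the table.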

Step 7 — Control centre review

Engineers review the uploaded report. Flagged structural changes are prioritised for maintenance. The updated digital twin shows the current state of the asset.

What makes this scenario difficult

  • Zero connectivity during mission: No cloud calls are possible in confined spaces or underwater. Every decision must be made locally. The robot must buffer the full deviation log and sync it after docking.
  • AI model must run on robot hardware: The inference service runs on the robot’s on-board SoC (e.g. NVIDIA Jetson). The model must be quantised to fit memory and compute constraints without degrading accuracy below the 5 mm detection threshold. This quantisation is an ongoing operational burden — every model update requires re-quantisation and re-validation.
  • Change detection vs. design deviation: For vessels where the as-built state already deviates from the original design (common in older ships), using the IFC design file as the reference produces false positives. Instead, a previously accepted baseline scan is used as the reference. The system must support both modes.
  • Pose accuracy in GPS-denied environments: SLAM accuracy degrades in featureless confined spaces (smooth hull plates, flooded bilge tanks). Pose drift accumulates over a long mission and degrades the 5 mm position accuracy requirement. Loop closure or fiducial markers must be used at regular intervals.
  • Variable lighting and surface conditions: Corrosion, marine growth, and water accumulation on hull surfaces affect the point cloud density and AI inference quality differently than a clean construction site.

For detailed accuracy requirements for this scenario, see requirements.md.


Deployment Comparison

Aspect | Scenario 1 (Construction) | Scenario 2 (Maritime)
Connectivity | Available (field PC on Wi-Fi or LTE) | Not available during mission
Reference model | IFC design file | IFC or previous scan (change detection)
Deviation threshold | 10 mm | 5 mm
Time constraint | 30 min (hard) | Per-mission (hours); verdict not needed on site
Inference hardware | Field PC GPU (no quantisation) | Robot SoC (quantisation required)
On-site feedback | Inspector tablet / AR headset | Robot slows / stops at anomaly; flag to control centre
Cloud sync trigger | After verdict, in background | After docking
Model update cadence | Update inference endpoint only | Re-quantise + re-validate + redeploy to robot
Alignment complexity | SLAM → IFC registration; fiducial markers strongly recommended to remove operator variability | Same, plus SLAM drift management over long missions
EdgeSentry-Inspect code change | None | None — inference.base_url points to localhost; threshold configured
Overall difficulty | Medium | High

The EdgeSentry-Inspect codebase is identical for both scenarios. The difficulty difference is almost entirely in the inference hardware layer (quantisation for Scenario 2) and the connectivity layer (offline buffering for Scenario 2).


Case Studies

Case Study A — High-rise apartment handover inspection (Scenario 1)

Operator: A main contractor delivering a 40-storey residential tower.

Environment: Indoor apartment units, typically 60–90 m² each. 320 units total. Elevator access. 240V power available throughout.

Problem: Before handover to the developer, every unit must pass a structural inspection: wall flatness, floor levelness, ceiling height, opening dimensions. Currently, a three-person team spends 45–60 minutes per unit using spirit levels and tape measures. At 320 units the total inspection takes 6–8 weeks. Disputes about marginal non-conformances are common.

Deployment:

  • Inspector brings a field PC and a handheld 3D sensor into each unit.
  • IFC model for the floor plate is pre-loaded on the field PC (one file covers all units on that level with offsets).
  • Inspector scans the unit in a single walk-through (approximately 15 minutes).
  • EdgeSentry-Inspect produces a deviation report showing all elements outside the 10 mm tolerance within 5 minutes of scan completion.
  • Total time per unit: ~20 minutes including setup.

Outcome:

  • Inspection time reduced from 45–60 minutes to 20 minutes per unit (55% reduction).
  • Non-conformances are documented with millimetre-level precision and photographic evidence — disputes are resolved by the data, not by argument.
  • Report is uploaded to the project’s common data environment and linked to the IFC model automatically.

Why this scenario is straightforward: Stable indoor environment, good sensor range, no connectivity constraint. The field PC has a standard GPU. No model quantisation required.


Case Study B — Public infrastructure: MRT station concourse (Scenario 1)

Operator: A civil engineering contractor completing a new metro station.

Environment: Large open concourse, ~2,000 m² floor area, ceiling height 8–12 m. Construction is ongoing in adjacent areas. Equipment and workers present during inspection windows.

Problem: Inspection windows are narrow (2–4 hours at night) due to ongoing construction. A full flatness and alignment survey of all structural elements must be completed within the window. Traditional total-station survey takes 6–8 hours for a space this size. The station cannot open without a signed-off deviation report for each structural element.

Deployment:

  • A rover-mounted 3D sensor is driven through the concourse by a single operator.
  • The 8–12 m ceiling height requires a sensor with longer range than a handheld device (trade-off: lower density at distance).
  • EdgeSentry-Inspect adjusts the deviation threshold dynamically by element type: 5 mm for column faces, 15 mm for wall panels at ceiling height.
  • The full concourse scan is completed in 90 minutes; the deviation report is ready 10 minutes after the scan ends.

Key complexity vs. Case Study A:

  • Dynamic threshold by element type (not a single global threshold).
  • Large scan area requires stitching multiple sweeps (the SLAM system handles this; EdgeSentry-Inspect receives a unified point cloud).
  • Worker and equipment occlusions are higher — coverage reporting is critical to flag under-inspected areas.

Case Study C — Drydock hull inspection (Scenario 2)

Operator: A ship repair yard conducting a class renewal survey for a 180-metre bulk carrier.

Environment: Ship in drydock. Hull is accessible from ground level and via scaffolding. Some confined ballast tank spaces require a crawling robot. No cellular coverage inside the tanks.

Problem: A class renewal survey requires documenting the thickness and surface condition of the entire hull. Traditional ultrasonic thickness gauging and visual inspection requires 30–40 surveyors working for 3–5 days. The yard wants to reduce survey time to 1 day and produce a digital record that can be compared against the vessel’s previous survey.

Deployment (external hull, drydock):

  • A wheeled robot with a 3D sensor and on-board GPU crawls the external hull surface.
  • Reference model: previous survey scan (3 years ago), not the original IFC (the vessel has been modified since build).
  • EdgeSentry-Inspect runs in change-detection mode: new scan vs. previous scan.
  • Deviation > 5 mm (interpreted as surface wastage or deformation) triggers an immediate flag to the yard control room via radio (connectivity available for external hull in drydock).
  • Robot completes the external hull in 8 hours. Report uploaded at mission end.

Deployment (confined ballast tanks):

  • A smaller crawling robot enters the tank through the access manhole.
  • No connectivity inside the tank.
  • Deviation flags accumulate in the local buffer.
  • When the robot exits the tank, deviation log is synced automatically.
  • Control room reviews all tank reports after the tank inspection session.

Outcome:

  • Survey time reduced from 3–5 days to approximately 28 hours (hull + tanks).
  • Digital deviation map is directly comparable against the previous survey — structural change over the 3-year period is immediately visible as a colour-coded overlay.
  • Classification society accepts the digital report as primary evidence (paper sketches are no longer required).

Key complexity vs. Case Study A:

  • Two sub-scenarios in one deployment: connected external hull + disconnected confined tanks.
  • Quantisation required for the crawling robot SoC.
  • Change detection mode instead of IFC deviation mode.
  • Sync-after-docking logic must handle partial reports gracefully (robot may need to exit and re-enter a tank multiple times).

Recommended implementation order

Implement Scenario 1 (construction, connected) first.

Rationale

  1. The 30-minute constraint validates the entire edge pipeline. If the full cycle — project → infer → unproject → deviation → report — can run within 30 minutes on a field PC, Scenario 2 (which has no hard on-site time constraint) is straightforwardly achievable with the same code.

  2. No quantisation dependency. Scenario 1 runs on a standard field PC GPU. There is no dependency on a robot hardware team or model quantisation toolchain. The pipeline can be built, tested, and demonstrated without hardware partners.

  3. IFC deviation mode is the foundation for change-detection mode. Scenario 2’s change-detection mode (new scan vs. previous scan) reuses the entire deviation engine — the “design reference cloud” is simply replaced by a previous scan cloud. Implementing IFC deviation first means Scenario 2 requires no structural code change.

  4. A working Scenario 1 deployment is the proof of value needed to justify Scenario 2 investment. Convincing a ship repair yard or port authority to trial an autonomous robot requires evidence that the AI + deviation pipeline produces reliable results. A construction site handover inspection (lower operational complexity, easier access, controlled environment) is the right first deployment to generate that evidence.

  5. Scenario 2 adds dependencies outside EdgeSentry-Inspect’s control. Robot SoC quantisation, SLAM accuracy in GPS-denied environments, and mission planning are all provided by the robot platform partner. Those integrations are easier to negotiate and execute after a live Scenario 1 deployment has demonstrated the pipeline’s accuracy.

Suggested phasing

Phase | Scenario | Target use case | Prerequisite
Phase 1 | Construction site inspection | Apartment handover, civil infrastructure | trilink-core #30–#34 merged; M2–M4 complete
Phase 2 | Maritime — external (connected) | Drydock hull survey, dock structure | Phase 1 reference deployment; at least one confirmed customer
Phase 3 | Maritime — confined (offline robot) | Ballast tanks, engine rooms, underwater | Phase 2 complete; robot partner confirms quantisation and offline sync

Phase 3 requires no changes to EdgeSentry-Inspect code. The investment is entirely in the robot platform integration layer (quantisation, fleet management, sync-after-docking retry logic) — work that is justified by Phase 2 results.

For a breakdown of the factors that determine measurement accuracy in the field, see architecture.md.

EdgeSentry-Inspect — Architecture

Edge-cloud split

┌──────────────────────────────────────────────────────────┐
│  FIELD PC (Edge)                                         │
│                                                          │
│  3D sensor (LiDAR / ToF)                                 │
│      │  point cloud (PointCloud)                         │
│      ▼                                                   │
│  trilink-core::project_to_depth_map                      │
│  trilink-core::project_to_height_map                     │
│      │  DepthMap  HeightMap                              │
│      ▼                                                   │
│  AI inference (built-in model or HTTP endpoint)         │
│      │  Vec<Detection>  (BBox2D + class + confidence)    │
│      ▼                                                   │
│  trilink-core::unproject                                 │
│      │  world-space Point3D per detection                │
│      ▼                                                   │
│  edgesentry-inspect::ifc      — IFC geometry             │
│  edgesentry-inspect::deviation — deviation (mm)          │
│  edgesentry-inspect::heatmap   — heatmap PNG             │
│  edgesentry-inspect::report    — JSON report             │
│      │                                                   │
│      ├── displayed on tablet / AR headset immediately    │
└──────┬───────────────────────────────────────────────────┘
       │  report JSON + heatmap PNG  (not raw point cloud)
       ▼
┌──────────────────────────────────────────────────────────┐
│  CLOUD (Audit Store / Digital Twin)                      │
│                                                          │
│  edgesentry-inspect::sync                                │
│      │  S3-compatible upload (Object Lock WORM)          │
│      │  structural-change flag → message queue           │
│      ▼                                                   │
│  Audit report store   — immutable evidence               │
│  Digital twin update  — as-built IFC delta               │
│  Central dashboard    — fleet-wide deviation trends      │
└──────────────────────────────────────────────────────────┘

What runs on the field PC

| Step | Why edge |
|---|---|
| 3D → 2D projection | Point clouds are gigabytes; projecting locally avoids upload before a verdict |
| AI inference | Sub-second latency; local GPU; works offline |
| 2D → 3D unprojection | Needed for on-site AR feedback |
| IFC load + deviation computation | Inspector must see deviation before leaving the site |
| Heatmap + report generation | Report is the upload artefact; must be ready on site |

What goes to the cloud

| Data | Why cloud |
|---|---|
| Deviation report (JSON) | Immutable audit evidence; regulatory archive |
| Heatmap (PNG) | Human-readable evidence attached to the report |
| Structural-change flag | Real-time alert to central monitoring (UC-2) |
| As-built IFC delta | Persistent update to the digital twin asset model |

Component design

edgesentry-inspect::ifc

  • Input: IFC file path (.ifc)
  • Output: Vec<Point3D> — design reference point cloud sampled from wall/slab/column geometry
  • Implementation: ifcopenshell via Python FFI (pyo3) or a native Rust IFC reader
  • The reference cloud is loaded once per inspection session and cached in memory

edgesentry-inspect::deviation

  • Input: scan Vec<Point3D> (from trilink-core::unproject) + design Vec<Point3D> (from ifc)
  • Output: per-scan-point deviation f32 in metres
  • Algorithm: k-d tree nearest-neighbour search (kiddo crate); O(n log m) per scan
  • Threshold: configurable (default 10 mm for construction, 5 mm for maritime hull)
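The contract of this step can be sketched with a brute-force nearest-neighbour search. A std-only sketch: the real engine uses a kiddo k-d tree for O(n log m), and `Point3D` here is a stand-in for the trilink-core type.

```rust
// Brute-force per-point deviation: distance from each scan point to its
// nearest design point. O(n·m) — illustration only; production uses a
// k-d tree (kiddo) for O(n log m).
#[derive(Clone, Copy)]
struct Point3D { x: f32, y: f32, z: f32 }

fn dist(a: Point3D, b: Point3D) -> f32 {
    ((a.x - b.x).powi(2) + (a.y - b.y).powi(2) + (a.z - b.z).powi(2)).sqrt()
}

/// Per-scan-point deviation in metres against the design reference cloud.
fn deviations(scan: &[Point3D], design: &[Point3D]) -> Vec<f32> {
    scan.iter()
        .map(|&p| design.iter().map(|&q| dist(p, q)).fold(f32::INFINITY, f32::min))
        .collect()
}

fn main() {
    let design = [Point3D { x: 0.0, y: 0.0, z: 0.0 }];
    let scan = [Point3D { x: 0.012, y: 0.0, z: 0.0 }]; // 12 mm off the design surface
    let dev = deviations(&scan, &design);
    println!("{:.1} mm", dev[0] * 1000.0); // exceeds the 10 mm construction default
}
```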

edgesentry-inspect::heatmap

  • Input: scan points + per-point deviation values
  • Output: PNG image — deviation mapped to colour (green ≤ threshold, yellow 2×, red 4×+)
  • Reuses trilink-core::project_to_depth_map to position coloured points in 2D
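The colour banding can be sketched as a pure function. Only the green/yellow/red anchors are specified above; the orange band between 2× and 4× is an assumption for illustration.

```rust
// Deviation → RGB mapping for the heatmap. Deviation and threshold share
// a unit (metres). Bands: green within threshold, yellow up to 2×,
// orange 2×–4× (assumed intermediate band), red at 4× and beyond.
fn deviation_colour(dev: f32, threshold: f32) -> [u8; 3] {
    if dev <= threshold {
        [0, 200, 0]       // green: compliant
    } else if dev <= 2.0 * threshold {
        [255, 220, 0]     // yellow: up to 2× threshold
    } else if dev < 4.0 * threshold {
        [255, 128, 0]     // orange: 2×–4× (assumption)
    } else {
        [220, 0, 0]       // red: 4× and beyond
    }
}

fn main() {
    // 23.1 mm anomaly against a 10 mm threshold falls in the 2×–4× band.
    println!("{:?}", deviation_colour(0.0231, 0.010));
}
```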

edgesentry-inspect::report

JSON schema:

{
  "capture_ts_us": 1711234567000000,
  "ifc_ref": "building-A-floor-3-v12.ifc",
  "scan_point_count": 142850,
  "compliant_pct": 94.2,
  "max_deviation_mm": 23.1,
  "mean_deviation_mm": 3.8,
  "anomalies": [
    {
      "world_pos": { "x": 12.3, "y": 4.1, "z": 2.05 },
      "deviation_mm": 23.1,
      "ai_class": "rebar_missing",
      "ai_confidence": 0.91
    }
  ]
}
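The summary fields of this schema follow directly from the per-point deviation values. A sketch of that derivation (returned in order: compliant_pct, max_deviation_mm, mean_deviation_mm):

```rust
// Derive the report's summary statistics from per-point deviations (mm).
fn summarise(devs_mm: &[f32], threshold_mm: f32) -> (f32, f32, f32) {
    let n = devs_mm.len() as f32;
    let compliant = devs_mm.iter().filter(|&&d| d <= threshold_mm).count() as f32;
    let max = devs_mm.iter().fold(0.0_f32, |a, &b| a.max(b));
    let mean = devs_mm.iter().sum::<f32>() / n;
    (100.0 * compliant / n, max, mean)
}

fn main() {
    // One compliant point, one 15 mm anomaly, 10 mm threshold.
    let (pct, max, mean) = summarise(&[5.0, 15.0], 10.0);
    println!("compliant {pct}%, max {max} mm, mean {mean} mm");
}
```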

edgesentry-inspect::sync

  • Uploads report JSON and heatmap PNG to an S3-compatible audit store (Object Lock WORM)
  • Emits a structural-change flag to a message queue (SQS or MQTT) when any anomaly exceeds 2× the configured threshold
  • Reuses the S3-compatible interface pattern from edgesentry-rs
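The flag rule reduces to a one-line predicate (a sketch; the real module also handles the S3 upload and queue transport):

```rust
// Structural-change flag: emitted when any anomaly exceeds 2× the
// configured deviation threshold (both in mm).
fn should_flag(anomaly_deviations_mm: &[f32], threshold_mm: f32) -> bool {
    anomaly_deviations_mm.iter().any(|&d| d > 2.0 * threshold_mm)
}

fn main() {
    // 23.1 mm anomaly vs 10 mm threshold: 2× = 20 mm, so the flag fires.
    println!("{}", should_flag(&[23.1], 10.0));
}
```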

AI inference modes

EdgeSentry-Inspect supports two inference backends, selected by inference.mode in config.toml. Both produce the same Vec<Detection> output consumed by the rest of the pipeline.

Built-in model (inference.mode = "builtin")

A lightweight defect-detection model bundled with EdgeSentry-Inspect. Runs in-process via ONNX Runtime — no external server or network access required.

  • Input: DepthMap + HeightMap images produced by trilink-core
  • Output: Vec<Detection> — bounding boxes with class labels and confidence scores
  • Initial class coverage: surface_void, misalignment, rebar_exposure
  • Hardware: runs on a standard field PC CPU; no dedicated GPU required for basic use

Use builtin to get started quickly, for offline-only deployments, or when no vendor model is available.

External HTTP endpoint (inference.mode = "http")

The inference client POSTs the depth map and height map to inference.base_url and receives a detection list. The endpoint can be:

  • A vendor’s model server running locally on the field PC or robot (same host, no internet needed)
  • A specialised cloud inference API (Scenario 1 / connected deployments only)

This mode is the integration point for vendor collaboration. Vendors implement the server side with their own model; EdgeSentry-Inspect calls it with a fixed schema. The operator sets inference.base_url in config — no code change required.

Interface contract:

POST /detect
Content-Type: multipart/form-data
  depth_map: <PNG bytes>
  height_map: <PNG bytes>

200 OK
[{"x":120,"y":45,"w":30,"h":20,"class":"surface_void","confidence":0.87}, ...]

| Mode | When to use |
|---|---|
| builtin | No vendor model; offline-only; getting started |
| http — local vendor server | Partner model on the same device; no internet needed |
| http — cloud API | Scenario 1 (connected); vendor hosts the model remotely |
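A config.toml fragment showing the mode switch might look like the following. Only inference.mode and inference.base_url are named in this document; the other keys are hypothetical renderings of the config items listed under M4 (IFC path, threshold, output directory).

```toml
[inference]
mode = "http"                        # "builtin" | "http"
base_url = "http://127.0.0.1:8080"   # used only when mode = "http"

[deviation]
threshold_mm = 10.0                  # 10 mm construction, 5 mm maritime hull

[ifc]
path = "building-A-floor-3-v12.ifc"

[output]
dir = "./reports"
```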

Optional: cryptographically verifiable audit records

If the inspection context requires mathematically verifiable, tamper-evident audit records — for example, regulatory submissions where a third party must independently verify that a report was not altered after the fact — the deviation report can be signed and hash-chained using edgesentry-rs.

edgesentry-rs provides:

| Capability | How it applies to EdgeSentry-Inspect |
|---|---|
| Ed25519 payload signing | The field PC signs each deviation report with a device key stored in a hardware secure element — proof that the report came from a specific sensor device |
| BLAKE3 hash chaining | Each report carries prev_record_hash, forming a chain — a missing or reordered report is immediately detectable |
| Sequence monotonicity | Report sequence numbers are strictly increasing — replay and deletion are cryptographically detectable |
| IngestService::ingest() | Cloud-side gate re-verifies signature and hash chain on upload — rejects tampered or out-of-sequence reports |

This layer is opt-in. For standard construction inspections, the S3 Object Lock WORM store (edgesentry-inspect::sync) is sufficient. For high-assurance contexts (maritime hull certification, legally binding structural sign-off), wrapping the report in an edgesentry-rs AuditRecord before upload provides a cryptographic audit trail that can be verified independently of EdgeSentry-Inspect infrastructure.
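The chain-verification idea can be sketched std-only, with std's DefaultHasher standing in for BLAKE3 and signatures omitted; both substitutions are for illustration only, since edgesentry-rs itself uses BLAKE3 and Ed25519.

```rust
// Hash-chain + sequence-monotonicity check, with DefaultHasher as a
// dependency-free stand-in for BLAKE3 (illustration only).
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

struct Record { seq: u64, payload: String, prev_hash: u64 }

fn record_hash(r: &Record) -> u64 {
    let mut h = DefaultHasher::new();
    (r.seq, &r.payload, r.prev_hash).hash(&mut h);
    h.finish()
}

/// A valid chain has strictly consecutive sequence numbers and each
/// record's prev_hash equal to the hash of its predecessor.
fn verify_chain(records: &[Record]) -> bool {
    records.windows(2).all(|w| {
        w[1].seq == w[0].seq + 1 && w[1].prev_hash == record_hash(&w[0])
    })
}

fn main() {
    let a = Record { seq: 1, payload: "report-1".into(), prev_hash: 0 };
    let b = Record { seq: 2, payload: "report-2".into(), prev_hash: record_hash(&a) };
    println!("chain valid: {}", verify_chain(&[a, b]));
}
```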


Accuracy factors

Target accuracy is 10 mm for construction (UC-1) and 5 mm for maritime (UC-2). The following table shows the main factors that determine measurement accuracy in the field and how each is mitigated.

| Factor | Impact | Mitigation |
|---|---|---|
| 3D sensor accuracy | Primary driver | Use a sensor rated for the target accuracy at the required range |
| SLAM pose accuracy | Propagates into deviation computation | Loop closure at regular intervals; fiducial markers in featureless spaces |
| IFC alignment error | Shifts the entire deviation map | Use ≥ 3 known control points for IFC-to-SLAM registration; verify residuals < 2 mm. For operator-independent results, place fiducial markers (ArUco / AprilTag) at IFC-known coordinates before the inspection — the SLAM system detects them automatically and removes manual judgement from the registration step |
| Projection round-trip error | Verified < 1 mm by trilink-core round-trip test (#34) | Arithmetic error is not a significant contributor |
| k-d tree resolution | Nearest-neighbour search accuracy | Design cloud sampled at ≤ 2 mm pitch (finer than the detection threshold) |

Technology summary

| Component | Language | Key dependencies |
|---|---|---|
| edgesentry-inspect (deviation engine) | Rust | trilink-core, kiddo (k-d tree), image (PNG), pyo3 (IFC via Python) |
| edgesentry-inspect (CLI) | Rust | clap, tokio, reqwest (inference client), serde_json |
| edgesentry-inspect (cloud sync) | Rust | S3-compatible HTTP client (reuses the edgesentry-rs interface) |
| IFC geometry | Python (via pyo3) | ifcopenshell |
| AI inference — built-in | Rust + ONNX Runtime | Bundled lightweight defect-detection model (ort crate) |
| AI inference — external | HTTP (reqwest) | Vendor endpoint: POST image → Vec<BBox2D>; local or cloud |
| Cloud audit store | AWS | S3 + Object Lock (WORM), SQS |

Open datasets for PoC

| Domain | Dataset | Purpose |
|---|---|---|
| Construction | BIMNet (public IFC models) | Reference design geometry for scan-vs-BIM |
| Construction | ETH3D / S3DIS point clouds | Sample scan clouds for deviation testing |
| Maritime | MBES survey data | Hull scan point clouds |
| General | NYU Depth V2 | Depth map validation for projection correctness |

EdgeSentry-Inspect — Roadmap

The following are prerequisites for all EdgeSentry-Inspect milestones. They are tracked and implemented in the trilink-core repository.

| Issue | Deliverable | Status |
|---|---|---|
| #30 | PointCloud, DepthMap, HeightMap types | Todo |
| #31 | project_to_depth_map (3D → depth map) | Todo |
| #32 | project_to_height_map (3D → height map) | Todo |
| #33 | docs/math.md forward projection sections | Todo |
| #34 | Project → unproject round-trip tests | Todo |

Do not start M2 until #30, #31, #32, and #34 are merged.


M2 — IFC Loader and Deviation Engine

Goal: Given a scanned point cloud and an IFC design file, compute a per-point deviation in millimetres.

Deliverables:

  • Cargo.toml — workspace root; member: crates/edgesentry-inspect
  • src/ifc.rs — load IFC geometry as Vec<Point3D> (design reference cloud)
  • src/deviation.rs — k-d tree nearest-neighbour deviation; configurable threshold
  • src/report.rs — JSON report serialisation (schema in architecture.md)
  • Integration test: load sample IFC fixture → compute deviation against known scan cloud → assert compliant_pct, max_deviation_mm, mean_deviation_mm

M3 — Heatmap Rendering

Goal: Produce a PNG heatmap that maps per-point deviation to colour, positioned in 2D using the depth map projection.

Deliverables:

  • src/heatmap.rs — deviation → RGB colour (green ≤ threshold, yellow 2×, red 4×+) → PNG via image crate
  • Reuses trilink-core::project_to_depth_map to position each coloured point in 2D
  • Integration test: known deviation values → verify expected pixel colours at expected positions in output PNG

M4 — Field PC Pipeline (CLI)

Goal: End-to-end pipeline on the field PC from point cloud to deviation report, runnable as a single CLI command.

Deliverables:

  • src/main.rs — CLI: edgesentry-inspect scan --config config.toml
  • Wires: point cloud ingress (trilink-core::FrameSource) → project_to_depth_map → AI inference client → unproject → deviation → heatmap → report
  • Config: IFC file path, inference.mode (builtin | http), inference endpoint URL (if http), deviation threshold, output directory
  • End-to-end test with MockSource + mock inference server: report produced, all fields correct, heatmap PNG written

M5 — Cloud Sync

Goal: Upload the deviation report and heatmap to the immutable audit store; emit structural-change flags.

Deliverables:

  • src/sync.rs — S3-compatible upload (Object Lock WORM); structural-change flag → SQS or MQTT when anomaly exceeds 2× threshold
  • Integration test: mock S3 + mock SQS → assert report uploaded, flag published for above-threshold anomaly, no flag for below-threshold

M6 — Built-in Inference Model

Goal: Ship a lightweight ONNX defect-detection model with EdgeSentry-Inspect so that inference.mode = "builtin" works out of the box without an external server.

Deliverables:

  • src/inference/mod.rs — InferenceBackend trait; dispatches to built-in or HTTP based on inference.mode
  • src/inference/builtin.rs — ONNX Runtime runner (ort crate); loads bundled model weights
  • src/inference/http.rs — HTTP client extracted from M4 into the same module for parity
  • models/detect.onnx — initial model covering surface_void, misalignment, rebar_exposure
  • Integration test: run built-in model on a sample depth map → assert detections are non-empty and class labels are valid
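One possible shape for the M6 dispatch, with both backends stubbed (names other than InferenceBackend, Detection, and inference.mode are assumptions; the real impls wrap ort and reqwest respectively):

```rust
// Sketch of the InferenceBackend trait: both modes return the same
// Vec<Detection>, so the rest of the pipeline is backend-agnostic.
#[allow(dead_code)]
struct Detection { x: u32, y: u32, w: u32, h: u32, class: String, confidence: f32 }

trait InferenceBackend {
    fn detect(&self, depth_map_png: &[u8], height_map_png: &[u8]) -> Vec<Detection>;
}

struct BuiltinBackend;                       // would run models/detect.onnx via ort
#[allow(dead_code)]
struct HttpBackend { base_url: String }      // would POST multipart to {base_url}/detect

impl InferenceBackend for BuiltinBackend {
    fn detect(&self, _d: &[u8], _h: &[u8]) -> Vec<Detection> {
        vec![] // stub: real impl runs the bundled ONNX model in-process
    }
}

impl InferenceBackend for HttpBackend {
    fn detect(&self, _d: &[u8], _h: &[u8]) -> Vec<Detection> {
        vec![] // stub: real impl calls the vendor endpoint via reqwest
    }
}

/// Dispatch on inference.mode from config.toml.
fn backend(mode: &str, base_url: Option<String>) -> Box<dyn InferenceBackend> {
    match mode {
        "http" => Box::new(HttpBackend { base_url: base_url.unwrap_or_default() }),
        _ => Box::new(BuiltinBackend),
    }
}

fn main() {
    let b = backend("builtin", None);
    println!("detections: {}", b.detect(&[], &[]).len());
}
```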

Dependency graph

trilink-core #30, #31, #32, #34  (foundation — must be done first)
    └── M2 (IFC loader + deviation engine)
         └── M3 (heatmap rendering)
              └── M4 (field PC pipeline CLI)
                   ├── M5 (cloud sync)
                   └── M6 (built-in inference model)