EdgeSentry-Inspect
Real-time digital twin audit platform for infrastructure inspection.
- Repository: github.com/edgesentry/edgesentry-inspect
- Documentation: edgesentry.github.io/edgesentry-inspect
What it does
EdgeSentry-Inspect detects construction and structural deviations at the field edge by fusing 3D point clouds with BIM design data — no cloud round-trip required during inspection.
3D sensor (LiDAR/ToF)
│ point cloud
▼
trilink-core::project ← 3D → 2D depth map / height map
│ depth map (image)
▼
Vision AI inference ← anomaly detection on local GPU
│ bounding boxes + class
▼
trilink-core::unproject ← 2D detections → 3D world coords
│ world-space anomaly points
▼
Scan-vs-BIM engine ← compare against IFC design geometry
│ deviation heatmap + report
▼
Field display (tablet / AR) ← inspector sees deviation on site
│
▼ (upload report only — not the raw point cloud)
Cloud audit store ← immutable evidence + digital twin update
Why edge-first
The field PC handles everything from scan to deviation report. Only the report (JSON + PNG heatmap) is uploaded. This makes a 30-minute on-site inspection feasible even without a reliable cloud connection.
Built on
- `trilink-core` — point cloud projection and spatial fusion (Rust)
- `edgesentry-rs` — cryptographically verifiable audit records (optional, for high-assurance contexts)
License
MIT OR Apache-2.0
Why EdgeSentry-Inspect?
This document explains the problem EdgeSentry-Inspect addresses, the pain points of current inspection practice, and how it differs from existing solutions on the market.
The problem
Infrastructure inspection — whether on a construction site or a ship hull — is one of the last major engineering workflows that still relies heavily on manual measurement: a person with a spirit level, a tape measure, and a clipboard.
For construction handover inspections, a three-person team typically spends 45–60 minutes per residential unit verifying wall flatness, floor levelness, ceiling height, and opening dimensions. For a 320-unit building, that is 6–8 weeks of inspection time. Disputes about marginal non-conformances are common, because measurements are taken by hand and are not spatially repeatable.
For maritime hull surveys, 30–40 surveyors work for 3–5 days to cover a single vessel. Results are recorded on paper sketches. There is no digital record that can be compared against the survey from three years ago. Every classification renewal starts from scratch.
Neither workflow can meet the demands of modern regulatory programmes, which increasingly require automated, auditable, and spatially precise inspection records.
Pain points
Speed: Manual measurement cannot meet the 30-minute inspection window required for automated regulatory compliance in construction. A 4-hour autonomous robot hull survey is impossible without a fully offline pipeline.
Precision: Human measurements with a tape measure and spirit level carry ±5–10 mm variability. For structural elements where the tolerance is 10 mm, that variability is the entire tolerance budget. Results are not repeatable between inspectors.
Cost: Large teams, long timelines, and repeated work to resolve disputes all accumulate cost. The majority of that cost is labour — not capital equipment — which makes it a recurring operational expense that does not scale down.
No spatial context: A manual report says “Column C4 is 12 mm out of tolerance.” It does not say which face, at what height, over what area. Without spatial context, the contractor cannot confirm the finding or plan a targeted remediation.
No historical comparison: For maritime assets, there is no automated way to compare the current inspection against the one from the previous survey cycle. Structural degradation trends are invisible until a failure occurs.
Connectivity constraints: Construction sites and vessels often have limited or no internet connectivity during the inspection window. Cloud-only platforms cannot return a verdict until the data has been uploaded and processed remotely — which may take hours or be impossible entirely.
Existing solutions and their gaps
| Category | Examples | Gap |
|---|---|---|
| General 3D scanning software | Faro Scene, Leica Cyclone | No AI anomaly detection; no BIM deviation comparison; results require offline post-processing on a workstation; no edge pipeline for real-time field use |
| Cloud-based point-cloud platforms | Matterport, Autodesk ReCap 360 | Upload required before any results; unusable with poor or zero connectivity; raw point cloud must leave the site |
| BIM-to-scan alignment tools | Trimble Connect, Autodesk Construction Cloud | Designed for desktop workflows, not edge deployment; require cloud round-trip; no integrated AI inference |
| General-purpose AI inspection | Various computer-vision SaaS platforms | Output is images with labels, not millimetre-level spatial deviation measurements; not integrated with BIM geometry |
| Traditional maritime survey | IACS paper-based procedures | No digital output; no comparison against prior surveys; not automated or scalable |
The common thread across all existing solutions is that they treat the scan, the AI analysis, and the BIM comparison as three separate steps performed in three separate tools, with a cloud upload between each. This is incompatible with the real-world constraints of field inspection: time pressure, connectivity limits, and the need for an on-site verdict.
How EdgeSentry-Inspect is different
Edge-first pipeline: All computation — 3D projection, AI inference, BIM deviation, heatmap, report — runs on the field PC or robot. There is no cloud round-trip before the verdict. The system works with zero internet connectivity.
Integrated flow: The pipeline is a single continuous flow: point cloud → AI inference → deviation against BIM design → heatmap → JSON report. There are no hand-off steps between disconnected tools.
Spatial precision: Every anomaly is located in millimetres relative to the approved BIM design geometry. The report includes world-space coordinates, deviation magnitude, and AI classification — not just a photograph.
Open and hardware-independent: Built on open components: trilink-core (Rust), standard IFC files, any AI inference endpoint that accepts images. No proprietary sensor, cloud, or license required.
Maritime-ready: The pipeline handles offline buffering natively. The deviation log accumulates on the robot during a mission with zero connectivity, then syncs after docking. The report payload (1–6 MB) is sized for VDES terrestrial bandwidth, the IMO-standardised maritime data link used in port approaches and coastal waters.
Optional cryptographic audit: For high-assurance contexts — regulatory submissions, legally binding structural sign-off, maritime class certification — the deviation report can be signed with Ed25519 and hash-chained using edgesentry-rs. This produces an audit record that can be verified independently of EdgeSentry-Inspect infrastructure, with cryptographic proof that the report was not altered after the fact.
EdgeSentry-Inspect — Requirements
For deep-dive step-by-step flows, case studies, and implementation order, see scenarios.md.
Use cases
UC-1: Construction site inspection
| Item | Detail |
|---|---|
| Trigger | Inspector arrives on site with a 3D sensor device |
| Constraint | Full scan and verdict for one unit within 30 minutes |
| Output | Pass / fail verdict per element; deviation heatmap; deviation report |
| Regulatory target | CONQUAS automated inspection criteria |
| Data flow | Scan → edge PC → verdict displayed on tablet; report uploaded to common data environment |
The 30-minute constraint makes a cloud round-trip infeasible. All computation from point cloud to deviation report must complete on the field PC.
UC-2: Maritime structure inspection
| Item | Detail |
|---|---|
| Trigger | Autonomous robot completes a hull or confined-space scan mission |
| Constraint | Intermittent or zero connectivity during the mission |
| Output | Structural-change flags (real-time, edge); full deviation report (post-mission, cloud) |
| Regulatory target | Maritime Digital Twin integration |
| Data flow | Robot scans → edge pipeline → flag emitted if anomaly exceeds threshold → report synced to central system after mission |
KPIs
| KPI | Target | Rationale |
|---|---|---|
| Inspection time reduction | ≥ 50% vs manual | Replaces manual measurement per element |
| Labour reduction | ≥ 80% vs manual | Automated deviation computation and reporting |
| Productivity improvement | ≥ 20% overall | Minimum threshold required by regulatory programmes |
| Deviation detection accuracy | ≤ 5 mm error | Structural tolerance for concrete and steel elements |
| Time to verdict (UC-1) | ≤ 30 min per unit | Hard constraint from CONQUAS automated inspection programme |
| Report upload latency (UC-2) | Best-effort; no hard limit during mission | Mission-critical flag delivered immediately; report synced after |
Non-functional requirements
| Requirement | Detail |
|---|---|
| Offline operation | Edge pipeline must work with zero internet connectivity |
| Immutability | Uploaded reports must be stored in append-only, tamper-evident storage (Object Lock WORM) |
| Auditability | Every deviation report must carry a timestamp, sensor serial, and IFC model reference |
| No raw point cloud upload | Only the deviation report is uploaded — reduces bandwidth and avoids data sovereignty issues |
| Hardware independence | Edge pipeline must run on a standard field PC with a consumer GPU; no proprietary cloud hardware |
Accuracy requirements by scenario
UC-1: Construction site inspection
| Parameter | Target | Driver |
|---|---|---|
| Deviation detection threshold | 10 mm | Structural concrete tolerance |
| Position accuracy of anomaly location | ≤ 10 mm | Consistent with deviation threshold |
| False positive rate | < 5% | Inspector must trust the system; too many false flags lead to rejection of the system |
| Coverage report | ≥ 80% of design surface scanned | Partial scans produce misleading compliant_pct if coverage is not reported |
UC-2: Maritime structure inspection
| Parameter | Target | Driver |
|---|---|---|
| Deviation detection threshold | 5 mm | Hull deformation tolerance; structural safety standard |
| Position accuracy of anomaly location | ≤ 5 mm | Consistent with deviation threshold |
| Mission duration without connectivity | Up to 4 hours | Confined hull inspection mission length |
| Sync latency after docking | < 5 minutes | Control centre needs updated state promptly after mission |
EdgeSentry-Inspect — Scenario Analysis
This document walks through the two deployment scenarios in depth: what happens step by step, where the hard problems are, concrete case studies, and the recommended order to implement them.
For the high-level requirements and KPIs behind these scenarios, see requirements.md. For the system architecture that supports both scenarios, see architecture.md.
Scenario 1: Construction Site Inspection (CONQUAS-style)
Context
A site inspector arrives at a partially completed building with a 3D sensor device (handheld or mounted on a small rover). They need to verify that concrete work, rebar placement, wall surfaces, and structural elements conform to the approved BIM design within the specified tolerance (typically 10 mm for concrete). The entire inspection of one unit must be completed and a pass/fail verdict produced before the inspector leaves — the constraint is 30 minutes.
The inspector cannot wait for a cloud round-trip. The field PC must handle everything from scan to verdict.
Step-by-step flow
Step 1 — Load the design
The inspector selects the IFC file for this unit on the field PC. edgesentry-inspect::ifc loads the reference geometry into a design point cloud. This happens once per session and is cached for the rest of the inspection.
Step 2 — Scan the space
The inspector walks the room with the 3D sensor. The sensor streams a continuous point cloud (PointCloud) to the field PC. trilink-core::PoseBuffer records the sensor pose at each capture timestamp.
Step 3 — Project to depth map
For each sweep, trilink-core::project_to_depth_map converts the 3D point cloud to a 2D depth map (DepthMap). Simultaneously, trilink-core::project_to_height_map produces a top-down HeightMap of the floor area. These two images are streamed to the AI inference service.
Step 4 — AI inference (local GPU)
The inference service runs on the field PC’s GPU. It receives the depth map and height map as images and returns a Vec<Detection>: anomaly bounding boxes with class labels (e.g. rebar_missing, surface_void, misalignment) and confidence scores.
Step 5 — Restore 3D coordinates
For each detection, trilink-core::unproject maps the bounding box centre back to a world-space Point3D using the depth at that pixel and the pose recorded at capture time.
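The back-projection here is the inverse pinhole model: scale the pixel ray by the measured depth, then apply the capture pose. The sketch below is illustrative only; it assumes a row-major 4×4 camera-to-world pose and the fx/fy/cx/cy intrinsics from config.toml, and trilink-core's actual signature may differ.

```rust
// Illustrative inverse pinhole model for Step 5 (not trilink-core's actual
// signature): pixel (u, v) + metric depth + intrinsics + a row-major 4x4
// camera-to-world pose from the PoseBuffer give a world-space point.
struct Intrinsics { fx: f32, fy: f32, cx: f32, cy: f32 }

fn unproject_pixel(
    u: f32, v: f32, depth_m: f32,
    k: &Intrinsics,
    cam_to_world: &[[f32; 4]; 4],
) -> [f32; 3] {
    // Camera-frame ray through the pixel, scaled by the measured depth.
    let cam = [
        (u - k.cx) / k.fx * depth_m,
        (v - k.cy) / k.fy * depth_m,
        depth_m,
        1.0,
    ];
    // Apply the pose recorded at capture time.
    let mut world = [0.0f32; 3];
    for row in 0..3 {
        world[row] = (0..4).map(|col| cam_to_world[row][col] * cam[col]).sum();
    }
    world
}
```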
Step 6 — Compute deviation
edgesentry-inspect::deviation runs a k-d tree nearest-neighbour search: for every scan point, it finds the nearest design point and records the distance in millimetres. Points beyond the configured threshold (default 10 mm) are flagged.
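A minimal sketch of this pass, assuming kiddo's v4-style API (the crate named in architecture.md). kiddo returns squared distances, hence the sqrt; Point3D is simplified to `[f32; 3]` in metres.

```rust
// Minimal Step 6 sketch; assumes kiddo's v4-style API.
use kiddo::{KdTree, SquaredEuclidean};

fn deviations_mm(design: &[[f32; 3]], scan: &[[f32; 3]]) -> Vec<f32> {
    // Build the k-d tree once over the design reference cloud (m points).
    let mut tree: KdTree<f32, 3> = KdTree::new();
    for (i, p) in design.iter().enumerate() {
        tree.add(p, i as u64);
    }
    // One nearest-neighbour query per scan point: O(n log m) overall.
    scan.iter()
        .map(|p| {
            let nn = tree.nearest_one::<SquaredEuclidean>(p);
            // distance is squared: sqrt gives metres, x1000 gives mm
            nn.distance.sqrt() * 1000.0
        })
        .collect()
}

/// Points beyond the configured threshold (default 10 mm) are flagged.
fn flagged_count(devs: &[f32], threshold_mm: f32) -> usize {
    devs.iter().filter(|&&d| d > threshold_mm).count()
}
```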
Step 7 — Generate heatmap and report
edgesentry-inspect::heatmap projects the flagged points back to 2D with colour-coded deviation (green / yellow / red). edgesentry-inspect::report writes the JSON deviation report containing compliant_pct, max_deviation_mm, mean_deviation_mm, and the anomaly list.
Step 8 — Inspector reviews on site
The heatmap and report appear on the inspector’s tablet. The inspector sees exactly which elements failed and by how much. The pass/fail verdict is shown before they leave the room. Total elapsed time from scan start to verdict: target under 30 minutes.
Step 9 — Upload audit evidence
edgesentry-inspect::sync uploads the report JSON and heatmap PNG to the cloud audit store (S3 Object Lock WORM). The raw point cloud is not uploaded — only the report. The upload happens in the background and does not block the on-site verdict.
What makes this scenario difficult
| Challenge | Detail |
|---|---|
| 30-minute hard constraint | Every processing step must run on the field PC without cloud round-trips. The projection and deviation steps must complete in seconds, not minutes. |
| IFC geometry fidelity | IFC files for large buildings can be complex. The ifc.rs loader must extract only the relevant geometry for the current unit without loading the entire building model. |
| Partial scans | An inspector may not scan every surface perfectly. The deviation engine must report coverage (what percentage of the design surface was actually scanned) alongside deviation. |
| Occlusions | Scaffolding, equipment, and workers occlude the scene. Points behind foreground objects must not be mis-attributed to the design surface behind them. The Z-buffer in project_to_depth_map handles this correctly. |
| Alignment (registration) | The scan point cloud and the IFC design cloud live in different coordinate systems until aligned. The scanner’s SLAM map origin must be registered to the IFC global coordinate system before deviation can be computed. If done manually — by an operator identifying matching landmarks in the scan and the IFC model — the result depends on operator skill and introduces inconsistency between inspectors. The practical solution is fiducial markers (e.g. ArUco or AprilTag targets) placed at IFC-known coordinates before the inspection begins. The SLAM system detects the markers automatically and computes the registration without operator judgement. Manual alignment with 3 control points typically takes 5–15 minutes per unit; fiducial-assisted alignment reduces this to under 1 minute and removes operator variability. |
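For reference, the registration itself is a standard rigid-body fit: given three or more matched points (fiducial centres detected in the SLAM frame paired with their known IFC coordinates), the Kabsch algorithm recovers the rotation and translation. The sketch below uses the nalgebra crate and is illustrative, not the project's implementation.

```rust
// Hedged Kabsch sketch via `nalgebra`. `scan` and `design` are matched
// fiducial centres (same marker at the same index in both slices).
use nalgebra::{Matrix3, Point3, Vector3};

fn kabsch(scan: &[Point3<f64>], design: &[Point3<f64>]) -> (Matrix3<f64>, Vector3<f64>) {
    assert!(scan.len() == design.len() && scan.len() >= 3);
    let n = scan.len() as f64;
    let c_s = scan.iter().map(|p| p.coords).sum::<Vector3<f64>>() / n;
    let c_d = design.iter().map(|p| p.coords).sum::<Vector3<f64>>() / n;
    // Cross-covariance of the centred point sets.
    let mut h = Matrix3::<f64>::zeros();
    for (s, d) in scan.iter().zip(design) {
        h += (s.coords - c_s) * (d.coords - c_d).transpose();
    }
    // R = V * U^T from the SVD of H, with a reflection guard so det(R) = +1.
    let svd = h.svd(true, true);
    let (u, v_t) = (svd.u.unwrap(), svd.v_t.unwrap());
    let mut r = v_t.transpose() * u.transpose();
    if r.determinant() < 0.0 {
        let mut v = v_t.transpose();
        let flipped = -v.column(2).clone_owned();
        v.set_column(2, &flipped);
        r = v * u.transpose();
    }
    (r, c_d - r * c_s) // world = R * scan + t
}
```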
For detailed accuracy requirements for this scenario, see requirements.md.
Scenario 2: Maritime Structure Inspection
Context
An autonomous robot (wheeled, crawling, or swimming) conducts a routine inspection of a ship hull, dock structure, or confined-space area. The robot operates independently for the duration of a mission (30 minutes to several hours). Connectivity during the mission ranges from poor to zero — the robot cannot rely on a cloud connection for any decision that needs to happen in real time. Structural changes (new corrosion, deformation, missing fasteners) must be detected and flagged during the mission so the robot can revisit a flagged area or alert the control centre immediately.
The design reference in this scenario may be a previous scan (change detection) rather than an IFC file (deviation from design). Both modes are supported.
Step-by-step flow
Step 1 — Load the reference model
Before mission start, either (a) load an IFC hull design file, or (b) load a baseline point cloud from a previous inspection as the reference for change detection.
Step 2 — Robot begins mission
The robot navigates autonomously along a pre-planned inspection route. The 3D sensor streams a point cloud continuously. trilink-core::PoseBuffer records the sensor pose at each sweep timestamp.
Step 3 — Project and infer (continuous loop, on-board)
For each sweep, trilink-core::project_to_depth_map, AI inference, and trilink-core::unproject run in sequence on the robot’s on-board processor. The target latency per sweep is under 2 seconds, so the robot can slow or stop near anomalies in real time.
Step 4 — Deviation / change detection
Scan points are compared against the reference model using edgesentry-inspect::deviation. The maritime threshold is 5 mm — hull deformation tolerance is tighter than construction concrete.
Step 5a — No anomaly: continue mission
The robot continues on the planned route. Scan data accumulates in the local deviation log.
Step 5b — Anomaly exceeds 2× threshold: immediate flag
edgesentry-inspect::sync emits a structural-change flag to the local message queue, or directly to the control centre via radio if connectivity is available at that moment. The robot can optionally slow, stop, or re-scan the flagged area.
Step 6 — Mission complete, robot docks
The robot returns to its docking station on the vessel or at the facility and connects to the vessel’s local network. edgesentry-inspect::sync uploads the full deviation report and heatmap PNG to the cloud digital twin store. The digital twin is updated with the new as-inspected geometry.
The outbound link depends on the operational context when the robot docks:
| Context | Link | Bandwidth | Feasibility for report (~1–6 MB) |
|---|---|---|---|
| Drydock or berth | Shore-side Ethernet / Wi-Fi | 10–1000 Mbps | Instant |
| At sea, within ~35 nm of shore | VDES terrestrial (VHF, ITU-R M.2092) | up to 307 kbps | ~30 sec – 3 min |
| At sea, beyond VHF range | S-VDES (satellite) or VSAT / Starlink Maritime | 100 kbps – 100 Mbps | seconds to minutes |
| Fallback | AIS messaging (legacy, pre-VDES) | ~10 kbps | marginal; JSON only, no PNG |
VDES is the IMO-standardised next-generation maritime data exchange system and the natural fit for ship-to-shore report delivery within port approaches and coastal waters. It is also the communications layer underpinning national port authority digital twin strategies, making it the recommended option for integration with those platforms. The raw point cloud is not uploaded — only the report. The 1–6 MB payload is well within VDES bandwidth even at the lower end of coastal range.
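As a sanity check on the feasibility column, transfer time is simply payload bits divided by link rate; a trivial helper (illustrative, not part of the codebase):

```rust
/// Transfer time in seconds for `payload_mb` megabytes over a link of
/// `kbps` kilobits per second (1 MB = 8,000,000 bits here).
fn transfer_secs(payload_mb: f64, kbps: f64) -> f64 {
    payload_mb * 8_000_000.0 / (kbps * 1_000.0)
}

// At the VDES terrestrial peak rate (307 kbps):
//   transfer_secs(1.0, 307.0) ~ 26 s
//   transfer_secs(6.0, 307.0) ~ 156 s (~2.6 min), matching the ~30 s - 3 min row
```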
Step 7 — Control centre review
Engineers review the uploaded report. Flagged structural changes are prioritised for maintenance. The updated digital twin shows the current state of the asset.
What makes this scenario difficult
| Challenge | Detail |
|---|---|
| Zero connectivity during mission | No cloud calls are possible in confined spaces or underwater. Every decision must be made locally. The robot must buffer the full deviation log and sync it after docking. |
| AI model must run on robot hardware | The inference service runs on the robot’s on-board SoC (NVIDIA Jetson or similar). The model must be quantised to fit memory and compute constraints without degrading accuracy below the 5 mm detection threshold. This quantisation is an ongoing operational burden — every model update requires re-quantisation and re-validation. |
| Change detection vs. design deviation | For vessels where the as-built state already deviates from the original design (common in older ships), using the IFC design file as the reference produces false positives. Instead, a previously accepted baseline scan is used as the reference. The system must support both modes. |
| Pose accuracy in GPS-denied environments | SLAM accuracy degrades in featureless confined spaces (smooth hull plates, flooded bilge tanks). Pose drift accumulates over a long mission and degrades the 5 mm position accuracy requirement. Loop closure or fiducial markers must be used at regular intervals. |
| Variable lighting and surface conditions | Corrosion, marine growth, and water accumulation on hull surfaces affect the point cloud density and AI inference quality differently than a clean construction site. |
For detailed accuracy requirements for this scenario, see requirements.md.
Deployment Comparison
| Aspect | Scenario 1 (Construction) | Scenario 2 (Maritime) |
|---|---|---|
| Connectivity | Available (field PC on Wi-Fi or LTE) | Not available during mission |
| Reference model | IFC design file | IFC or previous scan (change detection) |
| Deviation threshold | 10 mm | 5 mm |
| Time constraint | 30 min (hard) | Per-mission (hours); verdict not needed on site |
| Inference hardware | Field PC GPU (no quantisation) | Robot SoC (quantisation required) |
| On-site feedback | Inspector tablet / AR headset | Robot slows / stops at anomaly; flag to control centre |
| Cloud sync trigger | After verdict, in background | After docking |
| Model update cadence | Update inference endpoint only | Re-quantise + re-validate + redeploy to robot |
| Alignment complexity | SLAM → IFC registration; fiducial markers strongly recommended to remove operator variability | Same, plus SLAM drift management over long missions |
| EdgeSentry-Inspect code change | None | None — inference.base_url points to localhost; threshold configured |
| Overall difficulty | Medium | High |
The EdgeSentry-Inspect codebase is identical for both scenarios. The difficulty difference is almost entirely in the inference hardware layer (quantisation for Scenario 2) and the connectivity layer (offline buffering for Scenario 2).
Case Studies
Case Study A — High-rise apartment handover inspection (Scenario 1)
Operator: A main contractor delivering a 40-storey residential tower.
Environment: Indoor apartment units, typically 60–90 m² each. 320 units total. Elevator access. 240V power available throughout.
Problem: Before handover to the developer, every unit must pass a structural inspection: wall flatness, floor levelness, ceiling height, opening dimensions. Currently, a three-person team spends 45–60 minutes per unit using spirit levels and tape measures. At 320 units the total inspection takes 6–8 weeks. Disputes about marginal non-conformances are common.
Deployment:
- Inspector brings a field PC and a handheld 3D sensor into each unit.
- IFC model for the floor plate is pre-loaded on the field PC (one file covers all units on that level with offsets).
- Inspector scans the unit in a single walk-through (approximately 15 minutes).
- EdgeSentry-Inspect produces a deviation report showing all elements outside the 10 mm tolerance within 5 minutes of scan completion.
- Total time per unit: ~20 minutes including setup.
Outcome:
- Inspection time reduced from 45–60 minutes to 20 minutes per unit (55% reduction).
- Non-conformances are documented with millimetre-level precision and photographic evidence — disputes are resolved by the data, not by argument.
- Report is uploaded to the project’s common data environment and linked to the IFC model automatically.
Why this scenario is straightforward: Stable indoor environment, good sensor range, no connectivity constraint. The field PC has a standard GPU. No model quantisation required.
Case Study B — Public infrastructure: MRT station concourse (Scenario 1)
Operator: A civil engineering contractor completing a new metro station.
Environment: Large open concourse, ~2,000 m² floor area, ceiling height 8–12 m. Construction is ongoing in adjacent areas. Equipment and workers present during inspection windows.
Problem: Inspection windows are narrow (2–4 hours at night) due to ongoing construction. A full flatness and alignment survey of all structural elements must be completed within the window. Traditional total-station survey takes 6–8 hours for a space this size. The station cannot open without a signed-off deviation report for each structural element.
Deployment:
- A rover-mounted 3D sensor is driven through the concourse by a single operator.
- The 8–12 m ceiling height requires a sensor with longer range than a handheld device (trade-off: lower density at distance).
- EdgeSentry-Inspect adjusts the deviation threshold dynamically by element type: 5 mm for column faces, 15 mm for wall panels at ceiling height.
- The full concourse scan is completed in 90 minutes; the deviation report is ready 10 minutes after the scan ends.
Key complexity vs. Case Study A:
- Dynamic threshold by element type (not a single global threshold).
- Large scan area requires stitching multiple sweeps (the SLAM system handles this; EdgeSentry-Inspect receives a unified point cloud).
- Worker and equipment occlusions are higher — coverage reporting is critical to flag under-inspected areas.
Case Study C — Drydock hull inspection (Scenario 2)
Operator: A ship repair yard conducting a class renewal survey for a 180-metre bulk carrier.
Environment: Ship in drydock. Hull is accessible from ground level and via scaffolding. Some confined ballast tank spaces require a crawling robot. No cellular coverage inside the tanks.
Problem: A class renewal survey requires documenting the thickness and surface condition of the entire hull. Traditional ultrasonic thickness gauging and visual inspection requires 30–40 surveyors working for 3–5 days. The yard wants to reduce survey time to 1 day and produce a digital record that can be compared against the vessel’s previous survey.
Deployment (external hull, drydock):
- A wheeled robot with a 3D sensor and on-board GPU crawls the external hull surface.
- Reference model: previous survey scan (3 years ago), not the original IFC (the vessel has been modified since build).
- EdgeSentry-Inspect runs in change-detection mode: new scan vs. previous scan.
- Deviation > 5 mm (interpreted as surface wastage or deformation) triggers an immediate flag to the yard control room via radio (connectivity available for external hull in drydock).
- Robot completes the external hull in 8 hours. Report uploaded at mission end.
Deployment (confined ballast tanks):
- A smaller crawling robot enters the tank through the access manhole.
- No connectivity inside the tank.
- Deviation flags accumulate in the local buffer.
- When the robot exits the tank, deviation log is synced automatically.
- Control room reviews all tank reports after the tank inspection session.
Outcome:
- Survey time reduced from 3–5 days to approximately 28 hours (hull + tanks).
- Digital deviation map is directly comparable against the previous survey — structural change over the 3-year period is immediately visible as a colour-coded overlay.
- Classification society accepts the digital report as primary evidence (paper sketches are no longer required).
Key complexity vs. Case Study A:
- Two sub-scenarios in one deployment: connected external hull + disconnected confined tanks.
- Quantisation required for the crawling robot SoC.
- Change detection mode instead of IFC deviation mode.
- Sync-after-docking logic must handle partial reports gracefully (robot may need to exit and re-enter a tank multiple times).
Recommended Implementation Order
Implement Scenario 1 (construction, connected) first.
Rationale
- The 30-minute constraint validates the entire edge pipeline. If the full cycle — project → infer → unproject → deviation → report — can run within 30 minutes on a field PC, Scenario 2 (which has no hard on-site time constraint) is straightforwardly achievable with the same code.
- No quantisation dependency. Scenario 1 runs on a standard field PC GPU. There is no dependency on a robot hardware team or model quantisation toolchain. The pipeline can be built, tested, and demonstrated without hardware partners.
- IFC deviation mode is the foundation for change-detection mode. Scenario 2’s change-detection mode (new scan vs. previous scan) reuses the entire deviation engine — the “design reference cloud” is simply replaced by a previous scan cloud. Implementing IFC deviation first means Scenario 2 requires no structural code change.
- A working Scenario 1 deployment is the proof of value needed to justify Scenario 2 investment. Convincing a ship repair yard or port authority to trial an autonomous robot requires evidence that the AI + deviation pipeline produces reliable results. A construction site handover inspection (lower operational complexity, easier access, controlled environment) is the right first deployment to generate that evidence.
- Scenario 2 adds dependencies outside EdgeSentry-Inspect’s control. Robot SoC quantisation, SLAM accuracy in GPS-denied environments, and mission planning are all provided by the robot platform partner. Those integrations are easier to negotiate and execute after a live Scenario 1 deployment has demonstrated the pipeline’s accuracy.
Suggested phasing
| Phase | Scenario | Target use case | Prerequisite |
|---|---|---|---|
| Phase 1 | Construction site inspection | Apartment handover, civil infrastructure | trilink-core #30–#34 merged; M2–M4 complete |
| Phase 2 | Maritime — external (connected) | Drydock hull survey, dock structure | Phase 1 reference deployment; at least one confirmed customer |
| Phase 3 | Maritime — confined (offline robot) | Ballast tanks, engine rooms, underwater | Phase 2 complete; robot partner confirms quantisation and offline sync |
Phase 3 requires no changes to EdgeSentry-Inspect code. The investment is entirely in the robot platform integration layer (quantisation, fleet management, sync-after-docking retry logic) — work that is justified by Phase 2 results.
For a breakdown of the factors that determine measurement accuracy in the field, see architecture.md.
EdgeSentry-Inspect — Architecture
Edge-cloud split
┌──────────────────────────────────────────────────────────┐
│ FIELD PC (Edge) │
│ │
│ 3D sensor (LiDAR / ToF) │
│ │ point cloud (PointCloud) │
│ ▼ │
│ trilink-core::project_to_depth_map │
│ trilink-core::project_to_height_map │
│ │ DepthMap HeightMap │
│ ▼ │
│ AI inference (built-in model or HTTP endpoint) │
│ │ Vec<Detection> (BBox2D + class + confidence) │
│ ▼ │
│ trilink-core::unproject │
│ │ world-space Point3D per detection │
│ ▼ │
│ edgesentry-inspect::ifc — IFC geometry │
│ edgesentry-inspect::deviation — deviation (mm) │
│ edgesentry-inspect::heatmap — heatmap PNG │
│ edgesentry-inspect::report — JSON report │
│ │ │
│ ├── displayed on tablet / AR headset immediately │
└──────┬───────────────────────────────────────────────────┘
│ report JSON + heatmap PNG (not raw point cloud)
▼
┌──────────────────────────────────────────────────────────┐
│ CLOUD (Audit Store / Digital Twin) │
│ │
│ edgesentry-inspect::sync │
│ │ S3-compatible upload (Object Lock WORM) │
│ │ structural-change flag → message queue │
│ ▼ │
│ Audit report store — immutable evidence │
│ Digital twin update — as-built IFC delta │
│ Central dashboard — fleet-wide deviation trends │
└──────────────────────────────────────────────────────────┘
What runs on the field PC
| Step | Why edge |
|---|---|
| 3D → 2D projection | Point clouds are gigabytes; projecting locally avoids upload before verdict |
| AI inference | Sub-second latency; local GPU; works offline |
| 2D → 3D unprojection | Needed for on-site AR feedback |
| IFC load + deviation computation | Inspector must see deviation before leaving the site |
| Heatmap + report generation | Report is the upload artefact; must be ready on site |
What goes to the cloud
| Data | Why cloud |
|---|---|
| Deviation report (JSON) | Immutable audit evidence; regulatory archive |
| Heatmap (PNG) | Human-readable evidence attached to the report |
| Structural-change flag | Real-time alert to central monitoring (UC-2) |
| As-built IFC delta | Persistent update to the digital twin asset model |
Component design
edgesentry-inspect::ifc
- Input: IFC file path (`.ifc`)
- Output: `Vec<Point3D>` — design reference point cloud sampled from wall/slab/column geometry
- Implementation: `ifcopenshell` via Python FFI (pyo3) or a native Rust IFC reader
- The reference cloud is loaded once per inspection session and cached in memory
edgesentry-inspect::deviation
- Input: scan `Vec<Point3D>` (from `trilink-core::unproject`) + design `Vec<Point3D>` (from `ifc`)
- Output: per-scan-point deviation `f32` in metres
- Algorithm: k-d tree nearest-neighbour search (`kiddo` crate); O(n log m) per scan
- Threshold: configurable (default 10 mm for construction, 5 mm for maritime hull)
edgesentry-inspect::heatmap
- Input: scan points + per-point deviation values
- Output: PNG image — deviation mapped to colour (green ≤ threshold, yellow 2×, red 4×+)
- Reuses `trilink-core::project_to_depth_map` to position coloured points in 2D
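To make the ramp concrete, here is an illustrative colour mapping and PNG write using the `image` crate; the exact band boundaries (in particular what happens between 2× and 4×) are assumptions, since only the green/yellow/red anchors above are specified.

```rust
// Illustrative colour ramp + PNG write with the `image` crate. Band edges
// between 2x and 4x the threshold are assumptions, not heatmap.rs behaviour.
use image::{Rgb, RgbImage};

fn deviation_colour(dev_mm: f32, threshold_mm: f32) -> Rgb<u8> {
    if dev_mm <= threshold_mm {
        Rgb([0, 160, 0])   // green: compliant
    } else if dev_mm <= 2.0 * threshold_mm {
        Rgb([230, 200, 0]) // yellow: up to 2x threshold
    } else {
        Rgb([200, 0, 0])   // red: beyond 2x (4x+ in the spec's wording)
    }
}

/// `pixels` are (x, y, deviation_mm) triples already positioned in 2D
/// by the depth-map projection.
fn write_heatmap(pixels: &[(u32, u32, f32)], w: u32, h: u32, threshold_mm: f32)
    -> image::ImageResult<()> {
    let mut img = RgbImage::new(w, h); // black where nothing projects
    for &(x, y, dev_mm) in pixels {
        img.put_pixel(x, y, deviation_colour(dev_mm, threshold_mm));
    }
    img.save("heatmap.png")
}
```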
edgesentry-inspect::report
JSON schema:
{
"compliant_pct": 94.2,
"max_deviation_mm": 23.1,
"mean_deviation_mm": 3.8,
"point_count": 142850,
"threshold_mm": 10.0
}
AI detection locations are written to points.json alongside the report:
{
"scan_points": [
{ "x": 12.3, "y": 4.1, "z": 2.05, "deviation_mm": 23.1 }
],
"detections": [
{ "x": 12.3, "y": 4.1, "z": 2.05 }
]
}
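For orientation, both documents map naturally onto serde structs; the field names follow the schemas above, while the struct names are illustrative.

```rust
// Illustrative serde mapping for report.json and points.json; struct
// names are assumptions, field names mirror the schemas above.
use serde::{Deserialize, Serialize};

#[derive(Serialize, Deserialize)]
struct DeviationReport {
    compliant_pct: f64,
    max_deviation_mm: f64,
    mean_deviation_mm: f64,
    point_count: u64,
    threshold_mm: f64,
}

#[derive(Serialize, Deserialize)]
struct ScanPoint { x: f64, y: f64, z: f64, deviation_mm: f64 }

#[derive(Serialize, Deserialize)]
struct Detection3D { x: f64, y: f64, z: f64 }

#[derive(Serialize, Deserialize)]
struct PointsFile {
    scan_points: Vec<ScanPoint>,
    detections: Vec<Detection3D>,
}
```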
edgesentry-inspect::sync
- Uploads report JSON and heatmap PNG to an S3-compatible audit store (Object Lock WORM)
- Emits a structural-change flag to a message queue (SQS or MQTT) when any anomaly exceeds 2× the configured threshold
- Reuses the S3-compatible interface pattern from `edgesentry-rs`
AI inference modes
EdgeSentry-Inspect supports two inference backends, selected by inference.mode in config.toml. Both produce the same Vec<Detection> output consumed by the rest of the pipeline.
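A sketch of that shared contract, using the `InferenceBackend` trait name from the M6 deliverables in roadmap.md; the exact signature is an assumption.

```rust
// Sketch of the shared backend contract; signature is an assumption.
use std::error::Error;

pub struct Detection {
    pub x: u32,
    pub y: u32,
    pub w: u32,
    pub h: u32,
    pub class: String,
    pub confidence: f32,
}

pub trait InferenceBackend {
    /// Both maps arrive as encoded PNG bytes; both backends return the same
    /// detection type, so the rest of the pipeline is backend-agnostic.
    fn detect(&self, depth_png: &[u8], height_png: &[u8])
        -> Result<Vec<Detection>, Box<dyn Error>>;
}

/// Stub built-in backend: the real one runs the bundled ONNX model in-process.
pub struct Builtin;

impl InferenceBackend for Builtin {
    fn detect(&self, _depth_png: &[u8], _height_png: &[u8])
        -> Result<Vec<Detection>, Box<dyn Error>> {
        Ok(Vec::new()) // placeholder; ort-based inference goes here
    }
}
```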
Built-in model (inference.mode = "builtin")
A lightweight defect-detection model bundled with EdgeSentry-Inspect. Runs in-process via ONNX Runtime — no external server or network access required.
- Input: `DepthMap` + `HeightMap` images produced by `trilink-core`
- Output: `Vec<Detection>` — bounding boxes with class labels and confidence scores
- Initial class coverage: `surface_void`, `misalignment`, `rebar_exposure`
- Hardware: runs on a standard field PC CPU; no dedicated GPU required for basic use

Use `builtin` for getting started quickly, offline-only deployments, or when no vendor model is available.
External HTTP endpoint (inference.mode = "http")
The inference client POSTs the depth map and height map to inference.base_url and receives a detection list. The endpoint can be:
- A vendor’s model server running locally on the field PC or robot (same host, no internet needed)
- A specialised cloud inference API (Scenario 1 / connected deployments only)
This mode is the integration point for vendor collaboration. Vendors implement the server side with their own model; EdgeSentry-Inspect calls it with a fixed schema. The operator sets inference.base_url in config — no code change required.
Interface contract:
POST /detect
Content-Type: multipart/form-data
depth_map: <PNG bytes>
height_map: <PNG bytes>
200 OK
[{"x":120,"y":45,"w":30,"h":20,"class":"surface_void","confidence":0.87}, ...]
| Mode | When to use |
|---|---|
| `builtin` | No vendor model; offline-only; getting started |
| `http` — local vendor server | Partner model on the same device; no internet needed |
| `http` — cloud API | Scenario 1 (connected); vendor hosts the model remotely |
Optional: cryptographically verifiable audit records
If the inspection context requires mathematically verifiable, tamper-evident audit records — for example, regulatory submissions where a third party must independently verify that a report was not altered after the fact — the deviation report can be signed and hash-chained using edgesentry-rs.
edgesentry-rs provides:
| Capability | How it applies to EdgeSentry-Inspect |
|---|---|
| Ed25519 payload signing | The field PC signs each deviation report with a device key stored in a hardware secure element — proof that the report came from a specific sensor device |
| BLAKE3 hash chaining | Each report carries prev_record_hash, forming a chain — a missing or reordered report is immediately detectable |
| Sequence monotonicity | Report sequence numbers are strictly increasing — replay and deletion are cryptographically detectable |
| `IngestService::ingest()` | Cloud-side gate re-verifies signature and hash chain on upload — rejects tampered or out-of-sequence reports |
This layer is opt-in. For standard construction inspections, the S3 Object Lock WORM store (edgesentry-inspect::sync) is sufficient. For high-assurance contexts (maritime hull certification, legally binding structural sign-off), wrapping the report in an edgesentry-rs AuditRecord before upload provides a cryptographic audit trail that can be verified independently of EdgeSentry-Inspect infrastructure.
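To illustrate the chain-and-sign flow (not edgesentry-rs's actual API; the `blake3` and `ed25519-dalek` v2 crates stand in here):

```rust
// Illustrative hash-chain + signing sketch; edgesentry-rs's real
// AuditRecord API may differ.
use ed25519_dalek::{Signature, Signer, SigningKey};

fn chain_and_sign(
    prev_record_hash: &[u8; 32],
    report_json: &[u8],
    device_key: &SigningKey,
) -> ([u8; 32], Signature) {
    // record_hash = BLAKE3(prev_hash || payload): deleting or reordering a
    // report breaks the chain for every later record.
    let mut hasher = blake3::Hasher::new();
    hasher.update(prev_record_hash);
    hasher.update(report_json);
    let record_hash = *hasher.finalize().as_bytes();
    // Ed25519 signature binds the record to this specific device key.
    let signature = device_key.sign(&record_hash);
    (record_hash, signature)
}
```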
Accuracy factors
Target accuracy is 10 mm for construction (UC-1) and 5 mm for maritime (UC-2). The following table shows the main factors that determine measurement accuracy in the field and how each is mitigated.
| Factor | Impact | Mitigation |
|---|---|---|
| 3D sensor accuracy | Primary driver | Use a sensor rated for the target accuracy at the required range |
| SLAM pose accuracy | Propagates into deviation computation | Loop closure at regular intervals; fiducial markers in featureless spaces |
| IFC alignment error | Shifts the entire deviation map | Use ≥ 3 known control points for IFC-to-SLAM registration; verify residuals < 2 mm. For consistent results regardless of operator, place fiducial markers (ArUco / AprilTag) at IFC-known coordinates before the inspection — the SLAM system detects them automatically and removes manual judgement from the registration step. |
| Projection round-trip error | Verified < 1 mm by trilink-core round-trip test (#34) | Arithmetic error is not a significant contributor |
| k-d tree resolution | Nearest-neighbour search accuracy | Design cloud sampled at ≤ 2 mm pitch (finer than the detection threshold) |
Technology summary
| Component | Language | Key dependencies |
|---|---|---|
| `edgesentry-inspect` (deviation engine) | Rust | `trilink-core`, `kiddo` (k-d tree), `image` (PNG), `pyo3` (IFC via Python) |
| `edgesentry-inspect` (CLI) | Rust | `clap`, `tokio`, `reqwest` (inference client), `serde_json` |
| `edgesentry-inspect` (cloud sync) | Rust | S3-compatible HTTP client (reuse `edgesentry-rs` interface) |
| IFC geometry | Python (via pyo3) | ifcopenshell |
| AI inference — built-in | Rust + ONNX Runtime | Bundled lightweight defect detection model (ort crate) |
| AI inference — external | HTTP (reqwest) | Vendor endpoint: POST image → Vec<BBox2D>; local or cloud |
| Cloud audit store | AWS | S3 + Object Lock (WORM), SQS |
Open datasets for PoC
| Domain | Dataset | Purpose |
|---|---|---|
| Construction | BIMNet (public IFC models) | Reference design geometry for scan-vs-BIM |
| Construction | ETH3D / S3DIS point clouds | Sample scan clouds for deviation testing |
| Maritime | MBES survey data | Hull scan point clouds |
| General | NYU Depth V2 | Depth map validation for projection correctness |
EdgeSentry-Inspect — Roadmap
Release tracks
| Track | Scope | Audience |
|---|---|---|
| OSS (this repo) | trilink-core (3D/2D projection, deviation engine), edgesentry-audit, edgesentry-inspect (CLI) | Developers, researchers |
| Commercial (closed commercial repository) | Inspect App (Tauri/GUI), compliance reports, partner sensor plugins | Site supervisors, inspectors, regulators |
All milestones in this document ship as open-source. Commercial milestones are tracked in the commercial compliance layer.
Ecosystem Strategy
Following the DuckDB model — keep algorithms, tools, and specifications as open as possible so that adoption spreads through the ecosystem rather than through lock-in.
Why maximise the open core
Publishing the deviation engine, projection algorithms, and CLI in full allows researchers, field engineers, regulators, and partner companies to verify, integrate, and extend independently. Transparency in the algorithms is itself the source of trust — it establishes Inspect as public infrastructure for construction inspection that no single vendor controls.
Co-creating standards with regulators
Regulators — BCA, CSA, MLIT — are partners in building construction quality standards, not gatekeepers to route around. Implementing CLS / JC-STAR / CONQUAS compliance up front is a commitment to taking those standards seriously, and an invitation for independent third-party validation of the OSS core’s quality. That trust relationship accelerates international ecosystem adoption.
Foundation (trilink-core repo)
The following are prerequisites for all Inspect milestones.
They are tracked and implemented in the trilink-core repository.
| Issue | Deliverable | Status |
|---|---|---|
| #30 | PointCloud, DepthMap, HeightMap types | Done |
| #31 | project_to_depth_map (3D → depth map) | Done |
| #32 | project_to_height_map (3D → height map) | Done |
| #33 | docs/math.md forward projection sections | Done |
| #34 | Project → unproject round-trip tests | Done |
| #39 | HeightMap dimension naming (cols/rows → width/height) | Done |
| #40 | Coordinate precision decision (Point3D stays f32) | Done |
| #38 | Adopt glam for Transform4x4 / Point3D (SIMD, inversion) | Done |
All foundation items are merged. M2–M4 are complete; M5 and M6 are the next open milestones.
M2 — IFC Loader and Deviation Engine [OSS] ✅ Implemented
Goal: Given a scanned point cloud and an IFC design file, compute a per-point deviation in millimetres.
Deliverables:
- `Cargo.toml` — workspace root; member: `crates/edgesentry-inspect`
- `src/ifc.rs` — load IFC geometry as `Vec<Point3D>` (design reference cloud)
- `src/deviation.rs` — k-d tree nearest-neighbour deviation; configurable threshold
- `src/report.rs` — JSON report serialisation (schema in architecture.md)
- Integration test: load sample IFC fixture → compute deviation against known scan cloud → assert `compliant_pct`, `max_deviation_mm`, `mean_deviation_mm`
M3 — Heatmap Rendering [OSS] ✅ Implemented
Goal: Produce a PNG heatmap that maps per-point deviation to colour, positioned in 2D using the depth map projection.
Deliverables:
- `src/heatmap.rs` — deviation → RGB colour (green ≤ threshold, yellow 2×, red 4×+) → PNG via `image` crate
- Reuses `trilink-core::project_to_depth_map` to position each coloured point in 2D
- Integration test: known deviation values → verify expected pixel colours at expected positions in output PNG
M4 — Field PC Pipeline (CLI) [OSS] ✅ Implemented
Goal: End-to-end pipeline on the field PC from point cloud to deviation report, runnable as a single CLI command.
Deliverables:
- `src/main.rs` — CLI: `edgesentry-inspect scan --config config.toml`
- Wires: point cloud ingress (`trilink-core::FrameSource`) → `project_to_depth_map` → AI inference client → `unproject` → deviation → heatmap → report
- Config: IFC file path, `inference.mode` (`builtin` | `http`), inference endpoint URL (if `http`), deviation threshold, output directory
- End-to-end test with `MockSource` + mock inference server: report produced, all fields correct, heatmap PNG written
M5 — Cloud Sync [OSS]
Goal: Upload the deviation report and heatmap to an S3-compatible store; emit structural-change flags.
Deliverables:
- `src/sync.rs` — S3-compatible upload (standard PUT); structural-change flag → SQS or MQTT when anomaly exceeds 2× threshold
- Integration test: mock S3 + mock SQS → assert report uploaded, flag published for above-threshold anomaly, no flag for below-threshold
M6 — Built-in Inference Model [OSS]
Goal: Ship a lightweight ONNX defect-detection model with Inspect so that inference.mode = "builtin" works out of the box without an external server.
Deliverables:
- `src/inference/mod.rs` — `InferenceBackend` trait; dispatches to built-in or HTTP based on `inference.mode`
- `src/inference/builtin.rs` — ONNX Runtime runner (`ort` crate); loads bundled model weights
- `src/inference/http.rs` — HTTP client extracted from M4 into the same module for parity
- `models/detect.onnx` — initial model covering `surface_void`, `misalignment`, `rebar_exposure`
- Integration test: run built-in model on a sample depth map → assert detections are non-empty and class labels are valid
Known Limitations
The following constraints are inherent to the current design. They are documented in full in trilink-core/docs/limitations.md.
| # | Limitation | Affected milestone | Workaround |
|---|---|---|---|
| L1 | Single-viewpoint occlusion — Z-buffer projection discards surfaces not visible from the capture pose | M3, M4 | Fuse multiple poses before projection; monitor NaN fraction in depth map |
| L2 | Height map is protrusion-only — maximum-Z aggregation misses depressions (spalling, section loss) | M3 | Use deviation engine (M2) for depression detection; height map is supplementary |
| L3 | Curved-surface back-projection bias — unproject assumes a flat plane; ~11.7% relative error on cylinders/arches vs ~2.5% on flat surfaces | M4, M6 | Flag detections on high-curvature regions; apply expanded tolerances |
| L4 | f32 precision outside local frame — coordinates must be in a local tangent-plane frame; UTM/WGS-84 input silently degrades to ~12 mm steps | Foundation | Subtract site origin before constructing Point3D; see trilink-core/docs/math.md |
| L5 | Depth-only inference — built-in ONNX model uses depth map only; no RGB channel; ~76% F1 vs ~87% achievable with RGB-D fusion | M6 | Planned RGB-D extension to InferenceBackend; FusionPacket.image_jpeg already available |
| L6 | Fallback depth degrades localisation — `fallback_depth_m = 2.0` m when no sensor reading; position error ∝ \|true_depth − 2.0\| | — | — |
| L7 | Pose buffer dead zone — inference results arriving >200 ms after capture, or after >33 s buffer window, are silently dropped | Foundation | Monitor world_pos = None rate; log warn on tolerance vs. buffer-exhausted failures |
| L8 | Not yet near-visual-inspection equivalent — no documented MLIT/CONQUAS equivalence test; no IFC 4.3 metadata write-back in OSS layer yet | — | Addressed by the commercial compliance layer |
RGB-D Fusion (M6 enhancement)
The built-in inference model (M6) will be extended to accept an optional RGB channel alongside the depth map, forming an RGB-D input tensor. The FusionPacket already carries image_jpeg; the main change is in the inference module and model retraining.
Published benchmarks on concrete infrastructure damage detection show the impact:
| Input | F1 |
|---|---|
| 2D RGB only | 67.6% |
| 3D depth only | 76.0% |
| RGB-D fused | 86.7% |
This item is tracked as part of M6. It does not change the InferenceBackend trait signature — the RGB tensor is passed as an optional additional channel.
Demo Pipeline
Goal: Run a fully self-contained end-to-end demonstration of the Inspect CLI using open datasets — no production hardware or data required.
Prerequisites: M2, M3, M4 (CLI must be built and on PATH).
Steps:
- Download a public IFC file (buildingSMART BIMNet gallery) and an indoor LiDAR scan (S3DIS dataset).
- Use IfcOpenShell to sample the IFC surface into a reference point cloud.
- Use Open3D to introduce a controlled 15 mm deformation, producing a simulated scan with a known defect.
- Run `edgesentry-inspect scan --config config.toml` — the CLI loads the IFC, computes deviation, projects to a depth map, calls the HTTP inference server, back-projects detections, renders a heatmap, and writes the JSON report.
- Inspect `report.json` (`compliant_pct`, `max_deviation_mm`, `mean_deviation_mm`) and the PNG heatmap to verify the simulated defect is detected and quantified.
See Demo Pipeline for the full walkthrough.
Audit layer — ISO 19650
The ISO 19650 information container schema (BIM status transitions, conformant payload, third-party BIM tool interoperability) is implemented in the edgesentry-rs crate, not here.
See edgesentry-audit roadmap — Milestone 2.7 for the implementation plan.
Dependency graph
trilink-core #30, #31, #32, #33, #34 (foundation — complete)
└── M2 (IFC loader + deviation engine) [OSS]
└── M3 (heatmap rendering) [OSS]
└── M4 (field PC pipeline CLI) [OSS]
├── M5 (cloud sync) [OSS]
├── M6 (built-in inference model) [OSS]
└── Demo Pipeline (open datasets + CLI)
Commercial milestones (M4.5, M7, M8) → commercial compliance layer
Phase 2 (2D/MPA/JTC, 1D/NEA/PUB) → commercial compliance layer
The Phase 2 expansion (YOLO11/SAM 2 for 2D maritime/industrial, PatchTST/iTransformer for 1D time-series) is tracked in the commercial compliance layer. Development priority is 3D demo first, then 2D, then 1D.
Demo Pipeline
This page describes how to build a self-contained proof-of-concept demonstration using open datasets and the Inspect CLI. It is intended for use in technical evaluations and field demos before production data is available.
Open datasets
| Asset | Source | Notes |
|---|---|---|
| IFC design model | buildingSMART BIMNet gallery | Publicly shared IFC files from BIM award entries |
| 3D point cloud | S3DIS (Stanford Large-Area Indoor Spaces) | Indoor LiDAR scans of real buildings; well-suited for structural inspection scenarios |
Verify any IFC download URL before use. The buildingSMART gallery is the authoritative source; third-party mirrors may serve modified files.
Pipeline steps
Step 1 — Generate design point cloud from IFC
Use IfcOpenShell to sample the IFC surface geometry into a reference point cloud (the “ground truth” design). Each IfcProduct element is triangulated and its vertices collected into a flat (N, 3) array representing the design surface.
Step 2 — Simulate a damaged scan
Use Open3D to introduce controlled deformations into a copy of the design cloud, producing a simulated “as-built” scan with known defects. A representative demo deforms a localised region by 15 mm to simulate a surface depression, then saves the result as a PLY file.
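The Open3D step is a simple geometric edit; here is the same operation sketched in Rust for illustration (the demo itself uses Python/Open3D).

```rust
/// Push every point within `radius_m` of `centre` by `depth_m` along `dir`,
/// simulating a localised surface depression (15 mm in the demo).
/// Points are [x, y, z] in metres; `dir` should be a unit vector.
fn deform(points: &mut [[f32; 3]], centre: [f32; 3], radius_m: f32, dir: [f32; 3]) {
    let depth_m = 0.015; // 15 mm simulated defect
    for p in points.iter_mut() {
        // Squared distance to the deformation centre.
        let d2: f32 = (0..3).map(|i| (p[i] - centre[i]).powi(2)).sum();
        if d2.sqrt() <= radius_m {
            for i in 0..3 {
                p[i] += dir[i] * depth_m;
            }
        }
    }
}
```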
Step 3 — Compute deviation (M2)
Run the eds inspect scan CLI command, pointing it at the IFC design file and the simulated scan PLY. The CLI calls src/ifc.rs to load the design reference cloud, then src/deviation.rs to compute per-point nearest-neighbour deviation and emit a JSON report containing compliant_pct, max_deviation_mm, and mean_deviation_mm.
This step exercises src/ifc.rs and src/deviation.rs (M2).
Step 4 — Project 3D → 2D (trilink-core)
trilink-core::project_to_depth_map converts the scan point cloud into a depth map image for AI inference input. This is handled automatically by the CLI using the camera intrinsics in config.toml — no manual step is required.
This step exercises trilink-core::project_to_depth_map (foundation #31).
Step 5 — AI defect detection
A detection model runs over the depth map via the HTTP inference path (inference.mode = "http"). For demos, YOLOv8 can be used as the external inference server. The CLI sends the depth map image to the configured HTTP endpoint and receives bounding-box detections in return (M4).
Step 6 — Back-project 2D → 3D
Detected 2D bounding boxes are back-projected to world coordinates using trilink-core::unproject, then overlaid on the 3D model and included in the deviation report (M4).
Deviation engine in the demo
The deviation engine (M2) is the quantitative centrepiece of the demo. It answers the question “by how many millimetres does the as-built structure deviate from the IFC design?” — not just “is there an anomaly?”. Make sure Step 3 is demonstrated explicitly, as it differentiates this pipeline from a generic defect detector.
Tech stack summary
| Component | Language / Library | Roadmap milestone |
|---|---|---|
| IFC surface sampling | Python / IfcOpenShell | Demo setup (pre-M2) |
| Damage simulation | Python / Open3D | Demo setup only |
| IFC deviation engine | Rust CLI / src/ifc.rs, src/deviation.rs | M2 |
| 3D ↔ 2D projection | Rust / trilink-core | Foundation #31–#32 |
| AI defect detection | External HTTP server (e.g. YOLOv8) | M4 inference.mode = "http" |
| Report + heatmap | Rust CLI / src/report.rs, src/heatmap.rs | M2–M3 |
CLI Reference
eds inspect runs the M4 field PC pipeline: IFC reference + PLY scan → deviation → optional AI inference → heatmap + report.
Installation
For end users — Homebrew (macOS / Linux)
brew install edgesentry/tap/eds
For end users — pre-built binary
Download the latest release from the GitHub Releases page.
| Platform | File |
|---|---|
| Linux (x86-64) | eds-{version}-x86_64-unknown-linux-gnu.tar.gz |
| macOS (Apple Silicon) | eds-{version}-aarch64-apple-darwin.tar.gz |
| Windows (x86-64) | eds-{version}-x86_64-pc-windows-msvc.zip |
Extract and place the eds binary on your PATH:
# Linux / macOS
tar -xzf eds-{version}-{target}.tar.gz
sudo mv eds /usr/local/bin/
eds --help
# Windows (PowerShell)
Expand-Archive eds-{version}-x86_64-pc-windows-msvc.zip
# Move eds.exe to a directory in your PATH
eds --help
For developers — install from source
Requires Rust (stable toolchain).
cargo install --git https://github.com/edgesentry/edgesentry-rs --locked --bin eds
eds inspect scan
Run a full scan pipeline from a TOML config file:
eds inspect scan --config config.toml
| Flag | Description |
|---|---|
| `-c, --config` | Path to the TOML configuration file (required) |
Config file format
ifc_path = "path/to/design.ifc"
scan_path = "path/to/scan.ply"
[camera]
fx = 525.0
fy = 525.0
cx = 319.5
cy = 239.5
width = 640
height = 480
[inference]
mode = "off" # "off" or "http"
# endpoint = "http://localhost:8000/infer" # required when mode = "http"
[output]
dir = "out"
See config.example.toml for an annotated example.
Output files
| File | Description |
|---|---|
| `out/report.json` | `compliant_pct`, `max_deviation_mm`, `mean_deviation_mm`, optional detections |
| `out/heatmap.png` | Per-point deviation heatmap (blue = compliant, red = exceeds threshold) |
Inference modes
mode = "off" — deviation and heatmap only; no AI call.
mode = "http" — depth map is POSTed as a PNG to endpoint; the server must return a JSON array of bounding boxes:
[{"x": 10, "y": 20, "w": 50, "h": 60}, ...]
Detected regions are back-projected to world coordinates via trilink-core::unproject and included in report.json.
Building with optional features
The eds inspect scan command has no extra feature flags. Transport features (transport-http, transport-tls, etc.) apply only to eds audit serve* commands.
# default build — inspect scan works out of the box
cargo build -p eds
# with audit HTTP transport as well
cargo build -p eds --features transport-http
Contributing to EdgeSentry Inspect
Consistency Check
After every change — whether to code, tests, scripts, or docs — check that all three layers stay in sync:
- Code → Docs: If you add, remove, or rename a module, function, CLI command, or behavior, update all docs that reference it (`architecture.md`, `cli.md`, `demo.md`, `roadmap.md`).
- Docs → Code: If a doc describes a feature or command, verify it exists and works as described. Stale examples and wrong cargo feature names cause CI failures.
- Scripts → Code: If you rename a test file or cargo feature, update every script and workflow that references it (e.g. `.github/workflows/ci.yml`).
A quick grep before opening a PR:
# Find docs that mention a symbol you changed
grep -r "<old-name>" docs/ scripts/ .github/
Crate layout
| Crate | Purpose |
|---|---|
| `edgesentry-inspect` | IFC loader, deviation engine, heatmap renderer, JSON report |
| `eds` | Unified CLI binary — `eds inspect scan` entry point |
| `trilink-core` | Point cloud projection / unprojection (upstream dependency) |
Issue Labels
Every issue should carry one type label, one priority label, and one or more category labels.
Type labels
| Label | When to use |
|---|---|
| `bug` | Something is broken or behaves incorrectly |
| `enhancement` | New feature or improvement to existing behavior |
| `documentation` | Docs-only change — no production code affected |
Priority labels
| Label | Meaning | Examples |
|---|---|---|
| `priority:P0` | Must have — blocks a release or core pipeline functionality | Broken IFC loader, deviation engine panic, CLI crash on valid input |
| `priority:P1` | Nice to have — high value, scheduled for near-term | Built-in inference model, demo walkthrough, visualisation prototype |
| `priority:P2` | Good to have — valuable but deferrable | Compliance report generation, partner sensor plugins |
| `priority:P3` | Low priority — improvements with no urgency | CI optimisations, minor DX improvements |
When in doubt, ask: “Does this block a user from running eds inspect scan end-to-end?” If yes → P0. If it materially improves the experience → P1. If it is a milestone feature that can ship later → P2.
Category labels
| Label | When to use |
|---|---|
| `core` | Deviation engine, IFC geometry, heatmap, report serialisation |
| `compliance-governance` | CONQUAS / MLIT report generation, ISO 19650 integration |
| `devsecops` | CI/CD pipelines, static analysis, release automation |
| `platform-operations` | Field PC deployment, cloud sync, infrastructure |
| `hardware-needed` | Requires physical LiDAR / ToF sensor hardware (always pair with `priority:P2`) |
Pull Request Conventions
Always assign the PR to its author:
gh pr create --assignee "@me" --title "..." --body "..."
Mandatory: Run Tests After Every Code Change
After every code change, run:
cargo test --workspace
Do not consider a change complete until all tests pass.
Running Tests
Prerequisites (macOS)
brew install rustup-init
rustup-init -y
source "$HOME/.cargo/env"
rustup default stable
Unit tests
# All crates
cargo test --workspace
# Inspect crate only
cargo test -p edgesentry-inspect
Integration tests (CLI end-to-end)
cargo test -p eds --features transport-http,transport-tls --test cli_integration
Static Analysis and License Check
Run before opening a PR:
# Lint
cargo clippy --workspace --all-targets --all-features -- -D warnings
# Security advisories
cargo audit
# OSS license policy
cargo deny check licenses
Avoiding Conflicts with Main
Before starting work:
git fetch origin
git checkout main && git pull origin main
git checkout -b <your-branch>
Keep your branch up to date — rebase onto main before opening a PR:
git fetch origin
git rebase origin/main
Files most likely to conflict — coordinate before editing these:
| File | Why it conflicts often |
|---|---|
docs/inspect/en/src/demo.md | Multiple PRs extend the demo walkthrough |
docs/inspect/en/src/cli.md | Updated whenever CLI flags or subcommands change |
docs/inspect/en/src/roadmap.md | Milestone status updated as work completes |
.github/workflows/ci.yml | Touched by both feature and CI improvement PRs |