rFabric

Quickstart

This is the shortest practical path through the platform. The goal is not to force full migration on day one. The goal is to prove value with one real workflow while preserving the structure needed to expand into a full lifecycle system later.

Step 1 — Connect A Data Source

Start with the least disruptive entry point that matches how your team already collects data.

  • robot-side agent for continuous operational collection
  • direct cloud import for historical or edge-synced data
  • API, SDK, or CLI for developer-controlled ingestion

Platform surfaces involved

  • Robot Data Ingestion
  • Platform API
  • Identity, Access & Governance

What matters here

Capture robot identity, operator context, site or environment metadata, and source lineage at intake. That context becomes important much later when training, deployment, or incident analysis needs to trace back to the source.
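As a sketch, the intake context can be captured as one immutable record per ingestion event. The field names below (robot_id, operator_id, and so on) are illustrative assumptions, not the platform's actual schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical intake record: the context captured at ingestion so that
# training, deployment, or incident analysis can later trace back to the source.
@dataclass(frozen=True)
class IntakeContext:
    robot_id: str      # robot identity
    operator_id: str   # operator context
    site: str          # site or environment metadata
    source: str        # source lineage: robot agent, cloud import, or API/SDK/CLI
    ingested_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

ctx = IntakeContext(
    robot_id="arm-07",
    operator_id="op-142",
    site="plant-berlin",
    source="robot-agent",
)
```

Freezing the record at intake (rather than enriching it later) is the point: everything downstream can reference it without worrying that the context was edited after the fact.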

Step 2 — Process And Review The Data

Once data lands in the platform, process it into canonical episodes and inspect it in synchronized replay.

  • validate streams and timelines
  • inspect sensor completeness and alignment
  • review failures, anomalies, and obvious low-value sessions

Platform surfaces involved

  • Data Processing Pipeline
  • Data Explorer
  • Workflow Engine

What matters here

Processing should separate upload reliability from data correctness. Review should happen on the same structured entities that later labeling and curation will use.
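A minimal validation pass along these lines could check stream completeness and timeline alignment in one place. Stream names and the overlap rule are illustrative assumptions:

```python
# Hypothetical episode check: every expected sensor stream must be present,
# and stream timelines must overlap enough to align in synchronized replay.
EXPECTED_STREAMS = {"camera_front", "joint_states", "gripper"}

def validate_episode(streams: dict[str, tuple[float, float]],
                     min_overlap_s: float = 1.0) -> list[str]:
    """streams maps stream name -> (start_ts, end_ts). Returns a list of issues."""
    issues = [f"missing stream: {s}"
              for s in sorted(EXPECTED_STREAMS - streams.keys())]
    if streams:
        latest_start = max(t[0] for t in streams.values())
        earliest_end = min(t[1] for t in streams.values())
        if earliest_end - latest_start < min_overlap_s:
            issues.append("insufficient timeline overlap")
    return issues

ok = validate_episode({"camera_front": (0.0, 10.0),
                       "joint_states": (0.1, 10.2),
                       "gripper": (0.0, 9.9)})
bad = validate_episode({"camera_front": (0.0, 10.0)})
```

An empty issue list means the episode is structurally sound; a non-empty list flags a data-correctness problem even though the upload itself succeeded.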

Step 3 — Label What Matters

Apply the annotation schema needed for the task and the release process.

  • episode-level success and quality labels
  • temporal phase and failure window labels
  • frame or spatial labels when the task needs them
  • structured review and approval of labeling quality

Platform surfaces involved

  • Annotation & Labeling
  • Data Explorer
  • Identity, Access & Governance

What matters here

Labels should remain attached to source data, schema version, and review state so they stay usable when data is reprocessed or promoted into new dataset versions.
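One way to keep that attachment explicit is to make each label carry its source episode, schema version, and review state, with review transitions producing new records. This is a sketch under assumed field names, not the platform's label model:

```python
from dataclasses import dataclass

# Hypothetical label record: stays usable across reprocessing because it
# names its source episode, schema version, and review state explicitly.
@dataclass(frozen=True)
class Label:
    episode_id: str       # source data the label is attached to
    schema_version: str   # ontology version the label was authored against
    kind: str             # e.g. "episode_success", "phase", "failure_window"
    value: object
    review_state: str = "pending"   # pending -> approved / rejected

def approve(label: Label) -> Label:
    # Review produces a new record instead of mutating in place,
    # so the review history stays auditable.
    return Label(label.episode_id, label.schema_version, label.kind,
                 label.value, review_state="approved")

lbl = approve(Label("ep-001", "v2", "episode_success", True))
```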

Step 4 — Curate For Coverage, Not Just Volume

Use quality scoring, retrieval, and policy-based selection to choose what should influence training.

  • remove obvious garbage and low-signal duplication
  • mine failures and intervention-heavy episodes
  • balance by task, site, operator, or hardware revision
  • build a reproducible curation ruleset instead of one-off selections

Platform surfaces involved

  • Data Curation Engine
  • Dataset Finalizer
  • Data Explorer
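The difference between a ruleset and a one-off selection can be sketched as a list of named predicates applied to episode metadata. The field names and the failure-mining exception are illustrative assumptions:

```python
# Hypothetical curation ruleset: declarative, named rules that can be re-run
# on new data, with failure mining as an explicit exception to volume filters.
episodes = [
    {"id": "ep-1", "quality": 0.9, "site": "plant-a", "outcome": "success"},
    {"id": "ep-2", "quality": 0.2, "site": "plant-a", "outcome": "success"},
    {"id": "ep-3", "quality": 0.8, "site": "plant-b", "outcome": "failure"},
]

RULES = [
    ("drop_low_quality", lambda e: e["quality"] >= 0.5),
]

def curate(eps, rules):
    # Keep episodes that pass every rule, plus failures (mined regardless),
    # and return a deterministic, sorted membership list.
    selected = [e for e in eps
                if all(pred(e) for _, pred in rules)
                or e["outcome"] == "failure"]
    return sorted(e["id"] for e in selected)
```

Because the rules are data rather than ad-hoc queries, the same selection can be reproduced, audited, and versioned alongside the dataset it produced.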

Step 5 — Freeze A Dataset Snapshot

Finalize the selected data into an immutable dataset object with explicit split and manifest policy.

  • freeze exact membership
  • attach annotation and curation lineage
  • validate consistency and completeness
  • export manifests for training and evaluation systems

Platform surfaces involved

  • Dataset Finalizer
  • Unified Data Model
  • Workflow Engine

What matters here

The platform should hand training a governed dataset object, not an ambiguous folder path.
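A governed dataset object is, at minimum, exact membership plus a content digest. The sketch below assumes nothing about the platform's real manifest format; it only shows why a frozen snapshot is reproducible where a folder path is not:

```python
import hashlib
import json

# Hypothetical snapshot freeze: exact membership, explicit splits, and a
# deterministic digest over the manifest so identity is verifiable.
def freeze_snapshot(name: str, episode_ids: list[str],
                    split: dict[str, list[str]]) -> dict:
    membership = sorted(episode_ids)
    covered = set(sum(split.values(), []))
    assert covered == set(membership), "split must cover membership exactly"
    manifest = {"name": name, "episodes": membership, "split": split}
    digest = hashlib.sha256(
        json.dumps(manifest, sort_keys=True).encode()
    ).hexdigest()
    return {**manifest, "digest": digest}

snap = freeze_snapshot("pick-place-v1", ["ep-2", "ep-1"],
                       {"train": ["ep-1"], "val": ["ep-2"]})
```

Freezing the same membership twice yields the same digest, which is what lets training and evaluation agree on exactly which dataset they used.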

Step 6 — Train And Compare

Launch training from the finalized snapshot and compare candidates with explicit experiment structure.

  • run local, cloud, or hybrid compute
  • track checkpoints, config, and runtime metadata
  • compare runs inside experiments
  • promote only meaningful candidates into the registry

Platform surfaces involved

  • Training Orchestrator
  • Experiment Tracker
  • Model Registry
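"Promote only meaningful candidates" implies a comparison against the incumbent with an explicit margin, not just picking the best run. A minimal sketch, with metric names and the margin as assumptions:

```python
# Hypothetical promotion step: compare runs inside an experiment and promote
# a checkpoint only if it beats the current baseline by a meaningful margin.
runs = [
    {"run": "r1", "checkpoint": "ckpt-a", "success_rate": 0.71},
    {"run": "r2", "checkpoint": "ckpt-b", "success_rate": 0.78},
    {"run": "r3", "checkpoint": "ckpt-c", "success_rate": 0.74},
]

def promote_candidate(runs, baseline=0.72, margin=0.02):
    best = max(runs, key=lambda r: r["success_rate"])
    if best["success_rate"] >= baseline + margin:
        return best["checkpoint"]   # worth registering
    return None                     # not a meaningful improvement

candidate = promote_candidate(runs)
```

The margin keeps noise-level improvements out of the registry, so the registry stays a list of decisions rather than a dump of every run.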

Step 7 — Evaluate And Release

Before rollout, test the candidate against replay packs, benchmark suites, and promotion policy.

  • compare against baseline
  • apply thresholds for success, latency, and intervention budget
  • require approval where risk warrants it

Platform surfaces involved

  • Evaluation & Release
  • Model Registry
  • Workflow Engine
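The threshold checks above amount to a promotion gate: every limit must clear before release. The metric names and limits below are illustrative assumptions, not the platform's policy format:

```python
# Hypothetical promotion gate: a candidate passes only if it clears every
# threshold; failures are returned so the release record can cite them.
POLICY = {
    "min_success_rate": 0.75,
    "max_p95_latency_ms": 120.0,
    "max_interventions_per_hour": 2.0,
}

def gate(metrics: dict) -> tuple[bool, list[str]]:
    failures = []
    if metrics["success_rate"] < POLICY["min_success_rate"]:
        failures.append("success below threshold")
    if metrics["p95_latency_ms"] > POLICY["max_p95_latency_ms"]:
        failures.append("latency over budget")
    if metrics["interventions_per_hour"] > POLICY["max_interventions_per_hour"]:
        failures.append("intervention budget exceeded")
    return (not failures, failures)

passed, reasons = gate({"success_rate": 0.80,
                        "p95_latency_ms": 90.0,
                        "interventions_per_hour": 1.0})
```

Returning the failure reasons, not just a boolean, is what makes the approval step reviewable where risk warrants it.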

Step 8 — Deploy, Monitor, And Close The Loop

Package the approved model, stage rollout, monitor the cohort, and feed field evidence back into the next development cycle.

  • build artifact
  • canary rollout
  • monitor telemetry and intervention rate
  • create incidents or maintenance cases when needed
  • promote valuable field evidence into new data and evaluation coverage

Platform surfaces involved

  • Artifact Builder
  • Deployment Manager
  • Update Manager
  • Fleet Management
  • Telemetry & Monitoring
  • Maintenance System
  • Human-in-the-Loop Operations
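The canary step above reduces to a recurring decision: compare the canary cohort's intervention rate to the incumbent's and proceed, hold, or roll back. The tolerance value is an illustrative assumption:

```python
# Hypothetical canary decision: compare the canary cohort's intervention rate
# against the incumbent model and pick the next rollout action.
def canary_decision(canary_rate: float, baseline_rate: float,
                    tolerance: float = 0.10) -> str:
    if canary_rate <= baseline_rate:
        return "proceed"    # no regression: widen the rollout
    if canary_rate <= baseline_rate * (1 + tolerance):
        return "hold"       # within tolerance: keep canary size, gather evidence
    return "rollback"       # regression beyond tolerance: open an incident
```

Episodes gathered during a hold or rollback are exactly the field evidence worth promoting into new data and evaluation coverage for the next cycle.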

Early Decisions Worth Making Carefully

Primary ingestion path

Choose the entry point that matches today’s operations, not an imaginary end state.

Annotation schema

Define the task ontology before scaling collection so labeling stays consistent.

Curation policy

Decide what “good data” means in a reproducible way.

Release criteria

Define benchmark and promotion rules early so teams are not inventing standards after the candidate already exists.

Residency and compliance boundary

Decide early whether the deployment needs region-bound storage, cross-border transfer controls, sovereign hosting, or restricted teleoperation and support access. These decisions affect ingestion, API access, rollout topology, and operations workflows.

What A Good Pilot Looks Like

A strong first implementation proves all of the following:

  • data enters with real provenance
  • teams can review and label it in one place
  • datasets become immutable, explicit objects
  • training is reproducible
  • release decisions use structured evidence
  • rollout and field feedback remain connected to the same lineage chain

That pilot is the best foundation for expanding the rest of the lifecycle into the platform.