rFabric

Layer 1: Data Foundation

Annotation & Labeling

Annotation & Labeling turns reviewed robot data into structured semantic signals for learning, evaluation, and operational analysis. It is designed to keep labels attached to their source data, schema, review state, and downstream dataset versions so annotations never become an isolated side system.

What This Surface Owns

This surface owns structured human and machine labeling across the data foundation.

  • Episode-level, phase-level, frame-level, and trajectory-level labels.
  • Annotation schema management and enforcement.
  • Multi-annotator workflows, review queues, and quality control.
  • Integration of human labels, model-assisted proposals, and programmatic annotations into the same entity model.

The goal is not only to label faster. It is to label in a way that remains reusable, reviewable, and lifecycle-safe.

Annotation Modalities

Episode-level labels

  • success and failure
  • quality tier
  • task variant
  • environment notes
  • operator or collection flags

Temporal annotations

  • task phases
  • failure windows
  • recovery segments
  • intervention periods
  • event-aligned markers on synchronized timelines

Frame and spatial annotations

  • 2D boxes
  • 3D boxes
  • segmentation masks
  • grasp points
  • contact points
  • target regions

Language and semantic labels

The platform should support whichever annotation type best reflects the learning problem rather than forcing every task into a single visual ontology.

  • instruction labels
  • task descriptions
  • semantic tags
  • multi-language or synonym-aware label layers when needed
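
One way to hold all of these modalities in a single entity model is a common annotation record with a modality-specific payload. The sketch below is illustrative only; the record fields, modality names, and example values are assumptions, not the platform's actual schema.

```python
from dataclasses import dataclass
from typing import Union

# Hypothetical payload shapes, one per modality family.
@dataclass
class EpisodeLabel:            # episode-level: success/failure, quality tier, task variant
    key: str                   # e.g. "outcome", "quality_tier"
    value: str                 # e.g. "success", "tier_a"

@dataclass
class TemporalSpan:            # task phases, failure windows, interventions
    label: str
    start_s: float             # offset on the synchronized timeline
    end_s: float

@dataclass
class SpatialShape:            # 2D/3D boxes, masks, grasp/contact points, target regions
    frame_index: int
    kind: str                  # "box2d", "box3d", "mask", "point"
    geometry: dict             # kind-specific coordinates

@dataclass
class SemanticText:            # instructions, task descriptions, semantic tags
    text: str
    language: str = "en"

Payload = Union[EpisodeLabel, TemporalSpan, SpatialShape, SemanticText]

@dataclass
class Annotation:
    """A single annotation attached to source data, whatever its modality."""
    episode_id: str
    schema_version: str
    source: str                # "human", "model_proposal", "programmatic"
    payload: Payload
    review_state: str = "draft"

# Example: a failure window and an instruction label attached to the same episode.
annotations = [
    Annotation("ep_0042", "v3", "human", TemporalSpan("failure_window", 12.4, 15.1)),
    Annotation("ep_0042", "v3", "human", SemanticText("place the mug on the shelf")),
]
```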

Schema And Ontology Management

Annotation quality depends on schema discipline.

Schema definition

  • Define allowed labels, enums, relationships, and required fields per task or program.
  • Separate global taxonomy from task-specific annotation rules.
  • Support evolution of schemas without detaching older annotated data.
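
One plausible shape for such a schema is a small declarative definition that layers task-specific rules on top of a global taxonomy. The class names, field names, and example labels below are illustrative assumptions, not the platform's actual schema format.

```python
from dataclasses import dataclass, field

@dataclass
class FieldSpec:
    """One annotation field: its allowed values (if enumerated) and whether it is required."""
    allowed: tuple[str, ...] | None = None   # None means free-form
    required: bool = False

@dataclass
class AnnotationSchema:
    version: str
    # Global taxonomy shared by every task in the program.
    global_fields: dict[str, FieldSpec] = field(default_factory=dict)
    # Task-specific rules layered on top of the global taxonomy.
    task_fields: dict[str, dict[str, FieldSpec]] = field(default_factory=dict)

    def fields_for(self, task: str) -> dict[str, FieldSpec]:
        """Effective rules for a task: global taxonomy plus task-specific overrides."""
        merged = dict(self.global_fields)
        merged.update(self.task_fields.get(task, {}))
        return merged

# Illustrative schema: global outcome/quality labels plus one task's extra required field.
schema_v3 = AnnotationSchema(
    version="v3",
    global_fields={
        "outcome": FieldSpec(allowed=("success", "failure"), required=True),
        "quality_tier": FieldSpec(allowed=("a", "b", "c")),
    },
    task_fields={
        "shelf_stocking": {
            "grasp_type": FieldSpec(allowed=("pinch", "power"), required=True),
        }
    },
)
```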

Validation

  • Prevent invalid labels, missing required fields, and incompatible annotation shapes.
  • Enforce consistency before annotations are approved for downstream use.
  • Keep schema version attached to every annotation object.
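
A minimal validation pass over that kind of schema might look like the sketch below. The rule structure, function name, and error strings are assumptions for illustration.

```python
# Illustrative per-task rules: allowed values (None = free-form) and required flags.
RULES = {
    "outcome": {"allowed": ("success", "failure"), "required": True},
    "grasp_type": {"allowed": ("pinch", "power"), "required": True},
    "environment_notes": {"allowed": None, "required": False},
}

def validate_labels(labels: dict[str, str], rules: dict) -> list[str]:
    """Return validation errors; only an empty list lets the annotation move toward approval."""
    errors = []
    for name, spec in rules.items():
        if spec["required"] and name not in labels:
            errors.append(f"missing required field: {name}")
    for name, value in labels.items():
        spec = rules.get(name)
        if spec is None:
            errors.append(f"unknown field: {name}")
        elif spec["allowed"] is not None and value not in spec["allowed"]:
            errors.append(f"invalid value {value!r} for field {name}")
    return errors

labels = {"outcome": "success", "grasp_type": "suction"}   # "suction" is not an allowed value
print(validate_labels(labels, RULES))
# ["invalid value 'suction' for field grasp_type"]
# Each approved annotation would also carry the schema version these rules came from.
```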

Migration and compatibility

  • Support schema revisions that map older labels to new structures when appropriate.
  • Make schema differences visible in review and dataset finalization rather than burying them inside export scripts.
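
A schema revision that renames fields or merges enum values could carry an explicit mapping instead of ad-hoc export logic. The mapping and label names below are hypothetical.

```python
# Hypothetical migration from schema v2 to v3: one renamed field plus remapped enum values.
V2_TO_V3 = {
    "fields": {"grasp_kind": "grasp_type"},                            # field renamed in v3
    "values": {"quality_tier": {"gold": "a", "ok": "b", "poor": "c"}}, # tiers renamed in v3
}

def migrate_labels(labels: dict[str, str], mapping: dict) -> dict[str, str]:
    """Rewrite v2 labels into v3 names and values, leaving unmapped labels untouched."""
    migrated = {}
    for name, value in labels.items():
        new_name = mapping["fields"].get(name, name)
        new_value = mapping["values"].get(new_name, {}).get(value, value)
        migrated[new_name] = new_value
    return migrated

old = {"outcome": "success", "grasp_kind": "pinch", "quality_tier": "gold"}
print(migrate_labels(old, V2_TO_V3))
# {'outcome': 'success', 'grasp_type': 'pinch', 'quality_tier': 'a'}
```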

Assisted Labeling

Model-assisted proposals

  • Seed boxes, segments, phase boundaries, or event markers from detectors, heuristics, or learned proposal systems.
  • Preserve proposal provenance so teams know what was machine-suggested versus human-authored.
  • Use proposals to reduce blank-slate annotation work rather than to bypass review.
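
Provenance can be preserved by recording who or what produced each proposal before a human touches it. The record fields and proposer names below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class ProposalProvenance:
    """Where a machine-suggested annotation came from, kept alongside the label itself."""
    proposer: str                # e.g. "phase_segmenter", "grasp_detector", "heuristic:gripper_open"
    proposer_version: str
    confidence: float
    human_edited: bool = False   # flipped when an annotator adjusts or confirms the proposal

# A seeded phase boundary, clearly marked as machine-suggested until review.
seed = ProposalProvenance(proposer="phase_segmenter", proposer_version="2024.06", confidence=0.81)
```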

Interpolation and tracking

  • Keyframe-based propagation across dense temporal data.
  • Object or trajectory tracking to avoid frame-by-frame re-labeling.
  • Useful for high-frame-rate robot video and repeated motion patterns.
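
Keyframe propagation for a 2D box can be as simple as linear interpolation between human-placed keyframes, with tracking substituted when motion is less smooth. The helper below is a toy sketch, not the platform's propagation logic.

```python
def interpolate_box(keyframes: dict[int, tuple[float, float, float, float]], frame: int):
    """Linearly interpolate an (x, y, w, h) box between the two nearest keyframes."""
    frames = sorted(keyframes)
    if frame <= frames[0]:
        return keyframes[frames[0]]
    if frame >= frames[-1]:
        return keyframes[frames[-1]]
    lo = max(f for f in frames if f <= frame)
    hi = min(f for f in frames if f >= frame)
    if lo == hi:
        return keyframes[lo]
    t = (frame - lo) / (hi - lo)
    return tuple(a + t * (b - a) for a, b in zip(keyframes[lo], keyframes[hi]))

# Two human keyframes, 29 interpolated frames in between instead of frame-by-frame labeling.
boxes = {0: (10.0, 20.0, 40.0, 40.0), 30: (70.0, 20.0, 40.0, 40.0)}
print(interpolate_box(boxes, 15))   # (40.0, 20.0, 40.0, 40.0)
```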

Programmatic annotation

  • Ingest labels from vendor systems or internal automation through the Platform API.
  • Attach automated reward-model outputs, heuristics, or external labeling service results to the same source entities.
  • Keep machine-generated annotations inside the same review and versioning system as manual ones.
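
Ingestion through the Platform API might look roughly like the sketch below. The endpoint path, payload fields, and auth header are assumptions for illustration, not the platform's documented API.

```python
import requests  # any HTTP client works; shown here for brevity

# A reward-model output attached to an existing episode as a programmatic annotation.
payload = {
    "episode_id": "ep_0042",
    "schema_version": "v3",
    "source": "programmatic",
    "producer": "reward_model_r7",        # provenance, just like model proposals
    "labels": {"outcome": "failure", "failure_window": [12.4, 15.1]},
    "review_state": "draft",              # enters the same review queues as human labels
}

resp = requests.post(
    "https://example.invalid/api/v1/annotations",   # hypothetical endpoint
    json=payload,
    headers={"Authorization": "Bearer <token>"},
    timeout=10,
)
resp.raise_for_status()
```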

Review And Quality Control

Multi-annotator workflows

  • Assign the same episode to multiple labelers when agreement matters.
  • Track disagreement and escalate conflict cases to reviewers.
  • Measure annotation quality using approval state and inter-annotator agreement where relevant.
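
For categorical episode labels, agreement between two annotators can be summarized with something like Cohen's kappa. The sketch below is a plain two-labeler implementation, not the platform's actual quality metric.

```python
from collections import Counter

def cohens_kappa(labels_a: list[str], labels_b: list[str]) -> float:
    """Chance-corrected agreement between two annotators on the same items."""
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    if expected == 1.0:
        return 1.0
    return (observed - expected) / (1 - expected)

a = ["success", "success", "failure", "success", "failure"]
b = ["success", "failure", "failure", "success", "failure"]
print(round(cohens_kappa(a, b), 2))   # 0.62; low values flag episodes for reviewer escalation
```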

Review queues

  • Route by task, modality, site, operator, model proposal source, or customer scope.
  • Support staged review such as draft → reviewer approved → production ready.
  • Make quality assurance explicit rather than dependent on informal spot checks.
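
Staged review can be enforced with an explicit state machine so approvals cannot skip steps. The states mirror the draft → reviewer approved → production ready flow above; the transition table itself is an illustrative assumption.

```python
# Allowed review transitions; anything not listed is rejected.
TRANSITIONS = {
    "draft": {"reviewer_approved", "rejected"},
    "reviewer_approved": {"production_ready", "rejected"},
    "rejected": {"draft"},            # sent back for correction
    "production_ready": set(),        # terminal until a new annotation version is opened
}

def advance(state: str, target: str) -> str:
    """Move an annotation to the next review stage, refusing any skipped or reversed step."""
    if target not in TRANSITIONS[state]:
        raise ValueError(f"cannot move annotation from {state!r} to {target!r}")
    return target

state = "draft"
state = advance(state, "reviewer_approved")
state = advance(state, "production_ready")
# advance(state, "draft") would raise: production-ready labels are not silently reopened.
```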

Annotation completeness

  • Track which required labels exist for each dataset purpose.
  • Surface incompleteness to Dataset Finalizer and Workflow Engine gates.
  • Prevent training-ready datasets from being built on partially labeled data unless that gap is explicitly allowed.
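
Completeness can be computed per dataset purpose and surfaced as a gate. The required-label map, purpose names, and threshold idea below are hypothetical examples.

```python
# Hypothetical requirements: which label fields each dataset purpose depends on.
REQUIRED_BY_PURPOSE = {
    "bc_training": {"outcome", "task_phase"},
    "failure_analysis": {"outcome", "failure_window"},
}

def coverage(episodes: dict[str, set[str]], purpose: str) -> float:
    """Fraction of episodes carrying every label the purpose requires."""
    required = REQUIRED_BY_PURPOSE[purpose]
    complete = sum(required <= labels for labels in episodes.values())
    return complete / len(episodes)

episodes = {
    "ep_0040": {"outcome", "task_phase"},
    "ep_0041": {"outcome"},                                   # missing task_phase
    "ep_0042": {"outcome", "task_phase", "failure_window"},
}
print(round(coverage(episodes, "bc_training"), 2))
# 0.67; below a gate threshold, dataset finalization would block or require an explicit override
```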

Versioning And Lineage

Labels need the same lifecycle rigor as datasets and models.

  • An annotation remains linked to the exact source session, episode, frame range, and schema version it refers to.
  • Reprocessing, re-cutting, or re-indexing data should not silently detach labels from source context.
  • Manual corrections, review decisions, and approval state remain visible across dataset versions.
  • Finalized datasets carry the annotation lineage they depended on.

This is what keeps labeling from becoming a disconnected sidecar database.
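
Concretely, each annotation can carry immutable references to everything it depends on, so reprocessing is detectable rather than silent. The reference fields below are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AnnotationLineage:
    """Immutable pointers from a label back to exactly what it describes."""
    session_id: str
    episode_id: str
    frame_range: tuple[int, int]
    schema_version: str
    source_data_hash: str       # changes if the episode is re-cut or re-indexed

def still_valid(lineage: AnnotationLineage, current_hash: str) -> bool:
    """If the source episode was reprocessed, the label must be re-reviewed, not silently reused."""
    return lineage.source_data_hash == current_hash

lin = AnnotationLineage("sess_17", "ep_0042", (120, 480), "v3", "sha256:<digest>")
```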

Relationship To Other Surfaces

Upstream

  • **Data Explorer** provides the synchronized review environment.
  • **Data Processing Pipeline** provides canonical episodes, frame indices, and alignment context.

Downstream

  • **Data Curation Engine** uses label completeness, class balance, phase tags, and quality signals.
  • **Dataset Finalizer** gates snapshot readiness on required annotation coverage.
  • **Evaluation & Release** can promote labeled failure windows and scenario tags into replay and benchmark packs.

Why This Matters Architecturally

Annotation is not useful simply because labels exist. It is useful when labels stay connected to:

  • source data
  • schema
  • review state
  • curation ruleset
  • dataset version
  • model and evaluation outcomes later on

That connection is what lets teams ask whether a model regression came from missing labels, wrong labels, incomplete ontology coverage, or outdated annotation schema.

Why Teams Care

Label quality

Schema enforcement, review queues, and provenance reduce noisy supervision.

Speed

Assisted labeling and interpolation improve throughput without sacrificing auditability.

Reusability

Labels remain attached to source data and dataset versions instead of getting lost in export pipelines.

Lifecycle fit

Annotation feeds curation, dataset finalization, evaluation, and retraining through one connected system.