Validiti
Humanoid platforms · what they are, what they could be

Two humanoids. One substrate.

Tesla Optimus runs on the Tesla FSD stack. Unitree's G1 and H1 run on NVIDIA Jetson Orin. Both reach for policy networks for motor control and edge-AI pipelines for sensor processing — the heaviest tools in the toolbox. Here's what's actually shipping today, and what becomes possible when Validiti MCI + Validiti CSI sit between every motor and every sensor.

The hardware as shipped

Both platforms are real, public, and ship with respectable spec sheets. Where they are identical is in what runs inside — policy NN on GPU/AI silicon for motion, edge-AI inference for sensing. That's the layer this page is about.

Tesla Optimus (Gen 2)

Tesla's bipedal humanoid. Vertically integrated — Tesla-designed actuators, Tesla-designed compute, Tesla-trained policy networks. Pre-production fleet; commercial target window 2026–2027.

  • Degrees of freedom · ~28
  • Battery · 2.3 kWh
  • Compute · Tesla FSD (in-house)
  • Runtime · ~5 h, light tasks
  • Target price · $20K–30K
  • Motion stack · Policy NN (Dojo-trained)
  • Sensor stack · Vision + edge AI

Unitree G1 / H1

Unitree's bipedal humanoids. Off-the-shelf hardware, Jetson Orin compute, vendor-supplied policy stack. Available now to researchers, integrators, and consumers — the cheapest competent humanoid on the market.

  • Degrees of freedom · 23–27
  • Battery · 9 Ah lithium
  • Compute · NVIDIA Jetson Orin
  • Runtime · ~2–4 h
  • Price · $16K (G1) / $99K (H1)
  • Motion stack · Policy NN (Orin inference)
  • Sensor stack · Depth cameras + IMU fusion

The control stack, side by side

Same metrics. Both robots today, and what changes when MCI + CSI take over the substrate layer (the column in the middle).

Metric | Optimus · today | With Validiti MCI + CSI | Unitree · today
Motor decisions / sec / DOF | ~100–200 Hz (policy NN inference) | up to 5,000 Hz (substrate) | ~100–200 Hz (Orin policy)
Decision latency | 5–10 ms per cycle | 0.2 ms per cycle | 5–10 ms per cycle
Sensor samples / sec / channel | ~100–500 Hz (edge AI) | up to 5,000 Hz (per channel) | ~100–500 Hz (edge AI)
Compute footprint · control | FSD board · proprietary | $5 microcontroller per DOF / channel | Jetson Orin · ~$600 module
Power · control stack | 30–100 W (continuous) | ~5 mW per DOF / channel | 15–60 W (Orin)
Data retained per device | Last ~30 frames (short window) | Every sample, indexed per device | Last ~30 frames (short window)
Pattern depth | Single-context window | Multi-depth succession (2–10+ steps) | Single-context window
Adaptation cadence | Cloud retrain (weeks) | Continuous on-device observation | Cloud retrain (weeks)
Per-device specialization | Same model on every Optimus | Each unit learns its own quirks | Same model on every G1/H1
Offline operation | Cloud retraining loop | Fully on-device, fleet-wide | On-device inference; retrain remote
Audit / provenance | Internal Tesla telemetry | Pacta-signed event chain, per device | Vendor-defined telemetry

Public-spec estimates as of 2026-05. Tesla and NVIDIA Jetson Orin numbers reflect typical policy-NN inference and edge-AI sensor pipelines as deployed in production-class robotics today, not the absolute maxima of either platform.
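The "pattern depth" row contrasts a single-context window with multi-depth succession over 2–10+ steps. The page doesn't say how MCI implements this, but the general technique can be sketched: quantize the motor or sensor stream into symbols, then count, at several depths, which event follows each context. The function name and toy stream below are illustrative, not a Validiti API.

```python
from collections import Counter, defaultdict

def succession_tables(events, depths=(2, 3, 4)):
    """Count how often each length-d context is followed by each next event.

    `events` is any quantized symbol stream (e.g. binned motor states).
    Returns {depth: {context_tuple: Counter of next events}}.
    """
    tables = {d: defaultdict(Counter) for d in depths}
    for d in depths:
        for i in range(len(events) - d):
            context = tuple(events[i:i + d])
            tables[d][context][events[i + d]] += 1
    return tables

# Toy stream: a repeating gait-like pattern with one irregularity.
stream = list("ABCABCABDABC")
tables = succession_tables(stream, depths=(2, 3))

# At depth 2, context ('A', 'B') is usually followed by 'C', once by 'D'.
print(tables[2][("A", "B")])   # Counter({'C': 3, 'D': 1})
```

Deeper contexts disambiguate what shallow ones can't, which is the point of tracking several depths at once rather than a single fixed window.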

Watch the rate gap, live

Three control stacks, same wall-clock, running at their real rates. Both humanoids hit the same policy-NN ceiling; the substrate doesn't. The counters reset every three seconds — watch what each stack kept.

  • Optimus · Policy NN · 200 motor decisions/sec/DOF · last ~30 frames retained, rest discarded
  • Unitree · Jetson policy · 200 motor decisions/sec/DOF · last ~30 frames retained, rest discarded
  • Validiti · MCI substrate · 5,000 motor decisions/sec/DOF · every decision indexed per device

In the same three-second window, the substrate captures 15,000 motor decisions per DOF while both humanoids' policy stacks get to 600 — and only the substrate keeps all of them. Same wall-clock. Same hardware target. Different ceiling.
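The arithmetic behind the demo is simple enough to make explicit; the rates below are the ones quoted in the table above.

```python
WINDOW_S = 3           # length of the demo window, seconds
POLICY_HZ = 200        # policy-NN motor decisions/sec/DOF (both humanoids)
SUBSTRATE_HZ = 5_000   # claimed MCI substrate rate, decisions/sec/DOF
RETAINED_FRAMES = 30   # short window the policy stacks keep

policy_total = POLICY_HZ * WINDOW_S        # 600 decisions per DOF
substrate_total = SUBSTRATE_HZ * WINDOW_S  # 15,000 decisions per DOF

print(substrate_total // policy_total)     # 25: the raw rate gap
print(RETAINED_FRAMES / policy_total)      # 0.05: policy stacks keep ~5%
```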

What this unlocks on each platform

The hardware doesn't change. The shell doesn't change. The brand doesn't change. What changes is the layer underneath — and that layer is where physical intelligence actually lives.

Optimus, on Validiti substrate

Optimus stays Optimus — but every unit develops its own motion personality.

Same shell, same brand, same trillion-dollar marketing tailwind. The change is structural, not cosmetic: every individual unit accumulates its own work history. Optimus #4017's hand isn't interchangeable with #4018 anymore — each one knows the quirks of its own actuators, the surfaces it works on, the reaches it favours.

Fleet-wide retrains stop being necessary. Each unit adapts as it works. Each force-torque sensor learns its own bias floor. The fleet stops being interchangeable and starts being specialized at scale.
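One way a unit could learn its own sensor bias floor is a slow exponential moving average over quiescent readings. This is a minimal sketch of the idea, not Validiti's method; the class name, `alpha` value, and readings are all illustrative.

```python
class BiasFloorEstimator:
    """Exponential moving average of quiescent sensor readings.

    While the actuator is idle, fold each force-torque reading into a
    slowly moving per-unit bias estimate; subtract it from live readings.
    Illustrative sketch only, not a Validiti API.
    """
    def __init__(self, alpha=0.01):
        self.alpha = alpha        # smoothing factor: small = slow drift
        self.bias = 0.0
        self.initialized = False

    def observe_idle(self, reading):
        if not self.initialized:
            self.bias = reading
            self.initialized = True
        else:
            self.bias += self.alpha * (reading - self.bias)

    def correct(self, reading):
        return reading - self.bias

est = BiasFloorEstimator(alpha=0.1)
for r in [0.52, 0.48, 0.50, 0.51, 0.49]:   # idle readings, true bias ~0.5 N
    est.observe_idle(r)
print(round(est.correct(2.0), 2))          # 1.49: live reading, bias removed
```

Because each unit runs its own estimator, two nominally identical sensors end up with different learned floors, which is exactly the per-device specialization the table describes.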

Unitree, on Validiti substrate

Unitree's hardware story gets the substrate it deserves.

The Unitree hardware story is already strong: affordable, available, mature. What's missing is the substrate that makes the hardware appreciate with use. Replace the Jetson-side policy with MCI and the same robot drops compute cost by 10–100× while running its motors at 5 kHz instead of 100 Hz.

Pair with CSI for per-channel sensor pattern memory, and the G1 starts behaving more like a body that learned, less like a robot that runs the same model as every other G1. Same chassis, very different machine.
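The retention difference between the two approaches is structural, not a tuning knob. A sketch under stated assumptions: the policy stack keeps a fixed sliding window, while a per-channel index keeps every sample keyed by channel and sequence number. Both classes and the channel name are illustrative, not the CSI API.

```python
from collections import deque

class SlidingWindow:
    """What the policy stacks keep: only the last N frames."""
    def __init__(self, n=30):
        self.frames = deque(maxlen=n)   # oldest frames fall off automatically

    def push(self, frame):
        self.frames.append(frame)

class ChannelIndex:
    """Per-channel retention: every sample kept, keyed by
    (channel, sequence number). Illustrative structure only."""
    def __init__(self):
        self.samples = {}
        self.seq = 0

    def push(self, channel, sample):
        self.samples[(channel, self.seq)] = sample
        self.seq += 1

window = SlidingWindow(n=30)
index = ChannelIndex()
for t in range(1000):                    # 1,000 samples on one channel
    window.push(t)
    index.push("depth_cam_0", t)

print(len(window.frames), len(index.samples))   # 30 1000
```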

The substrate that does this

Both halves of the loop — the motors and the sensors — ride the same Trinity Cortex engine. Same per-device chains. Same audit posture. Two products, one substrate, one bill.
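The table's audit row mentions a Pacta-signed event chain per device. The Pacta format itself isn't specified here, but the generic mechanism (a per-device key signing a hash-linked event log, so any edit to history breaks verification) can be sketched. Key handling and all names are illustrative; a real deployment would keep the key in hardware.

```python
import hashlib
import hmac
import json

DEVICE_KEY = b"per-device-secret"   # stand-in; real keys live in hardware

def append_event(chain, payload):
    """Append an HMAC-signed event whose body includes the previous
    entry's signature, forming a tamper-evident chain. Generic sketch,
    not the actual Pacta format."""
    prev = chain[-1]["sig"] if chain else "genesis"
    body = json.dumps({"prev": prev, "payload": payload}, sort_keys=True)
    sig = hmac.new(DEVICE_KEY, body.encode(), hashlib.sha256).hexdigest()
    chain.append({"body": body, "sig": sig})
    return chain

def verify(chain):
    """Walk the chain, checking both the link and the signature."""
    prev = "genesis"
    for entry in chain:
        if json.loads(entry["body"])["prev"] != prev:
            return False
        expected = hmac.new(DEVICE_KEY, entry["body"].encode(),
                            hashlib.sha256).hexdigest()
        if expected != entry["sig"]:
            return False
        prev = entry["sig"]
    return True

chain = []
append_event(chain, {"dof": 7, "cmd": 0.42})
append_event(chain, {"dof": 7, "cmd": 0.44})
print(verify(chain))            # True
chain[0]["body"] = chain[0]["body"].replace("0.42", "0.43")
print(verify(chain))            # False: tampering breaks the chain
```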