Validiti CSI

Sensory memory for machines.

Not artificial intelligence. The layer between a machine's senses and its brain that makes signals legible. Powered by the same Trinity Cortex succession engine that drives Validiti's language work, adapted for sensor streams. 0.2 ms sample lookups on a $5 microcontroller.

0.2 ms · sample lookup
8 MB · per channel
$5 · hardware (chip)
0 · GPU required

What it does

Four steps. The substrate watches; patterns emerge; the prediction is instant.

Step 1
Sensors read
Any source: IMU, encoder, force-torque, camera frames, microphone, LIDAR, environmental probes.
Step 2
CSI watches
Records every sample per channel. Quantises readings into state sequences at the wire layer.
Step 3
Patterns emerge
"Given this signal state, what state typically comes next? What's anomalous? What's normal-for-this-device?"
Step 4
Instant classification
Pattern lookup on the next sample in 0.2 ms. No GPU. No cloud.
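A miniature sketch of steps 2–4, assuming a simple bucket quantiser and a single-depth table. The state count, value bounds, and table layout here are illustrative assumptions, not CSI's actual internals:

```python
# Quantise a sensor stream into discrete states, record which state
# follows which, then classify the next sample by table lookup.
# All parameters are illustrative.

def quantise(value, lo=-1.0, hi=1.0, n_states=8):
    """Map a continuous reading into one of n_states buckets."""
    clamped = min(max(value, lo), hi)
    return min(int((clamped - lo) / (hi - lo) * n_states), n_states - 1)

class SuccessionTable:
    def __init__(self):
        self.counts = {}  # prev_state -> {next_state: count}

    def observe(self, prev_state, next_state):
        followers = self.counts.setdefault(prev_state, {})
        followers[next_state] = followers.get(next_state, 0) + 1

    def most_likely_next(self, state):
        """Table lookup -- no inference pass, no GPU."""
        followers = self.counts.get(state)
        if not followers:
            return None
        return max(followers, key=followers.get)

# Steps 1-2: sensors read, the substrate watches and records.
table = SuccessionTable()
readings = [0.1, 0.6, 0.1, 0.6, 0.1, 0.6]  # a toy oscillation
states = [quantise(r) for r in readings]
for prev, nxt in zip(states, states[1:]):
    table.observe(prev, nxt)

# Steps 3-4: the pattern is the recognition.
print(table.most_likely_next(quantise(0.1)))  # -> 6
```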

Think of it like your ears recognizing a familiar voice in a crowd. You don't sample-and-decode each phoneme. The pattern is the recognition. That's what CSI gives to machines — and paired with Validiti MCI, it closes the loop: sensors that recognize, motors that respond, both on the same substrate.

Why pattern memory beats inference at the sensor layer

Most sensor stacks summarise, discard, and forget. CSI lives at the layer underneath — where sample rate, retention depth, footprint, and what the substrate remembers about each sensor decide whether a machine can tell normal from anomalous.

Property | TinyML inference | Edge AI pipeline | Validiti CSI
Sample latency | 1–5 ms | 0.5–2 ms | 0.2 ms
Sample-rate breadth | 10–100 Hz typical | 100–500 Hz | 25 Hz – 5 kHz on the same engine
Hardware floor | $50 dev board | Industrial PC + accelerator | $5 microcontroller
Power draw | 100 mW – 1 W | 5–30 W | ~5 mW (battery weeks)
Adaptation | Retrain on a desktop | Hand-tune filters | Continuous observation
Pattern depth | Short window (8–32 samples) | None — per-frame inference | Multi-depth succession tables (2–10+ samples)
Data retention | Discarded after inference | Discarded each cycle | Every sample, indexed per channel per device
Per-sensor quirks | Same model everywhere | Generic plant assumption | Each sensor learns its own bias + noise floor
Offline operation | Cloud retraining loop | Yes | Yes — on-device entirely

What “more data, more deeply” means at the sensor layer

  • Long-range signal patterns survive. The vibration signature that predicts bearing failure isn't a single spike — it's the multi-second envelope of dozens of samples shifting in correlation. CSI's depth tables capture the whole signature. A short-window TinyML model forgets the earlier half before the spike arrives.
  • Rare anomalies stay in memory. The substrate doesn't sample-and-discard. Every reading enters the chain. The once-an-hour glitch the line operator only sees on the third shift — CSI saw it, recorded the succession, and can recognise it the next time it happens. A training-set-sampled detector doesn't even know it happened.
  • Per-sensor drift accumulates. Each sensor's succession tables grow with that sensor's history. After six months, two identical accelerometers have meaningfully different baselines because they shipped from different factory batches and lived on different machines. A centrally-trained model treats them as interchangeable forever.
  • Same engine across the rate range. A 5 kHz force-torque sensor and a 25 Hz environmental probe use the same substrate primitives — the depth tables, the per-sensor chains, the audit posture. Fleets that mix channel rates don't have to maintain separate inference stacks.
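The depth tables in the first bullet can be pictured as succession tables keyed on progressively longer contexts, backing off from the deepest context that has evidence. The depths, the back-off rule, and the class layout are illustrative assumptions, not CSI's actual structure:

```python
# Multi-depth succession tables: one table per context depth, keyed on
# the last k quantised states. Deeper keys remember the long envelopes
# a short-window model forgets. Layout is illustrative only.

from collections import defaultdict, deque

class MultiDepthTables:
    def __init__(self, depths=(2, 4, 8)):
        self.depths = depths
        self.tables = {k: defaultdict(lambda: defaultdict(int)) for k in depths}
        self.history = deque(maxlen=max(depths))

    def observe(self, state):
        # Record the new state under every context depth we have history for.
        for k in self.depths:
            if len(self.history) >= k:
                ctx = tuple(self.history)[-k:]
                self.tables[k][ctx][state] += 1
        self.history.append(state)

    def predict(self, default=None):
        """Back off from the deepest context with observed followers."""
        for k in sorted(self.depths, reverse=True):
            if len(self.history) >= k:
                followers = self.tables[k].get(tuple(self.history)[-k:])
                if followers:
                    return max(followers, key=followers.get)
        return default

m = MultiDepthTables()
for s in [1, 2, 3, 1, 2, 3, 1, 2, 3, 1, 2]:
    m.observe(s)
print(m.predict())  # after ...1, 2 the deepest matching context predicts 3
```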

Watch the rate gap, live

Three sensor pipelines on the same wall clock, each running at its real rate. The numbers reset every three seconds — watch what each stack kept while it was running.

TinyML inference · 100 samples/sec · last 16 samples retained, rest discarded
Edge AI pipeline · 500 samples/sec · nothing retained, discarded each frame
Validiti CSI · 5,000 samples/sec · every sample indexed per channel

In the same three-second window, CSI captures 15,000 sample snapshots while TinyML gets to 300 — and only CSI keeps all of them.
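The retention gap in that window is plain arithmetic. A sketch using the quoted rates and retention rules (the per-stack layout is just for illustration):

```python
# Samples seen vs. samples kept over the three-second demo window,
# using the rates and retention rules quoted above.

window_s = 3
stacks = {
    "TinyML inference": {"rate_hz": 100,   "retains": lambda seen: min(seen, 16)},
    "Edge AI pipeline": {"rate_hz": 500,   "retains": lambda seen: 0},
    "Validiti CSI":     {"rate_hz": 5_000, "retains": lambda seen: seen},
}

for name, s in stacks.items():
    seen = s["rate_hz"] * window_s
    kept = s["retains"](seen)
    print(f"{name:17s} saw {seen:6d} samples, kept {kept:6d}")
```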

→ See what this looks like on Optimus & Unitree

What it unlocks

Per-channel pattern memory, on-device, microsecond-class. Markets that have been waiting for a sensing layer that doesn't summarise everything to nothing the moment it arrives.

Medical sensing
$48B
ECG, EEG, EMG, pulse-oximetry, continuous glucose. Patient-specific pattern memory at the body-worn microcontroller. The wearable gets better at being theirs over weeks — no cloud round-trip, no shared model.
Automotive sensing
$54B
LIDAR / radar / IMU fusion at the per-sensor level. Each device learns its own bias and noise floor. Failure modes show up as drift in a per-device chain — not as anomalies in a fleet-wide aggregate.
Environmental + IoT
$18B
Air quality, water quality, soil moisture, structural health. Battery-powered sensors that run for weeks and recognise local-normal vs unusual without phoning home.
Robotics · closes the loop with MCI
$67B
Where MCI ships the motor decisions, CSI ships the sensor recognitions. Same substrate, same chains. Force-torque, encoders, vision frames feed CSI; pattern lookups feed back into MCI's succession tables. Closed-loop intelligence on $5 silicon.
Acoustic + speech sensing
$12B
Microphone-array pattern recognition for industrial fault detection, voice-event recognition, gunshot triangulation, wildlife tracking. Sub-millisecond classification on commodity microcontrollers.

Powered by Trinity Cortex

The same succession engine that drives Validiti's language work — adapted from words to sensor samples. Same primitives that power MCI on the motor side.

Pattern emergence at the wire layer

  • Sensor samples as state sequences. Continuous readings get quantised into discrete states per channel. The substrate watches the sequences and builds successor probability tables — the same shape it builds over words and tokens for the LLM cache, and over joint states for MCI.
  • Per-sensor chains. Each sensor channel has its own succession table, signed and verified per device. No central training infrastructure; no model that has to be re-deployed when one channel's bias drifts.
  • Same provenance posture. Every classification carries a chain back to the observed samples that produced it — the same audit shape every Validiti SKU ships.
  • Substrate, not framework. CSI sits below ROS, below the driver layer, below the SDK. Pattern lookups are emitted by the substrate; application code calls into it the way it calls into a kernel driver.
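One way to picture the successor-probability idea from the first bullet: counts over observed transitions become probabilities, and a transition far below the sensor's own baseline gets flagged. The table contents and the 5% threshold here are illustrative assumptions, not CSI's actual parameters:

```python
# Anomaly scoring on a successor-probability table: a transition's
# probability under the observed chain becomes its normality score.
# Counts and threshold are illustrative only.

from collections import defaultdict

counts = defaultdict(lambda: defaultdict(int))

# A toy sensor that almost always steps between states 3 and 4,
# with a rare glitch into state 7.
for prev, nxt in [(3, 4)] * 98 + [(4, 3)] * 98 + [(3, 7)] * 2 + [(7, 3)] * 2:
    counts[prev][nxt] += 1

def transition_prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

def is_anomalous(prev, nxt, threshold=0.05):
    """Flag transitions rarer than the threshold for this device."""
    return transition_prob(prev, nxt) < threshold

print(is_anomalous(3, 4))  # False -- the common transition
print(is_anomalous(3, 7))  # True  -- the once-in-a-while glitch
```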

Pricing

Pick the sample rate your sensors need. Pay per channel at that tier's published rate. Apex caps any single device at $2,499/month. Fleet handles anything past 1,000 samples/sec/channel or large multi-sensor deployments. Same price points as MCI — bundle them together to close the loop.

Introductory pricing. Early-adopter rates — locked in for the seats you take now.

Spark

$0
Hobby and student builds. Slow-rate sensing.
  • 25 samples/sec/channel
  • Unlimited channel count
  • Single device
  • Community support
  • No credit card

Hobby builds, classroom kits, environmental probes at slow rate, single-sensor learning rigs.

Reflex

$8 /channel /month
IMU sampling, basic condition monitoring, environmental sensing.
  • 25–150 samples/sec/channel
  • Unlimited channel count
  • Per-channel metering
  • Email support
  • API access

IMU sampling, encoder feedback at low rate, basic vibration monitoring, air-quality and water-quality probes, agricultural IoT.

Synapse

$24 /channel /month
Audio, vibration analysis, motor encoder feedback, wearable medical.
  • 150–500 samples/sec/channel
  • Unlimited channel count
  • Per-channel metering
  • Email support
  • API + audit endpoints

Audio + microphone-array pattern recognition, vibration spectrum analysis, motor encoders at full rate, wearable ECG / EEG / EMG.

Fleet

Custom /negotiated
LIDAR + radar fusion, multi-channel medical, autonomous-vehicle sensing.
  • 1,000+ samples/sec/channel
  • Negotiated annual contract
  • Dedicated engineering
  • Multi-sensor coordination
  • Custom support tier

LIDAR / radar / IMU fusion at autonomous-vehicle rates, multi-channel medical imaging, defense ISR sensor arrays, OEM-embedded sensor suites at scale.

Contact for fleet pricing →

A channel is one independently sampled sensor stream — a 3-axis IMU is 3 channels; a 16-electrode EEG is 16 channels; a single force-torque cell with 6 axes is 6 channels. Tier brackets are the supported sample rate per channel. Apex's $2,499/device monthly cap means a 50-channel fusion rig at the top of the Apex band pays the same as a 60-channel one. Anything above 1,000 samples/sec/channel, or fleet-scale multi-sensor deployments, is Fleet: contact@validiti.com for a negotiated annual contract.
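Channel counting and per-channel billing can be sketched as a quick calculator. The tier rates come from the cards above; the `channels` and `monthly_cost` helpers, and the assumption that the $2,499 per-device cap applies across tiers, are illustrative, not a real billing API:

```python
# Illustrative channel-count and cost arithmetic, using the published
# per-channel tier rates. Applying the $2,499/device cap to every tier
# is an assumption for this sketch, not documented billing behaviour.

TIER_RATE = {"Spark": 0, "Reflex": 8, "Synapse": 24}  # $/channel/month
DEVICE_CAP = 2_499  # per-device monthly cap quoted in the text

def channels(sensors):
    """A channel is one independently sampled stream."""
    return sum(sensors.values())

def monthly_cost(sensors, tier):
    return min(channels(sensors) * TIER_RATE[tier], DEVICE_CAP)

rig = {"3-axis IMU": 3, "16-electrode EEG": 16, "6-axis force-torque cell": 6}
print(channels(rig))                 # 3 + 16 + 6 = 25 channels
print(monthly_cost(rig, "Synapse"))  # 25 * $24 = $600/month
```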

Built-in guarantees

Every Validiti SKU inherits the same Safe · Fast · Smart guarantees from the shared substrate — encryption, tamper-evident history, runtime defense, predictable performance. Same code, same proof, same floor on every install.

See the core features →