Validiti
Production-ready · 455 tests · VSS-backed
Validiti Atlas

Per-user AI infrastructure. Every answer signed. Every user gets a receipt.

A per-user partition over the VSS-backed Accelerate cache. Every interaction signed by Pacta. Every answer carries a verifiable receipt the user can hand to a regulator. Sealed, queryable, drift-maintained.

We built the category. Nobody else has the substrate to imitate it.

↓ What it does, in plain English

What is Atlas?

Most AI is one giant pool. Everyone shares the same model, the same memory, the same logs. Atlas gives each user their own private corner — signed, sealed, and provable.

Each user, their own AI

Alice is a cardiologist. Bob is an orthopedic surgeon. They both work at the same hospital, both use the same AI tool. Their conversations never cross. Not "we promise" — mathematically can't.

Every answer has a receipt

When the AI answers a question, that answer gets a signed timestamp, a verifiable source trail, and a permanent slot in that user's audit history. Six months later in a malpractice review or a SOX audit: receipts on demand.

The AI gets cheaper with use

If 50 users ask the same question, the AI answers once, then hands the signed answer to the others — instantly, with no extra LLM bill. The substrate becomes the answer.

You own your data trail

Anthropic doesn't hold your audit logs. OpenAI doesn't hold your audit logs. You hold your audit logs, signed under your own master key. Anyone can verify them; nobody can rewrite them.

Atlas runs inside Validiti Accelerate's cache — the layer that already sits between your users and the LLM. Adding Atlas turns that cache from an anonymous shared pool into a per-user verifiable record.

A query, step by step

What happens when Alice the cardiologist asks Atlas a question.

1
Identify Alice
Atlas finds Alice's private slice and loads her rules — what corpora she can read, which verdicts are allowed, her audit history.
2
Look five places at once
Atlas checks five different ways the answer might already exist — recent phrasing, related topics, semantic neighborhoods. The best matches go to the LLM as context.
3
Grade the sources
Every piece of context gets labeled verified, partial, or no source. Alice is a clinician, so she gets only fully verified info.
4
Talk to the LLM — with guardrails
The LLM writes the answer. Atlas watches in real time; if the model drifts off-source, Atlas nudges it back before Alice sees anything wrong.
5
Sign and file
The final answer gets signed by Alice's slice key and filed in her audit trail. Six months later, anyone with the right key can prove what Atlas said and why.
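The five steps above can be sketched end to end. This is an illustrative sketch only — every name here (the slice store layout, the lookup paths, the signing scheme) is an assumption for the example, not the Atlas API.

```python
# Illustrative sketch of the five-step query flow described above.
# All names and structures are hypothetical, not the Atlas API.
import hashlib
import json
import time

def answer_query(slice_store, user_id, question, llm):
    # 1. Identify the user and load their slice rules.
    slice_ = slice_store[user_id]

    # 2. Check several lookup paths for context that may already exist.
    context = []
    for path in ("recent", "related", "semantic"):
        context.extend(slice_["cache"].get(path, []))

    # 3. Grade the sources; a strict (clinician) slice sees only verified ones.
    allowed = [c for c in context
               if c["grade"] == "verified" or not slice_["strict"]]

    # 4. Ask the LLM with the graded context as grounding.
    answer = llm(question, allowed)

    # 5. Sign the answer with the slice key and file it in the audit trail.
    record = {"q": question, "a": answer, "ts": time.time()}
    digest = hashlib.sha256(
        (slice_["key"] + json.dumps(record, sort_keys=True)).encode()
    ).hexdigest()
    slice_["audit"].append({**record, "sig": digest})
    return answer, digest
```

The point of the sketch is the shape of the pipeline: lookup and grading happen before the LLM is ever called, and signing and filing happen before the user sees the answer.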

What slices can do

Ten things, told plainly. Three a slice does on its own; seven it does with other slices, with the LLM provider, or with other Validiti products. Every interaction is signed; every signed interaction lands in the user's audit trail.

On its own · 01

Look five places at once

When the user asks a question, the slice checks five different ways the answer might already exist — recent phrasing, related topics, semantic neighborhoods. The best match goes to the LLM as context.

Example: Alice asks "drug interactions for warfarin?" The slice instantly surfaces her last warfarin question, hospital protocol updates, and FAERS safety signals.

On its own · 02

Catch hallucinations mid-stream

While the LLM is writing, the slice checks each sentence against verified sources. If the model wanders off, the slice nudges it back before the answer reaches the user.

Example: The LLM starts inventing a dosage. The slice spots no matching source, injects the correct dosage from FAERS, and the user never sees the made-up version.
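The mid-stream guard can be pictured as a filter over generated sentences. A minimal sketch, assuming a naive substring match stands in for the real source-matching logic:

```python
# Hypothetical sketch of sentence-level grounding: each generated sentence
# must match a verified source, or a sourced correction is substituted.
def guard_stream(sentences, verified_facts):
    """Yield each sentence only if a verified fact backs it;
    otherwise yield a sourced correction in its place."""
    for s in sentences:
        if any(fact in s for fact in verified_facts):
            yield s
        else:
            # No matching source: the user never sees the unsourced claim.
            yield "[corrected] " + verified_facts[0]
```

The real check would be semantic rather than substring-based; the sketch only shows where the interception sits — between the model's output and the user.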

On its own · 03

Grade every source

Each piece of information used to answer gets a label: verified, partial, or no source. A clinician's slice can be set to show only fully verified info; a researcher's can see partial.

Example: A medical slice never serves an unsourced answer. A research slice gets a "partial source" flag instead of an outright refusal.

With others · 04

Hand a cached answer to a peer

If one slice already has a high-quality answer, it can pass that signed answer to another slice — instantly, without re-querying the LLM. Both sides record the handoff.

Example: 50 doctors ask the same drug-interaction question. The first costs an LLM call. The other 49 get the signed answer instantly.
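A handoff like the one above has three parts: a signature over the answer, a record on both sides, and no LLM call. A sketch, with `hmac` standing in for the real signature scheme and all field names assumed:

```python
# Sketch of a signed cache handoff between two slices. hmac is a stand-in
# for the real signing scheme; the dict layout is illustrative.
import hashlib
import hmac

def sign(key, payload):
    return hmac.new(key, payload.encode(), hashlib.sha256).hexdigest()

def hand_off(sender, receiver, answer):
    sig = sign(sender["key"], answer)
    # Both sides record the handoff in their audit trails.
    sender["audit"].append(("sent", answer, sig))
    receiver["audit"].append(("received", answer, sig))
    receiver["cache"][answer] = sig   # served from cache: no LLM re-call
    return sig
```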

With others · 05

Broadcast corrections

If one slice discovers a fact has gone stale (a drug recall, a retracted citation), it can push that signal to subscriber slices. Their caches invalidate automatically.

Example: The hospital's pharmacy slice notes a new black-box warning. Every clinician's slice that subscribed updates within seconds.

With others · 06

Pass a query to a peer

If a slice's policy is too strict to answer a question, it can hand the question to a different slice that's allowed. Both sides record the handoff.

Example: A clinician's strict slice can't show a research preprint. It hands the question to the same user's research slice, which can.

Upstream · 07 · EST

Send anonymous trends upstream

If users opt in, slices send aggregate trend data (never their actual queries) to the LLM provider. Privacy is preserved with k-anonymity and noise. Off by default.

Example: Anthropic learns "this hospital network is asking about drug X 40% more this week" — never the patient names, never the queries.
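The suppress-or-noise decision can be sketched directly. The threshold, the noise distribution, and the data layout below are illustrative assumptions, not Atlas's actual parameters:

```python
# Sketch of the upstream privacy gate: publish an aggregate only when at
# least K organizations contributed, and add noise to mask any one of them.
import random

K_ANONYMITY = 50  # illustrative threshold, not the shipped value

def upstream_signal(topic_counts_by_org, topic, rng=None):
    rng = rng or random.Random(0)
    contributors = [org for org, counts in topic_counts_by_org.items()
                    if counts.get(topic, 0) > 0]
    if len(contributors) < K_ANONYMITY:
        return None   # too few contributors: suppress entirely
    total = sum(c.get(topic, 0) for c in topic_counts_by_org.values())
    noise = rng.gauss(0, 2)   # masks any single org's contribution
    return max(0.0, total + noise)
```

Note the fail-closed default: below the threshold nothing leaves at all, rather than a less-noisy approximation.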

From provider · 08

Capture the LLM's signature

When the LLM provider signs its response, Atlas saves the signature into the user's audit trail. Months later, you can prove exactly what the provider said.

Example: Malpractice review six months later. The clinician can prove Anthropic answered "X" at 2:14 PM on Tuesday — cryptographically.
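Capturing and later re-verifying a provider signature is a two-function pattern. A sketch, with `hmac` standing in for whatever signature scheme the provider actually uses:

```python
# Sketch of storing a provider-signed response and proving it later.
# hmac stands in for the provider's real signature scheme.
import hashlib
import hmac

def capture(audit_trail, response, provider_sig):
    audit_trail.append({"response": response, "provider_sig": provider_sig})

def verify(audit_trail, index, provider_key):
    entry = audit_trail[index]
    expected = hmac.new(provider_key, entry["response"].encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, entry["provider_sig"])
```

Verification needs only the stored entry and the provider's key material — months later, any tampering with the stored response makes the check fail.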

From provider · 09

Receive provider corrections

The LLM provider can push corrections downstream — "this fact changed, drop your cached answer." Slices subscribed to the provider update automatically.

Example: Anthropic identifies a known model error. Subscribed slices invalidate affected answers and refuse stale results until refreshed.

From other Validiti SKUs · 10

Connect to the rest of the stack

Other Validiti products (ShiftCAPTCHA, DMS, Pacta) can push signed events into a slice's audit trail — "this user passed a CAPTCHA," "this source was reverified," "this event was Pacta-signed."

Example: Your DMS instance flags a citation as retracted. Every slice that depended on it updates, automatically.

Why the privacy story holds up

Most "privacy-preserving AI" pitches stop at policy slides. Atlas's privacy guarantees are shipped in code, not promised in docs. Four pieces.

Mathematical anonymity

When upstream signals leave a hospital, they're aggregated with at least 50 other hospitals first. Statistical noise is added so no single hospital's contribution can be traced. Verifiable from the receipt — no trust required.

Per-user limits, enforced

"Each user can send at most 100 signals per day, 10,000 in their lifetime." Not a guideline — a hard cap, enforced by the substrate. Changing the cap is itself an audit event.
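A hard cap is a refusal, not a warning. A sketch of the enforcement shape, using the caps quoted above (the class and method names are assumptions):

```python
# Sketch of hard per-user signal budgets. Caps are taken from the text
# above; the class itself is illustrative, not the Atlas API.
DAILY_CAP = 100
LIFETIME_CAP = 10_000

class SignalBudget:
    def __init__(self):
        self.today = 0
        self.lifetime = 0

    def try_send(self):
        """Count the signal and return True, or refuse at the cap."""
        if self.today >= DAILY_CAP or self.lifetime >= LIFETIME_CAP:
            return False   # hard refusal enforced by the substrate
        self.today += 1
        self.lifetime += 1
        return True
```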

Money trails that match

Every rebate the LLM provider pays is computed by walking the audit trail and signed by your master key. The CFO and the provider's BD look at the same number, signed by you.
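"Computed by walking the audit trail" means the number is a pure function of the trail: anyone who walks it gets the same statement. A sketch under assumed field names, with `hmac` standing in for the master-key signature:

```python
# Sketch of a rebate statement derived from the audit trail and signed
# under the customer's master key. Field names and the hmac scheme are
# illustrative assumptions.
import hashlib
import hmac
import json

def rebate_statement(audit_trail, rate_per_hit, master_key):
    hits = sum(1 for e in audit_trail if e.get("type") == "cache_hit")
    statement = {"cache_hits": hits, "rebate": round(hits * rate_per_hit, 2)}
    sig = hmac.new(master_key,
                   json.dumps(statement, sort_keys=True).encode(),
                   hashlib.sha256).hexdigest()
    return statement, sig
```

Because the statement is derived deterministically, recomputing it from the same trail yields the same signed number — which is why the CFO and the provider's BD can agree on it.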

Nothing is unverifiable

Every interaction — query, answer, transfer, signal — lands a receipt in the user's audit trail. An auditor can walk it forwards or backwards and get the same answer either way.
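"Walk it forwards or backwards and get the same answer" is the property of a hash chain: each entry commits to its predecessor, so any mutation anywhere breaks verification. A sketch with assumed field names:

```python
# Sketch of a tamper-evident, hash-chained audit trail. Field names are
# illustrative, not Atlas's actual ledger format.
import hashlib
import json

def append_event(chain, event):
    prev = chain[-1]["hash"] if chain else "genesis"
    body = json.dumps({"event": event, "prev": prev}, sort_keys=True)
    chain.append({"event": event, "prev": prev,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify_chain(chain):
    prev = "genesis"
    for entry in chain:
        body = json.dumps({"event": entry["event"], "prev": entry["prev"]},
                          sort_keys=True)
        if entry["prev"] != prev:
            return False
        if entry["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = entry["hash"]
    return True
```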

455 tests, 38 files, every claim above shipped in production code. The technical implementation lives in the API guide.

The marketplace

When slices can negotiate, real markets emerge. Atlas ships six of them — one to sell answers, one to sell signal, one to gate trades on policy, one to find peers, one to pool data with peers, and one for non-GPU users to do CPU work for GPU-heavy users.

Marketplace · 01

Sell your cached answers

If your slice has answers other people will pay for, post a price. Buyers post a budget. The market matches them, with both sides on record.

Example: Hospital A has cached rare-disease answers. Hospital B asks the same questions. Atlas matches A's offer to B's bid; B gets a faster signed answer, A gets paid.

Marketplace · 02

Let providers bid for your signal

If you opt in to send aggregate trends upstream, multiple LLM providers can compete for it. Atlas routes each signal to the highest payer in real time.

Example: Anthropic offers 5¢ per signal, OpenAI offers 7¢. Atlas routes your signal to OpenAI this week. Next week Anthropic's offer is higher; routing flips automatically.
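The routing rule is simple: re-evaluate the bids on every signal and record the outcome. A sketch (the bid and receipt shapes are assumptions):

```python
# Sketch of highest-bidder signal routing with an audit receipt per signal.
# Bid and receipt shapes are illustrative.
def route_signal(bids, audit_trail):
    """bids: {provider_name: price_per_signal}. Returns the winner."""
    winner = max(bids, key=bids.get)
    audit_trail.append({"routed_to": winner, "price": bids[winner]})
    return winner
```

Because the decision is made per signal, a changed bid flips the routing on the very next signal — no reconfiguration step.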

Marketplace · 03

Trade only with peers you trust

Set rules that every trade must satisfy — "the other side must be at least as careful as we are." Atlas checks the rules at the moment of the trade.

Example: Your hospital won't share with peers whose retention is shorter than 7 years. Atlas refuses any trade that fails the test — automatically, no manual review.
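The policy gate is a predicate evaluated at trade time. A sketch using the retention rule from the example (field names are assumptions):

```python
# Sketch of a policy-gated trade check, mirroring the retention example
# above. Field names are illustrative.
def trade_allowed(my_policy, peer_profile):
    """Every trade must satisfy the local policy at the moment of trade."""
    return peer_profile["retention_years"] >= my_policy["min_retention_years"]
```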

Marketplace · 04

Find peers to trade with

Slices can advertise what they have without revealing who they are. Buyers query "anyone offer cache for this topic?" and get a list of trustworthy candidates.

Example: A research slice asks "anyone with cached cardiology answers?" The catalog returns three signed advertisers; the slice picks the best price + quality.

Marketplace · 05

Pool with other customers

Two organizations can co-sign an agreement to combine their slices' anonymous aggregates — getting better statistical privacy than either could alone.

Example: Two regional hospitals each have 30 patients with a rare condition. Pooled, they have 60 — enough to safely contribute aggregate signal. Neither could do it alone.

Marketplace · 06

Sell CPU cycles back to the network

The flip side of selling cached answers. If your slice has spare CPU, post an offer to do substrate housekeeping — verifying audit chains, batching anonymous signals, validating cache freshness. GPU-heavy users post the work; you do it; you get paid.

Example: A small clinic has spare CPU overnight. A research hospital posts "verify these 10K audit chains, $0.50 each." The clinic's slice picks up the work and gets paid — the research hospital saves CPU for GPU work.

How Atlas compares

The honest read: most of these capabilities don't exist anywhere else. There's no equivalent product because there's no equivalent substrate. Here's the chart.

The chart compares Validiti Atlas against five alternative categories: RAG frameworks (LangChain · LlamaIndex), vector DBs (Pinecone · Weaviate), provider memory (Anthropic · OpenAI), vertical SaaS (Glean · Harvey), and AI governance platforms (Credo · Holistic). Atlas ships every capability below; where an alternative offers a partial equivalent, the chart's note is kept in parentheses.

  • Each user gets their own private slice (closest alternative: namespaces only)
  • Customer holds the audit log, not the vendor (alternatives: vendor-held)
  • Hand a cached answer to a peer (no LLM re-call)
  • Push corrections to subscribed peers
  • Pass a query to a peer with different policy
  • Send anonymous trends upstream, k-anonymity + noise shipped (closest alternative: policy only)
  • Capture and store the LLM's signed response
  • Receive provider corrections downstream
  • Connect to other security/audit products
  • Sell your cached answers to peers
  • Let LLM providers compete for your signal
  • Trade only with peers meeting your policy bar (closest alternative: policy management)
  • Find peers with anonymous capability discovery
  • Pool data with other customers for stronger anonymity
  • Sell CPU cycles for substrate housekeeping work
  • Signed, accountable rebate statements
  • Use any LLM provider (alternatives: single-vendor or internal only)
  • Storage with built-in integrity checks

Where they win

  • Pinecone, Weaviate — raw retrieval at billion-vector scale. Use them for retrieval; let Atlas govern the slice layer.
  • Glean, Hebbia, Harvey — polished frontends with thousands of customers. Best fit if you need a turnkey enterprise-search or legal-research UI today.
  • Anthropic Memory, OpenAI Custom GPTs — hosted simplicity. No install, no keys, no infrastructure. Pick them if you don't mind the vendor holding everything.
  • LangChain, LlamaIndex — fast prototyping with massive ecosystem. They live above the substrate — you can use both.
  • Credo AI, Holistic AI — governance dashboards. They observe systems Atlas governs internally. Complementary, not competing.

Vertical surface fits

Verticals aren't the product — the substrate is. But the substrate has obvious surface fits, and Atlas ships pre-built role bundles for the four most-asked-about regulatory regimes (HIPAA / SOX / ABA Model Rules / GDPR). Each is a thin wrapper over the same primitives above; verticals are how you land design partners, not what Atlas is.

Healthcare

Doctor-specific clinical assistant

A hospital's LLM-powered tool. Each doctor's queries hit their substrate slice — their patient panel, their specialty corpus, their hospital's protocols. Cross-patient leakage prevented by Brain Key isolation. Each query HIPAA-audited per doctor.

Legal

Attorney-specific research

A law firm's research LLM. Each attorney's substrate carries their cases, their privileged matters, their authorized corpora. Cross-matter contamination structurally impossible. E-discovery defensible: every attorney's research trail is signed and durable.

Multi-tenant SaaS

Per-customer personalization at scale

SaaS company offering an LLM-powered tool to N enterprise customers. Each customer's substrate is isolated, brandable, configurable. Each customer's data never trains anyone else's model. Per-customer audit and provenance compliance.

Financial

Analyst-specific research

Investment firm's research LLM. Each analyst's slice has their coverage universe, their proprietary models, their authorized data. Insider-information firewalls enforced at substrate level. Every recommendation auditable to the source data.

LLM provider integration

Per-user "Memory+" for hyperscaler chat products

An LLM provider plugs Atlas in as the per-user reasoning backbone. Every API user gets durable, signed, provenance-graded memory. Branded as "Anthropic Memory" or similar — Atlas the engine, hyperscaler the surface. Massive scale.

Government / regulated

Cleared-personnel research with corpus ACLs

Each cleared researcher's substrate slice has their authorized clearance level baked into the corpus ACL. Cross-clearance leakage prevented at the substrate, not at the prompt layer. Audit-chain admissible.

Three ways to buy

Same substrate, same guarantees, three shapes. Pick the one that matches your buyer profile.

Shape 01

Atlas SDK

For developers building on the substrate. The caller-facing SDK ships in Python and JavaScript; the negotiation primitives are exposed as claim(), transferCacheTo(), publishDrift(), requestHandoff(). Drop the SDK into your stack, point it at an Atlas daemon, build whatever the substrate makes possible.

Buyer: developers building on top of an Accelerate-running deployment; Atlas is the negotiation client they don't have to write
Free with Accelerate
Python + JS at launch · Go + Rust on roadmap
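What calling the SDK might look like. The primitive names (`claim()`, `transferCacheTo()`) come from the description above, but their signatures, the `AtlasClient` wrapper, the return shapes, and the daemon address are all assumptions for illustration — not the shipped API:

```python
# Hypothetical usage sketch of the Atlas SDK. The primitive names are from
# the page; everything else (class, signatures, return shapes, URL) is an
# assumption made for this example.
class AtlasClient:
    def __init__(self, daemon_url):
        self.daemon_url = daemon_url
        self.calls = []   # stand-in for wire traffic to the daemon

    def claim(self, user_id):
        """Claim (or locate) the caller's substrate slice."""
        self.calls.append(("claim", user_id))
        return {"user": user_id, "slice": f"slice-{user_id}"}

    def transferCacheTo(self, peer_slice, answer):
        """Hand a signed cached answer to a peer slice."""
        self.calls.append(("transferCacheTo", peer_slice))
        return {"transferred": answer, "to": peer_slice}

atlas = AtlasClient("http://localhost:8430")   # assumed daemon address
alice = atlas.claim("alice")
receipt = atlas.transferCacheTo("slice-bob", "cached cardiology answer")
```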
Shape 02

Atlas per-seat

For SaaS, enterprise, and regulated organizations issuing slices to their employees / customers / members. Each end-user gets a substrate slice; you pay per active slice per month.

Buyer: CTO / VP Engineering at a multi-tenant SaaS, hospital IT, law firm IT, financial-services platform team
$12–48 / slice / month
Volume pricing at >1K slices · pre-built vertical role bundles

How Atlas composes

Atlas isn't a replacement for any other Validiti SKU. It composes with them. A customer running Accelerate + Atlas + EST gets cost reduction + per-user reasoning + privacy-preserving signal — three independent value props on one substrate.

Atlas + the Validiti stack

  • Atlas + Accelerate Flat: per-user RAG with cache-hit cost reduction. The cache is per-user; substrate slices warm up over time. Customer pays less per query AND gets per-user reasoning.
  • Atlas + EST: per-user signal aggregations flow upstream (with the user's opt-in) for LLM-provider feedback loops. The user pays in privacy-preserving signal contribution; the customer rebates against Atlas seat costs.
  • Atlas + Knowledge: the curated brain DBs (medical, legal, scientific, history) become per-user-permission-able. Each user's slice has their authorized brains; cross-vertical leakage impossible.
  • Atlas + Provenance: per-user verdict policy. Medical users get VERIFIED-only. Research users get PARTIAL-flagged. Per-role enforcement at substrate level.
  • Atlas + Drift: per-user subscription to regulatory channels. Different users in the same customer environment can subscribe to different rule sets relevant to their role.
  • Atlas + Audit: per-user audit chains compose with the customer-level chain. End-users can verify their own history; admins verify across users; both are tamper-evident.
  • Atlas + Pacta: all per-user transmissions Pacta-signed. Per-user audit replay over the wire is identical to in-substrate verification.
  • Atlas + Titus: per-user runtime defense. Anomalous behavior in any one user's substrate slice triggers Titus on that slice without affecting others.

None of these compositions is automatic. Customer admins configure each cross-SKU integration explicitly. Atlas's value is highest when paired with at least one corpus product (Knowledge, Provenance, or customer-curated bundles).

What you get on day one

Every install, every SKU shape, every customer.

The short list

  • Each user gets their own private slice. Mathematical isolation. Not row-level — cryptographic.
  • Every interaction lands in their audit trail. Signed, timestamped, hash-chained. Verifiable under your own key, not Validiti's, not the LLM provider's.
  • Privacy is enforceable, not promised. K-anonymity, noise, and per-user budget caps ship in code — auditors can verify them, not just read about them.
  • Rebates are signed and accountable. Walk the audit trail, get the same number every time. CFO and provider BD agree.
  • You can prove what the LLM said. When the provider signs its responses, those signatures save into your audit trail.
  • You can pool with peer customers. Two organizations can co-sign agreements to combine anonymous aggregates — better statistical privacy than either could get alone.
  • You can move your slices. Signed slice bundles export and import across installs. Your users' AI history is theirs to carry, not anyone's to lock up.
  • Use any LLM. Anthropic, OpenAI, Mistral, self-hosted — all supported. New ones drop in.
  • Production-grade storage. Backed by Validiti's VSS substrate — segment-level integrity checks, sealed binaries, retention windows. Concurrency-safe under load.
  • U.S.-headquartered. EU and Asia regional residency in a future phase.

Plus everything in Validiti Titus — runtime defense, network-scale protection. And the cross-cutting Validiti Core Features — Pacta-signed events, sealed binaries, fail-closed privacy enforcement. Always included; not a paid tier.

Want the technical details?

This page is the buyer view. Engineers, auditors, and integrators will want the API guide — protocol types, audit-event ledger, primitive signatures, verifier walk-throughs, settlement math, federation flow.

→ Atlas API guide · → Talk to us

Built-in guarantees

Every Validiti SKU inherits the same Safe · Fast · Smart guarantees from the shared substrate — encryption, tamper-evident history, runtime defense, predictable performance. Same code, same proof, same floor on every install.

See the core features →