A per-user partition over the VSS-backed Accelerate cache. Every interaction signed by Pacta. Every answer carries a verifiable receipt the user can hand to a regulator. Sealed, queryable, drift-maintained.
We built the category. Nobody else has the substrate to imitate it.
Most AI is one giant pool. Everyone shares the same model, the same memory, the same logs. Atlas gives each user their own private corner — signed, sealed, and provable.
Alice is a cardiologist. Bob is an orthopedic surgeon. They both work at the same hospital, both use the same AI tool. Their conversations never cross. Not "we promise" — mathematically can't.
When the AI answers a question, that answer gets a signed timestamp, a verifiable source trail, and a permanent slot in that user's audit history. Six months later in a malpractice review or a SOX audit: receipts on demand.
If 50 users ask the same question, the AI answers once, then hands the signed answer to the others — instantly, with no extra LLM bill. The substrate becomes the answer.
Anthropic doesn't hold your audit logs. OpenAI doesn't hold your audit logs. You hold your audit logs, signed under your own master key. Anyone can verify them; nobody can rewrite them.
Atlas runs inside Validiti Accelerate's cache — the layer that already sits between your users and the LLM. Adding Atlas turns that cache from an anonymous shared pool into a per-user verifiable record.
What happens when Alice the cardiologist asks Atlas a question.
Ten things, told plainly. Three a slice does on its own; seven it does with other slices, with the LLM provider, or with other Validiti products. Every interaction is signed; every signed interaction lands in the user's audit trail.
When the user asks a question, the slice checks five ways the answer might already exist, among them recent phrasing, related topics, and semantic neighborhoods. The best match goes to the LLM as context.
Example: Alice asks "drug interactions for warfarin?" The slice instantly surfaces her last warfarin question, hospital protocol updates, and FAERS safety signals.
While the LLM is writing, the slice checks each sentence against verified sources. If the model wanders off, the slice nudges it back before the answer reaches the user.
Example: The LLM starts inventing a dosage. The slice spots no matching source, injects the correct dosage from FAERS, and the user never sees the made-up version.
Each piece of information used to answer gets a label: verified, partial, or no source. A clinician's slice can be set to show only fully verified info; a researcher's can see partial.
Example: A medical slice never serves an unsourced answer. A research slice gets a "partial source" flag instead of an outright refusal.
If one slice already has a high-quality answer, it can pass that signed answer to another slice — instantly, without re-querying the LLM. Both sides record the handoff.
Example: 50 doctors ask the same drug-interaction question. The first costs an LLM call. The other 49 get the signed answer instantly.
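A minimal sketch of what a signed handoff could look like, using an HMAC as a stand-in for a Pacta signature (the real product presumably signs asymmetrically under the user's master key; the event name and receipt shape here are hypothetical):

```python
import hashlib
import hmac
import json

def sign(key: bytes, payload: dict) -> str:
    """HMAC stand-in for a Pacta signature (illustrative only)."""
    blob = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, blob, hashlib.sha256).hexdigest()

def transfer_cache(entry: dict, sender_key: bytes,
                   sender_log: list, receiver_log: list) -> dict:
    """Hand a cached answer to a peer slice: sign it, and record the
    handoff in BOTH audit trails. No LLM call is involved."""
    receipt = {"event": "cache_transfer", "entry": entry}
    receipt["sig"] = sign(sender_key, receipt)   # sign the payload, then attach
    sender_log.append(receipt)
    receiver_log.append(receipt)
    return receipt
```

Repeating `transfer_cache` 49 times is the "answer once, hand it off" pattern from the example: one LLM bill, fifty signed answers.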
If one slice discovers a fact has gone stale (a drug recall, a retracted citation), it can push that signal to subscriber slices. Their caches invalidate automatically.
Example: The hospital's pharmacy slice notes a new black-box warning. Every clinician's slice that subscribed updates within seconds.
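The publish/subscribe mechanics can be sketched in a few lines; the class and method names here are illustrative, not the Atlas API:

```python
class Slice:
    """Toy slice holding a topic-keyed answer cache."""
    def __init__(self):
        self.cache = {}                    # topic -> cached answer

    def invalidate(self, topic: str):
        self.cache.pop(topic, None)        # stale entries vanish automatically

class DriftBus:
    """Illustrative publish/subscribe sketch: when one slice marks a fact
    stale, every subscriber slice drops its cached answers on that topic."""
    def __init__(self):
        self.subscribers = []              # slices listening for drift signals

    def subscribe(self, slice_: Slice):
        self.subscribers.append(slice_)

    def publish_drift(self, topic: str):
        for s in self.subscribers:
            s.invalidate(topic)
```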
If a slice's policy is too strict to answer a question, it can hand the question to a different slice that's allowed. Both sides record the handoff.
Example: A clinician's strict slice can't show a research preprint. It hands the question to that user's research slice, which can.
If users opt in, slices send aggregate trend data (never their actual queries) to the LLM provider. Privacy is preserved with k-anonymity and noise. Off by default.
Example: Anthropic learns "this hospital network is asking about drug X 40% more this week" — never the patient names, never the queries.
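A toy version of the privacy gate, assuming the k = 50 cohort minimum stated later in this page and standard Laplace noise (the actual noise distribution and scale are product settings this sketch does not know):

```python
import math
import random

K_THRESHOLD = 50   # assumed minimum cohort size, matching the doc's "at least 50"

def laplace_noise(scale: float, rng: random.Random) -> float:
    """Sample Laplace(0, scale) by inverse-CDF, the standard DP noise."""
    u = rng.random() - 0.5
    return -scale * math.copysign(math.log(1 - 2 * abs(u)), u)

def aggregate_signal(topic_user_counts: dict[str, int],
                     rng: random.Random, scale: float = 2.0) -> dict[str, float]:
    """Emit only topics asked by at least K_THRESHOLD distinct users, with
    Laplace noise added so no individual contribution can be traced. Raw
    queries never leave the slice: only noisy topic-level counts do."""
    return {t: c + laplace_noise(scale, rng)
            for t, c in topic_user_counts.items() if c >= K_THRESHOLD}
```

The provider sees "drug X is trending," never who asked or what they typed.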
When the LLM provider signs its response, Atlas saves the signature into the user's audit trail. Months later, you can prove exactly what the provider said.
Example: Malpractice review six months later. The clinician can prove Anthropic answered "X" at 2:14 PM on Tuesday — cryptographically.
The LLM provider can push corrections downstream — "this fact changed, drop your cached answer." Slices subscribed to the provider update automatically.
Example: Anthropic identifies a known model error. Subscribed slices invalidate affected answers and refuse stale results until refreshed.
Other Validiti products (ShiftCAPTCHA, DMS, Pacta) can push signed events into a slice's audit trail — "this user passed a CAPTCHA," "this source was reverified," "this event was Pacta-signed."
Example: Your DMS instance flags a citation as retracted. Every slice that depended on it updates, automatically.
Most "privacy-preserving AI" pitches stop at policy slides. Atlas's privacy guarantees are shipped in code, not promised in docs. Four pieces.
When upstream signals leave a hospital, they're aggregated with at least 50 other hospitals first. Statistical noise is added so no single hospital's contribution can be traced. Verifiable from the receipt — no trust required.
"Each user can send at most 100 signals per day, 10,000 in their lifetime." Not a guideline — a hard cap, enforced by the substrate. Changing the cap is itself an audit event.
Every rebate the LLM provider pays is computed by walking the audit trail and signed by your master key. The CFO and the provider's BD look at the same number, signed by you.
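In spirit, the rebate computation is a fold over the audit trail; event names and the flat per-signal rate below are assumptions for illustration, and the real statement would also carry the master-key signature:

```python
def rebate_statement(audit_trail: list[dict], rate_cents: int) -> dict:
    """Compute the provider rebate by walking the audit trail: count the
    upstream-signal receipts, multiply by the agreed per-signal rate.
    Both parties recompute the same number from the same trail."""
    signals = [e for e in audit_trail if e["event"] == "signal_sent"]
    return {"signals": len(signals), "owed_cents": len(signals) * rate_cents}
```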
Every interaction — query, answer, transfer, signal — lands a receipt in the user's audit trail. An auditor can walk it forwards or backwards and get the same answer either way.
455 tests, 38 files, every claim above shipped in production code. The technical implementation lives in the API guide.
When slices can negotiate, real markets emerge. Atlas ships six of them: one to sell answers, one to sell signal, one to gate trades on policy, one to discover peers, one to pool data with peers, and one that lets CPU-only users sell spare compute to GPU-heavy users.
If your slice has answers other people will pay for, post a price. Buyers post a budget. The market matches them, with both sides on record.
Example: Hospital A has cached rare-disease answers. Hospital B asks the same questions. Atlas matches A's offer to B's bid; B gets a faster signed answer, A gets paid.
If you opt in to send aggregate trends upstream, multiple LLM providers can compete for it. Atlas routes each signal to the highest payer in real time.
Example: Anthropic offers 5¢ per signal, OpenAI offers 7¢. Atlas routes your signal to OpenAI this week. Next week Anthropic's offer is higher; routing flips automatically.
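Routing to the highest payer is a per-signal auction resolved in one line; the logged event shape here is hypothetical:

```python
def route_signal(bids: dict[str, float], audit: list) -> str:
    """Pick the highest-paying provider for the next opted-in signal.
    Re-evaluated per signal, so routing flips as soon as bids change."""
    winner = max(bids, key=bids.get)
    audit.append({"event": "signal_routed", "to": winner, "rate": bids[winner]})
    return winner
```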
Set rules that every trade must satisfy — "the other side must be at least as careful as we are." Atlas checks the rules at the moment of the trade.
Example: Your hospital won't share with peers whose retention is shorter than 7 years. Atlas refuses any trade that fails the test — automatically, no manual review.
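The policy gate is a predicate evaluated at trade time; the two fields below (retention and k-threshold) are example policy dimensions, not Atlas's schema:

```python
from dataclasses import dataclass

@dataclass
class PeerPolicy:
    retention_years: int
    k_threshold: int

def trade_allowed(ours: PeerPolicy, theirs: PeerPolicy, audit: list) -> bool:
    """Check policy predicates at the moment of the trade: the counterparty
    must be at least as careful as we are. Refusals are recorded too."""
    ok = (theirs.retention_years >= ours.retention_years
          and theirs.k_threshold >= ours.k_threshold)
    audit.append({"event": "trade_check", "allowed": ok})
    return ok
```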
Slices can advertise what they have without revealing who they are. Buyers query "anyone offer cache for this topic?" and get a list of trustworthy candidates.
Example: A research slice asks "anyone with cached cardiology answers?" The catalog returns three signed advertisers; the slice picks the best price + quality.
Two organizations can co-sign an agreement to combine their slices' anonymous aggregates — getting better statistical privacy than either could alone.
Example: Two regional hospitals each have 30 patients with a rare condition. Pooled, they have 60 — enough to safely contribute aggregate signal. Neither could do it alone.
The flip side of selling cached answers. If your slice has spare CPU, post an offer to do substrate housekeeping — verifying audit chains, batching anonymous signals, validating cache freshness. GPU-heavy users post the work; you do it; you get paid.
Example: A small clinic has spare CPU overnight. A research hospital posts "verify these 10K audit chains, $0.50 each." The clinic's slice picks up the work and gets paid — the research hospital saves CPU for GPU work.
The honest read: most of these capabilities don't exist anywhere else. There's no equivalent product because there's no equivalent substrate. Here's the chart.
| Capability | Validiti Atlas | RAG frameworks (LangChain · LlamaIndex) | Vector DBs (Pinecone · Weaviate) | Provider memory (Anthropic · OpenAI) | Vertical SaaS (Glean · Harvey) | AI governance (Credo · Holistic) |
|---|---|---|---|---|---|---|
| Each user gets their own private slice | ✓ | ✗ | ◐ namespaces only | ✗ | ✗ | ✗ |
| Customer holds the audit log, not the vendor | ✓ | ✗ | ✗ | ✗ | ◐ vendor-held | ◐ vendor-held |
| Hand a cached answer to a peer (no LLM re-call) | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Push corrections to subscribed peers | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Pass a query to a peer with different policy | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Send anonymous trends upstream (k-anon + noise shipped) | ✓ | ✗ | ✗ | ✗ | ✗ | ◐ policy only |
| Capture and store the LLM's signed response | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Receive provider corrections downstream | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Connect to other security/audit products | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Sell your cached answers to peers | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Let LLM providers compete for your signal | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Trade only with peers meeting your policy bar | ✓ | ✗ | ✗ | ✗ | ✗ | ◐ policy mgmt |
| Find peers with anonymous capability discovery | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Pool data with other customers for stronger anonymity | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Sell CPU cycles for substrate housekeeping work | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Signed, accountable rebate statements | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
| Use any LLM provider | ✓ | ✓ | ✓ | ✗ single-vendor | ◐ internal only | ✓ |
| Storage with built-in integrity checks | ✓ | ✗ | ✗ | ✗ | ✗ | ✗ |
Verticals aren't the product — the substrate is. But the substrate has obvious surface fits, and Atlas ships pre-built role bundles for the four most-asked-about regulatory regimes (HIPAA / SOX / ABA Model Rules / GDPR). Each is a thin wrapper over the same primitives above; verticals are how you land design partners, not what Atlas is.
A hospital's LLM-powered tool. Each doctor's queries hit their substrate slice — their patient panel, their specialty corpus, their hospital's protocols. Cross-patient leakage prevented by Brain Key isolation. Each query HIPAA-audited per doctor.
A law firm's research LLM. Each attorney's substrate carries their cases, their privileged matters, their authorized corpora. Cross-matter contamination structurally impossible. E-discovery defensible: every attorney's research trail is signed and durable.
SaaS company offering an LLM-powered tool to N enterprise customers. Each customer's substrate is isolated, brandable, configurable. Each customer's data never trains anyone else's model. Per-customer audit and provenance compliance.
Investment firm's research LLM. Each analyst's slice has their coverage universe, their proprietary models, their authorized data. Insider-information firewalls enforced at substrate level. Every recommendation auditable to the source data.
An LLM provider plugs Atlas in as the per-user reasoning backbone. Every API user gets durable, signed, provenance-graded memory. Branded as "Anthropic Memory" or similar — Atlas the engine, hyperscaler the surface. Massive scale.
Each cleared researcher's substrate slice has their authorized clearance level baked into the corpus ACL. Cross-clearance leakage prevented at the substrate, not at the prompt layer. Audit-chain admissible.
Same substrate, same guarantees, three shapes. Pick the one that matches your buyer profile.
For developers building on the substrate. The caller-facing SDK ships in Python and JavaScript; the negotiation primitives are exposed as claim(), transferCacheTo(), publishDrift(), requestHandoff(). Drop the SDK into your stack, point it at an Atlas daemon, build whatever the substrate makes possible.
For SaaS, enterprise, and regulated organizations issuing slices to their employees / customers / members. Each end-user gets a substrate slice; you pay per active slice per month.
For LLM providers wiring Atlas's negotiation primitives into their own surface. Branded by the provider; co-developed compliance posture; scales with user count. The category-defining sibling product to Accelerate.
Atlas isn't a replacement for any other Validiti SKU. It composes with them. A customer running Accelerate + Atlas + EST gets cost reduction + per-user reasoning + privacy-preserving signal — three independent value props on one substrate.
Clinician slices serve VERIFIED-only answers; research users get PARTIAL-flagged ones, enforced per role at the substrate level. None of these compositions is automatic: customer admins configure each cross-SKU integration explicitly. Atlas's value is highest when paired with at least one corpus product (Knowledge, Provenance, or customer-curated bundles).
Every install, every SKU shape, every customer.
Plus everything in Validiti Titus — runtime defense, network-scale protection. And the cross-cutting Validiti Core Features — Pacta-signed events, sealed binaries, fail-closed privacy enforcement. Always included; not a paid tier.
This page is the buyer view. Engineers, auditors, and integrators will want the API guide — protocol types, audit-event ledger, primitive signatures, verifier walk-throughs, settlement math, federation flow.
Every Validiti SKU inherits the same Safe · Fast · Smart guarantees from the shared substrate — encryption, tamper-evident history, runtime defense, predictable performance. Same code, same proof, same floor on every install.