CX — Telemetry, by Example

Mar 18, 2026 · 11:51 AM · 2 min read

Most AI products say they collect "limited telemetry." Very few show you what that actually means. We think they should.

At TitanOcta, telemetry is not a vague promise buried in a footer. It is a narrow, explicit operational layer designed to keep the system healthy without turning the product into a surveillance machine. And if we are going to ask anyone to trust that boundary, the first place it should be visible is on our own stack.

The rule

Our telemetry doctrine is simple. We collect exactly four categories:

  • hardware specs
  • model used
  • performance and benchmark results
  • deployment and crash stats

That's it. We do not collect prompts, session content, API keys, or memory data. That line matters. A lot.

There is a huge difference between knowing a node was under load and knowing what someone asked their system. There is a huge difference between measuring model performance and ingesting private content. We care about the first. We reject the second.
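The four-category boundary above can be sketched as a simple allowlist validator. All field names here are illustrative assumptions, not TitanOcta's actual schema; the point is only that anything outside the four categories is rejected, not silently dropped:

```python
# Hypothetical allowlist enforcing the four telemetry categories.
# Field names are illustrative; TitanOcta's real schema is not shown here.
ALLOWED_FIELDS = {
    "hardware": {"cpu_model", "ram_gb", "disk_gb", "gpu_model"},
    "model": {"model_name", "model_version"},
    "performance": {"latency_ms", "throughput_rps", "benchmark_score"},
    "deployment": {"deploy_id", "crash_count", "uptime_s"},
}

# Content the doctrine explicitly rejects.
FORBIDDEN_FIELDS = {"prompt", "session_content", "api_key", "memory_data"}

def validate_event(event: dict) -> dict:
    """Raise if a telemetry event strays outside the four allowed categories."""
    allowed = set().union(*ALLOWED_FIELDS.values())
    for key in event:
        if key in FORBIDDEN_FIELDS:
            raise ValueError(f"forbidden field in telemetry: {key!r}")
        if key not in allowed:
            raise ValueError(f"unknown field, not in allowlist: {key!r}")
    return event
```

An event like `{"gpu_model": "RTX 4090", "latency_ms": 42, "crash_count": 0}` passes through unchanged; one containing a `prompt` key raises before it ever leaves the node. Failing loudly, rather than filtering quietly, is what keeps the boundary auditable.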

Why telemetry exists at all

Telemetry has one job: help us understand how the system is performing in the real world. If a node is overheating, a model is stalling, a deployment is failing, or a crash rate spikes, we need to see that. That is how you keep a distributed AI system stable.

But operational visibility is not a blank check. Telemetry should be constrained by purpose. It should answer operational questions, not wander into user content because it is convenient. That is where a lot of products lose the plot.

Why we are showing it publicly

Because transparency should be demonstrated, not claimed.

We are building TitanOcta as a real system running on real hardware, across real nodes, under real load. So instead of hiding behind polished language, we want to show the shape of the telemetry we actually use on ourselves first.

That means public examples of things like fleet uptime, CPU/RAM/disk/GPU utilization, model runtime mix, benchmark and latency patterns, deployment events, and crash counts. Not raw internals. Not private logs. Not user content. Just the operational layer we say we collect, visualized clearly enough that anyone can understand the boundary.
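The step from per-node events to a public page is aggregation: the page shows fleet-level summaries, never raw rows. A minimal sketch, assuming hypothetical event fields and metric names:

```python
from collections import Counter

# Hypothetical per-node events; the public page would show only aggregates.
events = [
    {"node": "a", "model_name": "m-small", "latency_ms": 40, "crash_count": 0, "uptime_s": 86_000},
    {"node": "b", "model_name": "m-large", "latency_ms": 95, "crash_count": 1, "uptime_s": 80_000},
    {"node": "c", "model_name": "m-small", "latency_ms": 55, "crash_count": 0, "uptime_s": 86_400},
]

DAY_S = 86_400  # aggregation window: one day

def fleet_summary(events: list[dict]) -> dict:
    """Collapse node-level events into the fleet numbers a trust page could show."""
    return {
        "fleet_uptime_pct": round(
            100 * sum(e["uptime_s"] for e in events) / (DAY_S * len(events)), 1
        ),
        "model_mix": dict(Counter(e["model_name"] for e in events)),
        "max_latency_ms": max(e["latency_ms"] for e in events),
        "crash_total": sum(e["crash_count"] for e in events),
    }
```

Nothing in the output can be traced back to a request, because nothing about a request was ever in the input.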

What the public telemetry page is meant to be

It is not a raw admin console. It is not a dump of our internal network. It is a curated trust surface.

A place where you can see, in plain language and live examples, the exact kind of telemetry we collect and why. Hardware. Models. Performance. Deployment health. Crash visibility. Nothing more.

The bigger point

We think the future of AI products needs better trust surfaces. Not more fine print. Not more vague "we may collect certain information" language. Not more black boxes asking for faith.

Clear boundaries. Visible examples. Operational honesty. If a company says it only collects limited telemetry, it should be able to show what that looks like in practice without exposing anything private.

The doctrine: if we collect it, we should be willing to show the shape of it on ourselves first.

That is part of how TitanOcta is being built: not just as software that works, but as software that can be understood.

View the live telemetry page →