Newsletter: What Happens When AI Outgrows Its Observability Stack
As agents scale, so does the chaos underneath. We dissect the rising tension between data and visibility, why business teams are left in the dark, and where the biggest names in AI are betting.
Brixo Latest
AI agents generate massive amounts of data every time they run — commonly referred to as telemetry.
This includes things like traces, tool calls, retries, and token usage.
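To make that concrete, here’s a rough sketch of what a single agent run’s telemetry might look like. The field names and values are illustrative, not any particular vendor’s schema:

```python
# Illustrative only: one agent run's telemetry, with made-up field names.
run = {
    "trace_id": "run_01H...",          # unique ID for this agent run
    "model": "gpt-4o",                 # which model served the run
    "latency_ms": 8421,                # end-to-end wall-clock time
    "retries": 1,                      # how many steps had to be retried
    "tool_calls": [
        {"name": "search_orders", "status": "ok", "duration_ms": 310},
        {"name": "issue_refund", "status": "error", "duration_ms": 95},
        {"name": "issue_refund", "status": "ok", "duration_ms": 120},  # the retry
    ],
    "tokens": {"prompt": 2140, "completion": 380},
}
```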
The challenge? Telemetry is messy.
As companies scale their agent footprint, the mess grows exponentially.
Most observability tools today focus on helping engineers debug errors.
But there’s one critical stakeholder left behind: the non-technical side of the business.
Telemetry is technical and complex, so engineering teams are forced to translate it for their business counterparts.
You can probably imagine the tension:
Engineering wants to focus on building — not creating ad hoc reports.
Meanwhile, the business team (product, finance, analysts, execs) is stuck in the dark.
They can’t troubleshoot customer issues.
Finance can’t predict cost patterns.
Product teams are stuck in Jira and PRDs, playing telephone across functions.
It’s like the early days of websites — when only engineering could make changes, and marketing had to open a ticket and wait.
That’s why we built Brixo.
We turn raw LLM telemetry into actionable metrics, reports, and workflows — no translation required.
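As a toy illustration of the kind of translation involved (a sketch, not Brixo’s actual pipeline), rolling records like the one above into a number finance actually cares about might look like this:

```python
# Toy aggregation: turn raw run telemetry into a daily cost estimate.
# Prices per 1K tokens are placeholders; real rates vary by model and vendor.
PRICE_PER_1K = {"prompt": 0.005, "completion": 0.015}

def run_cost(run: dict) -> float:
    """Estimated dollar cost of a single agent run from its token counts."""
    tokens = run["tokens"]
    return (tokens["prompt"] / 1000) * PRICE_PER_1K["prompt"] + \
           (tokens["completion"] / 1000) * PRICE_PER_1K["completion"]

def daily_cost(runs: list[dict]) -> float:
    """Sum per-run costs into the daily figure an exec would ask for."""
    return sum(run_cost(r) for r in runs)

# With the sample run above: run_cost(run) ≈ $0.016
```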
Our mission is simple:
Help business teams understand their agents — so they can deliver better customer experiences.
Want to see how it works? Drop us a note @ contact@brixo.com
What stood out
The big dogs are getting together to codify data best practices. Not surprising — first-party data is still the biggest bottleneck to enterprise AI success.
The $1 trillion AI problem: Why Snowflake, Tableau and BlackRock are giving away their data secrets
“We’re not in the business of locking data in, we’re in the business of making it accessible and valuable,” Christian Kleinerman, Snowflake’s executive vice president of product, told VentureBeat in an exclusive interview. “The biggest barrier our customers face when it comes to ROI from AI isn’t a competitor — it’s data fragmentation.”
What we read/listened to/watched
A security perspective on the explosion of agents inside enterprises. You can imagine SecOps and IT pulling their hair out.
Companies are sleepwalking into agentic AI sprawl
The big news from the AI giant world is the new OpenAI & NVIDIA partnership. The next few links are worth a quick read and a listen. The NVIDIA episode on BG2 gives insight into Jensen’s perspective on the future.
Sam Altman’s Blog: Abundant Intelligence
OpenAI and NVIDIA announce strategic partnership to deploy 10 gigawatts of NVIDIA systems
Final Brick
If you’ve been keeping tabs on Sam Altman’s latest moonshots, he’s been talking about producing 1 GW per week of AI infrastructure.
Our vision is simple: we want to create a factory that can produce a gigawatt of new AI infrastructure every week. - Sam Altman
So, how much is 1 GW, really?
1 gigawatt (GW) is 1 billion watts of power.
That’s roughly the output of a nuclear power plant.
Sustained, it’s enough electricity to power about 750,000 homes; a single day of it could keep San Francisco running for roughly 4 days.
So when Sam says “1 GW per week,” he’s talking about building enough power for a city—every 7 days.
Here’s the math (with a quick sanity check in code below):
1 GW for 1 day = 24 GWh → powers SF for ~4 days
7 GW (one week of builds) running for a day = 168 GWh → powers SF for about a month
10 GW total running for a day = 240 GWh → powers SF for ~40 days, well over a month
Or put differently: at 750,000 homes per GW, 10 GW is enough to power roughly 7.5 million homes
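If you want to check the arithmetic yourself, here it is as a short script. The ~6 GWh/day figure for San Francisco and the 750,000-homes-per-GW rule of thumb are the working assumptions behind the numbers above, not precise utility data:

```python
# Back-of-the-envelope check of the gigawatt math above.
SF_DAILY_GWH = 6.0        # assumed San Francisco consumption, GWh per day
HOMES_PER_GW = 750_000    # rough rule of thumb used above

for gw in (1, 7, 10):
    energy_gwh = gw * 24                   # GWh from running `gw` gigawatts for 24 hours
    sf_days = energy_gwh / SF_DAILY_GWH    # days of SF demand that energy covers
    print(f"{gw} GW for a day = {energy_gwh:.0f} GWh ≈ {sf_days:.0f} days of SF demand; "
          f"~{gw * HOMES_PER_GW:,} homes powered continuously")
```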
Except none of that power is going to homes. It’s going to data centers running AI models 24/7.
Sam rationalizes this massive goal:
Maybe with 10 gigawatts of compute, AI can figure out how to cure cancer. Or with 10 gigawatts of compute, AI can figure out how to provide customized tutoring to every student on earth. If we are limited by compute, we’ll have to choose which one to prioritize; no one wants to make that choice, so let’s go build.
Impressive mission.