Brixo Newsletter: When Your Chatbot Costs More Than Your Car
Why most teams are stuck between "we should use AI" and "holy hell, what's this bill?" + some exciting news about what's coming next for Brixo :)
Thank you so much for taking the time to read the second edition of the Brixo Weekly Newsletter. We hope there are some tidbits that get your brain turning like they did for us.
Brixo Update
Next week we're sharing details on a product release: the first tool in our optimization toolkit, plus some exciting new visualizations for monitoring your LLM performance - so stay tuned.
A good preview is Mike’s latest post:
Why LLM Routers Are the "Clay" of AI
If you're in sales or marketing, you probably know Clay. It's the platform that turned the nightmare of juggling multiple data providers into something actually manageable. Now imagine that same concept, but for AI models instead of data sources.
BTW - we now have a Discord community where we can talk "shop" directly and share more often. Join now to earn the irrevocable clout of being an early member.
What stood out
In our conversations this week, we noticed a significant gap between companies that plan to bring LLMs into their product and those that are already shipping LLMs in production. The most common questions from each group:
We plan to use LLMs:
What is the right model for our use cases? What if a new one comes out tomorrow? Do we need to rebuild?
What are the surprise expenses going to be?
How will we know it’s working and worth the investment?
We're actively building with LLMs:
Which features are driving the most usage and creating the most cost for us?
Are the customers happy with the outputs?
How do we go from “Wild West” to predictability in our usage and costs?
If you're asking yourself these questions, let us know. We believe Brixo can answer them.
What we read/watched/listened to
These two posts are making the rounds, clearly signaling skepticism about the hype:
This is the critical detail that could unravel the AI trade: Nobody is paying for it.
MIT report: 95% of generative AI pilots at companies are failing
—
Well-known VCs Bill Gurley & Brad Gerstner on the compute arms race. The BG2 pod is a great one to follow. Listen Now
—
Replit’s Matt Palmer on their speedrun to $100M ARR Listen Now
Final Brick
The average U.S. home uses about 10,500 kWh per year, roughly 28 kWh per day.
By comparison, running 700 million GPT-4o queries per day burns through electricity on par with 35,000 homes annually.
And GPT-5 takes it further. At ~18 Wh per query, more than 50 times GPT-4o's appetite, 2.5 billion daily queries would consume about 45,000 MWh each day, enough to power 1.5 million homes.
The ceiling on LLM growth isn't code, it's the grid.*
*At least here in the US.