Newsletter: Experience Analytics & Silicon Valley Drama
Get out your popcorn
Welcome to the latest edition of the Masonry Newsletter!
Brixo Latest
AI products have no predefined paths. No buttons to tag. No fixed workflows to map.
Customers arrive with intent. They translate that intent into a prompt. That prompt launches a journey, and every customer journey is different.
One customer writes a detailed request and gets an answer in two turns. Another writes something vague and struggles through twenty prompts trying to articulate what they mean.
Today, there’s no way to measure that customer experience and identify ways to improve it.
Typical product analytics tools can’t see the conversation or the context, which leaves product builders blind or forces them to dig through technical trace logs.
We see this as an inflection point, similar to when event analytics disrupted web analytics for applications.
The new AI modality requires a new toolbox: Experience Analytics.
What Stood Out
There’s a lot of FUD about the future of software moats and the ease of copying and vibecoding your own solutions.
That might work for startups, but the maintenance burden at scale is scary. The build-vs-buy opportunity-cost debate will continue.
What We Read/Listened/Watched
Langfuse Acquired by ClickHouse
Langfuse, an open-source LLM engineering platform that helps teams build, monitor, debug, and improve production LLM applications, was acquired by ClickHouse. We read it as an early consolidation play to bring a full-stack experience to AI builders.
Demystifying evals for AI agents
Anthropic publishes great educational content, and this is another strong example. We believe evals are critical for ensuring the engine works before the car leaves the production lot, but once it’s out on the road, teams need experience analytics.
In software, the code documents the app. In AI, the traces do.
Harrison Chase, Co-founder & CEO of LangChain, shares his point of view on the critical differences between traditional software and AI agents, which we believe captures the purpose of experience analytics:
In traditional software, product analytics is separate from debugging. Mixpanel tells you what users clicked. Your error logs tell you what broke. They're different tools for different questions.
In AI agents, these merge. You can't understand user behavior without understanding agent behavior. When you see "30% of users are frustrated" in your analytics, you need to open traces to see what the agent did wrong. When you see "users asking for data analysis features", you need to look at traces to see which tools the agent is already choosing and what's working. The user experience is the agent's decisions, and those decisions are documented in traces - so product analytics has to be built on traces.
[Source: Harrison Chase article on X]

Final Brick
This past week was a firehose of Silicon Valley drama, with OpenAI squarely in the middle. It reminded me of the “Valleywag” heyday. If you remember, drop a comment.
The Thinking Machines drama appears to be a cautionary tale about taking on a $2B seed round.
The main course is the Musk v OpenAI lawsuit, which has all the makings of a future Netflix series.
John Coogan and Jordy Hayes do a great breakdown in Friday’s TBPN episode, making the case for each side. I recommend it if you’d like to learn the general field of play for this case.
Naturally, with Elon and OpenAI in battle, we’re seeing both sides take to public forums to set the record straight while the case is ongoing.




