
It's amazing that this content is free. Thank you both so much. (And everybody else who helps produce these podcasts.)

I believe that the outcomes-based pricing Sierra is using implicitly acknowledges something most SaaS models are still ignoring: coordination between a company and its customer carries a real cost floor, and for incentives to be aligned the pricing needs to track the actual work of reconciliation, which is not a function of seat count! The insistence on vertical specialization also resonates with me for reasons beyond domain expertise. A system that tries to serve every vertical identically gets pulled toward total consensus, which then overwrites the information that made each vertical distinctive. There is most likely a productive regime that sits in between: enough coherence to coordinate, enough divergence to carry meaning. As a gut check, it would be interesting to know whether Bret has generally found that Sierra's vertical agents develop qualitatively different failure modes across industries, or whether breakdowns still follow common patterns regardless of domain.
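To make the incentive difference concrete, here's a minimal sketch of seat-based versus outcomes-based billing. All numbers and function names are hypothetical illustrations, not Sierra's actual pricing:

```python
# Illustrative only: hypothetical prices, not any vendor's real model.

def seat_based_cost(seats: int, price_per_seat: float) -> float:
    """Cost is a function of headcount, regardless of work actually done."""
    return seats * price_per_seat

def outcome_based_cost(resolutions: int, price_per_resolution: float) -> float:
    """Cost tracks the actual work of reconciliation: issues resolved."""
    return resolutions * price_per_resolution

# A 10-seat team pays the same under seat pricing whether it resolves
# 5,000 issues or 500; outcomes pricing makes the cost track the work.
print(seat_based_cost(10, 1000.0))    # 10000.0 in both scenarios
print(outcome_based_cost(5000, 2.0))  # 10000.0
print(outcome_based_cost(500, 2.0))   # 1000.0
```

The point of the contrast: under seat pricing the vendor's revenue is decoupled from resolution volume, while outcomes pricing aligns the vendor's incentive with the customer's actual reconciliation workload.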

I feel like Bret must have a visceral understanding of something I've been calling the "synchronization tax" — the cost of building a shared description of the problem and solution, which is asymmetric and scales non-linearly with the number of people involved. Basically, the grounding of Brooks's Law in thermodynamics and information theory. symmetrybroken.com/main…
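The non-linear scaling behind Brooks's Law can be made concrete with the standard pairwise-channel count; the "synchronization tax" framing is the comment's own, but the combinatorics below are the textbook version:

```python
def sync_channels(n: int) -> int:
    """Pairwise communication channels among n people: n(n-1)/2.
    This is the combinatorial core of Brooks's Law: each added person
    must build a shared description with everyone already present, so
    coordination overhead grows quadratically while hands grow linearly."""
    return n * (n - 1) // 2

# Doubling the team from 5 to 10 more than quadruples the channels.
print(sync_channels(5))   # 10
print(sync_channels(10))  # 45
```

This is only the symmetric lower bound; the comment's point is that the real tax is asymmetric (parties pay different amounts to model each other), which the simple channel count does not capture.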

One implication of these asymmetric dynamics, however, is that there's going to be some push and pull over who gets the accumulating context. If you own both, whether it's a flat directory of markdown or an MCP server that provides your model context is kind of a technical detail. Not so if you're negotiating with another company over access to that context. The natural tendency will be for whoever has accumulated the most relevant context to drag their counterparty over to their platform, but this isn't necessarily healthy for the ecosystem over the longer term.

But I feel some tension with Bret's framing of "scaling empathy" through AI agents. If empathy ≈ genuine mutual modeling between agents, then it's expensive precisely because it requires each party to maintain a representation of the other's perspective. What AI agents seem to scale is resolution *without* mutual human understanding — where there is already overlap in perspectives, basically search on steroids. The customer gets their problem solved; the company gets a resolved ticket. Neither needs to understand the other's full context. Whether this counts as "empathy" or as a more efficient form of coordination-without-consensus is worth considering. In my experience, there is a "synchronization tax" to scaling empathy (or trust, which is mutual empathy?) that isn't captured in the search costs alone.


Mar 10 at 3:39 PM