3 Comments
Shashwat

The uncomfortable pattern the data points to isn't a tools problem; it's a metrics hierarchy problem. As GTM tools matured and gave teams more granular visibility, teams optimized harder around the wrong variables. CAC dropped on the dashboard while unit economics quietly tanked. Win rates improved while deal velocity slowed. The tools didn't fail; teams just got better at moving the metrics that showed up in their dashboards. The 78-to-47-percent collapse isn't because GTM lost sophistication; it's because that sophistication was pointed at the wrong targets. Segment-level profitability coverage tells a completely different story than channel attribution ever could.

marcus cauchi

Reading this, it doesn’t feel like GTM suddenly forgot how to perform. It feels like it stopped being able to see clearly.

When a system can’t reliably tell signal from noise, activity can rise while effectiveness quietly falls, and nobody is obviously doing anything wrong.

At that point optimisation doesn't fix the problem; it speeds it up, because you're optimising against evidence that no longer lines up with outcomes.

I'm curious: are others feeling that gap between what the numbers say and what actually seems to be happening?

ayse guvencer

I can't help but wonder whether the old frameworks, funnels, and measurement rituals we followed almost religiously were always flawed, and we kept going because they suited a narrative of control, even when the evidence pointed the other way.

I suspect that is the case - it's just more apparent now that volatile external variables have shifted significantly, forcing us to change our logic drastically.