Evolving Beyond MAU: Rethinking the North Star at Buffer (Part 1 of 2)
- Simon Heaton
- Sep 21
- 7 min read

About this series
This is a two-part series on rethinking North Star metrics at Buffer. In Part 1 we unpack why Monthly Active Users (MAU) stopped serving us, and in Part 2 we share how we redefined our North Star around Weekly Active Users (WAU). Together, these posts explore how metrics shape strategy, retention, and long-term growth. I've written these with help from my colleague Julian Winternheimer, who was instrumental throughout this process.
At Buffer, we’ve always tried to anchor our product and growth strategy around delivering real value to our users. But not long ago, we hit a quiet turning point: our most important success metric, Monthly Active Users (MAU), was no longer telling us the full story.
What began as a reliable indicator of adoption had slowly drifted out of sync with how people were actually using the product. While MAU was holding steady (it was even growing), our core engagement loop, publishing content, was quietly slipping. The metric looked healthy on the surface, but in reality it was masking deeper issues with retention and user behaviour.
This realization prompted a bigger conversation within the team about how we define success, and, more importantly, how the metrics we choose shape what we build, how we grow, and what we celebrate.
This post is the first in a two-part series on rethinking North Star metrics, based on our journey to redefine MAU at Buffer.
In Part 1, I’ll unpack why MAU began to fail us and the hidden risks of overly broad definitions.
In Part 2, I’ll share the details around how we rebuilt our metric around behaviour and habit.
The role of North Star metrics
North Star metrics act as the compass for product, growth, and marketing teams.
When chosen well, they provide clarity and direction, helping teams align around what truly matters. A strong North Star connects directly to the core value your product delivers to users. It doesn’t just track progress; it shapes strategy, roadmaps, and day-to-day decisions.
This kind of alignment is critical. Teams need a shared understanding of what success looks like and which behaviours drive it. The right metric focuses attention, creates accountability, and helps everyone prioritize the work that moves the needle.
But when a North Star becomes disconnected from real user value, it can do the opposite: mislead, distract, or even incentivize the wrong things.
That’s what we found ourselves grappling with at Buffer.
Note: I won’t be going deep into the fundamentals of North Stars here, but if you’re interested in the broader framework, I highly recommend this article by Reforge that covers the topic in depth.
The problem with “active”
“Active Users” on a rolling timeframe is the go-to signal of product health for many SaaS teams. This was no different at Buffer, where we defined a Monthly Active User (MAU) as any user who completed at least one “Key Action” in our product over the past 30 days.
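To make that definition concrete, here’s a minimal sketch of how a rolling 30-day active-user count like this is typically computed. The events table, column names, and the list of key actions below are illustrative assumptions, not Buffer’s actual schema or action list.

```python
# Minimal sketch: rolling 30-day MAU under a broad "any key action" definition.
# The schema (user_id, action, timestamp) and KEY_ACTIONS set are hypothetical.
import pandas as pd

# Illustrative set of actions that count toward "active" under a composite definition.
KEY_ACTIONS = {"publish_post", "schedule_post", "view_analytics", "reply_comment", "add_idea"}

def monthly_active_users(events: pd.DataFrame, as_of: pd.Timestamp) -> int:
    """Count distinct users with at least one key action in the 30 days before `as_of`."""
    window_start = as_of - pd.Timedelta(days=30)
    in_window = events[
        (events["timestamp"] > window_start)
        & (events["timestamp"] <= as_of)
        & (events["action"].isin(KEY_ACTIONS))
    ]
    return in_window["user_id"].nunique()

# Toy usage:
events = pd.DataFrame({
    "user_id": [1, 1, 2, 3],
    "action": ["publish_post", "view_analytics", "add_idea", "publish_post"],
    "timestamp": pd.to_datetime(["2024-05-02", "2024-05-20", "2024-05-18", "2024-03-02"]),
})
print(monthly_active_users(events, pd.Timestamp("2024-05-31")))  # -> 2
```

The simplicity is exactly the appeal: one number, one window, any qualifying action counts. The trouble starts with what ends up inside that action list.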
It was a practical, straightforward definition during our early growth, but as our product evolved, that definition began to fall apart.
Over time, Buffer’s offering has evolved to serve more diverse needs, from individual creators to teams, from publishing to commenting and analytics, and from posting to planning. The product is now highly feature-rich, a shift that has allowed us to serve our users in more ways.
Our MAU definition grew alongside this product evolution, incorporating new actions to quantify the value we believed our expanded cornerstone features (content creation, analytics, engagement, and more) were creating. Each product team had its own definition of what “active” meant, but there was no shared clarity on the core value action. A clear example of shipping the org chart.
At a glance, our MAU looked strong: steady growth, a consistent upward trend. But under the hood, the story was more complicated. Because the definition was so complex, it was never clear what caused movement in the metric. We always had to break it down into its parts, which made answers harder to find and pulled time away from other important work.
When we disaggregated the data, we saw something worrying: the number of users actually scheduling and sending posts, our core engagement loop, was quietly declining. In other words, MAU was rising, but it wasn’t rising in the way that mattered.
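The breakdown that surfaced this looks roughly like the sketch below: compute distinct active users per month for the composite definition and for each action separately, so the trend of the core behaviour is visible on its own. As above, the schema and action names are assumptions for illustration.

```python
# Rough sketch: disaggregate a composite "active users" metric by action type
# so each behaviour's trend can be inspected on its own.
import pandas as pd

def active_users_by_action(events: pd.DataFrame) -> pd.DataFrame:
    """Distinct users per calendar month, overall and per action type."""
    events = events.copy()
    events["month"] = events["timestamp"].dt.to_period("M")

    # Composite view: any key action counts toward the headline number.
    overall = events.groupby("month")["user_id"].nunique().rename("any_key_action")

    # Disaggregated view: one column per action.
    per_action = (
        events.groupby(["month", "action"])["user_id"]
        .nunique()
        .unstack("action", fill_value=0)
    )
    return per_action.join(overall)

# The composite column can keep climbing even while the column for the core
# behaviour (e.g. "publish_post") declines month over month.
```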


The problem was that in our attempt to quantify all perceived user value, MAU had unintentionally become an overly complex and bloated composite metric. It bundled behaviours that were only loosely related, some central to our value proposition, others more peripheral. That complexity masked critical trends and gave a false sense of momentum.
“We were growing, just not exactly in the way we intended.”
MAU’s catch-all definition was no longer helping us build a habit-forming product. It was distorting how we understood user engagement, making experiments harder to evaluate, and pulling focus away from the behaviours most predictive of long-term retention.
And while this example is specific to Buffer, the challenge isn’t unique to our team. Many SaaS businesses face a similar risk of metric dilution when their solution or feature set starts to grow. As teams add new features or support broader use cases, there’s a natural temptation to capture more in a single number (we created this feature, so it must equal user value, right?).
But unless that number stays tightly connected to the product’s core value and retention loops, it can end up incentivizing shallow behaviours and hiding the real signals of growth (or lack thereof).
The real cost of misleading metrics
On the surface, MAU was doing its job: it showed usage was growing. But once we started peeling back the layers, we realized it was giving us a false sense of progress. The real issue wasn’t MAU as a concept (we all need to track active usage somehow), but how we had defined it.
Composite metrics like this can be helpful when you want a broad view of engagement across a wide product surface. But they carry real risks:
They dilute your focus. When different user actions count equally, your team may optimize for what’s easiest to move, not what matters most.
They create false positives. You may see growth in your metric while your core product retention is actually weakening.
They shape incentives. If your North Star says any activity is good activity, your onboarding, messaging, and product roadmap may start reflecting that.
For us, that meant we were often trying to improve activation and retention by encouraging any key action, instead of reinforcing the real habit that drives long-term value for our users: publishing content.
"Composite metrics aren’t wrong, but they require discipline. Without it, they can blur the line between usage and value, and quietly lead your strategy off course."
When a North Star becomes a mirage
A good North Star metric isn’t just a measurement tool; it’s a decision-making anchor for the entire organization.
It shapes how teams prioritize, how they evaluate progress, and what they believe success looks like. So when that metric drifts away from your product’s core value, the consequences can ripple across the company.
At Buffer, we’d built roadmaps, growth models, and activation experiments around MAU. It was central to how we evaluated performance, set goals, and communicated success. But once we realized that MAU was no longer capturing the behaviours that truly mattered, it became clear that our strategic compass had subtly shifted off course.
The danger with a misaligned North Star is that it doesn’t just measure the wrong things: it can actually incentivize them. Teams naturally optimize for what’s measured. And for us, that meant:
Onboarding flows encouraging users to try every action across our solutions, rather than the actions associated with early, high-retention behaviours
Experiments and feature launches evaluating success against misaligned engagement metrics, obscuring their true impact on our bottom line
Lifecycle messaging focused on breadth rather than depth, treating users as a homogeneous cohort
Worse, because MAU was still generally trending upward, it created false confidence. We were growing something; we just didn’t realize that what was growing wasn’t the behaviour that actually sustained user value over time.
Metrics should evolve with your product, your customers, and your strategy. What made sense two years ago might be a distraction today, and we should always be willing to use new evidence to update our beliefs.
Especially in tech, where things evolve quickly, teams need to periodically step back and ask: Is this still the right thing to measure? Does it reflect the value we’re trying to deliver now, not just the value we delivered before? Are we all still rowing in the right direction?
We had to reassess. And in doing so, we found clarity not just in a better metric, but in a renewed shared understanding of what users value and what truly matters for retention.
The turning point for MAU
The realization that MAU was no longer serving us didn’t come from a single moment; it emerged gradually, then all at once.
Over months, the signs had been building: post publishing was declining, retention wasn’t improving as we expected, and our core activation loop was stagnant. But during Collaboration Week (a period in 2024 when the team was encouraged to bring forward any “meaty” topics), we finally paused long enough to confront the problem directly.
We kicked off a broader team conversation: was our North Star still aligned with how our product created value today?

That discussion unlocked a deeper exploration. With support from our data and product teams, we started breaking down:
Which behaviours actually predicted long-term retention?
What usage patterns signalled habit formation?
How did natural usage frequencies change what we could see and act on?
We also re-evaluated how we defined “active.” A user logging in or adding an idea wasn’t the same as someone publishing content. If our goal was to build and reinforce consistent publishing habits, our metric had to reflect that.
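As a rough illustration of the first question, a simplified version of that kind of analysis might compare longer-term retention for users who did and didn’t publish in their first week. The table shapes, column names, and the 4-week horizon below are assumptions for the sake of the sketch, not our actual analysis.

```python
# Simplified sketch: does publishing in week 1 predict being active in week 4?
# Assumed tables: signups (user_id, signup_at) and events (user_id, action, timestamp).
import pandas as pd

def week4_retention_by_first_week_publishing(
    events: pd.DataFrame, signups: pd.DataFrame
) -> pd.Series:
    """Week-4 retention rate, split by whether the user published in week 1."""
    df = events.merge(signups, on="user_id")
    df["day"] = (df["timestamp"] - df["signup_at"]).dt.days

    # Users who published at least once in days 0-6, and users active in days 21-27.
    week1_publishers = set(df.loc[df["day"].between(0, 6) & (df["action"] == "publish_post"), "user_id"])
    week4_active = set(df.loc[df["day"].between(21, 27), "user_id"])

    summary = signups.assign(
        published_week_1=signups["user_id"].isin(week1_publishers),
        retained_week_4=signups["user_id"].isin(week4_active),
    )
    # Mean of a boolean column per cohort = retention rate for that cohort.
    return summary.groupby("published_week_1")["retained_week_4"].mean()
```

Comparisons like this, repeated across candidate behaviours, are what let us separate actions that merely count as “usage” from the ones that actually predict users sticking around.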
These conversations weren’t about perfection; they were about precision. Out of this work came the early shape of a new direction: simplify the definition, tighten the time horizon, and centre our North Star on content publishing, the most direct and habit-forming expression of Buffer’s value.
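To give a sense of that shape (the definition we actually landed on is the subject of Part 2), a weekly, publishing-centred count might look something like the sketch below; the action name and 7-day window are illustrative only.

```python
# Illustrative only: a simpler, tighter, behaviour-specific active-user count.
import pandas as pd

def weekly_publishing_active_users(events: pd.DataFrame, as_of: pd.Timestamp) -> int:
    """Distinct users who published at least one post in the 7 days before `as_of`."""
    window_start = as_of - pd.Timedelta(days=7)
    published = events[
        (events["timestamp"] > window_start)
        & (events["timestamp"] <= as_of)
        & (events["action"] == "publish_post")
    ]
    return published["user_id"].nunique()
```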
In Part 2, I’ll walk through how we actually redefined our North Star metric, from a broad composite to a behaviourally specific, weekly signal, and the changes it set in motion across our product, strategy, and culture. Expect more practical examples, detailed analyses, and tips for anyone looking to revisit their North Stars.

