7 Comments

Great article! It's fascinating how this same idea is growing among people who don't even know each other. We're lucky that Substack allows us to meet.

Recently I wrote an article with a very similar message:

https://maxpiechota.substack.com/p/customer-success-driven-metrics-for

I would greatly appreciate hearing your thoughts.

I've shared your article above in my list defining the skill set of a Product Engineer:

https://maxpiechota.substack.com/p/what-methodologiesframeworks-the


Interesting! I absolutely support your thoughts. I struggle with the same problem in vehicle development projects as well. I think about differentiating KPIs into effectiveness and efficiency KPIs. When we look at efficiency, we often measure only the input, because it is so difficult to measure the output. For effectiveness, I struggle to distinguish the impact of engineering from the impact of other functions on customer value. What do you think?


The most successful (and enjoyable) products/features I worked on focused on a business outcome with teams having multiple roles in them (devs, PMs, UX, customer support) selected by the actual needs of the project.

Each time, the team was "measured" by achievements in the desired outcome, not by the number of partial outputs they produced (e.g., adoption rate vs number of pull requests).

On an individual level, the reality is much more complicated than what measurements, such as the number of tickets done, the number of PRs closed, etc., can capture. Measuring them to identify outliers might still be important, but not to evaluate individuals' performance. For that, things like "360 feedback" from team members and stakeholders of respective projects are much more valuable (at least to me).


That's why you create a multidisciplinary team, even going as far as putting a developer and a salesperson on the same team. They work together toward the metric, without friction between silos and functions.


Wow, thanks a lot for sharing this powerful information. It's so inspiring and has helped me understand the basics of what to focus on with respect to metrics tracking. However, my question is this: for all the outcome metrics you named and explained, I still don't know how to calculate them in practice, or what tools or dashboards to use to implement them in real life. Can you help me with that, especially for things like:

1) Feature adoption rate: what practical way can I use to calculate it with my team?

2) System reliability: likewise, what practical way can I use to calculate it with my team?

3) The same goes for error rate. I just need a formula to use, if there is one, or a possible dashboard filter (like in Jira, for example) that can help track them.

Thanks for your understanding, and for a fast reply.


Hi! Thank you for your comment. I'm glad you enjoyed the article.

I have good experience with two tools for engineering and product metrics; each is a bit different, but they complement each other nicely.

The first one is DataDog. It's effectively an industry standard for engineering monitoring and observability. It is great for system reliability, error rates, monitoring, and alerting.

As an example of a reliability calculation, you can start simply by picking your most important user scenarios or API endpoints and calculating reliability over some time period as:

100 * (number of successful scenarios) / (number of successful + number of failed scenarios). This gives you a percentage, say 97%, which is the reliability of that scenario. The error rate is then:

100 - reliability (3% in the example above) over the same time period.
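To make the formulas above concrete, here is a minimal sketch in Python. It assumes you have already exported per-scenario success and failure counts from your monitoring tool; the function names and example counts are illustrative, not from any specific DataDog API.

```python
def reliability_pct(successes: int, failures: int) -> float:
    """Percentage of scenario runs that succeeded in the time period."""
    total = successes + failures
    if total == 0:
        return 0.0  # no traffic: treat as 0% rather than dividing by zero
    return 100 * successes / total


def error_rate_pct(successes: int, failures: int) -> float:
    """Error rate is the complement of reliability over the same period."""
    return 100 - reliability_pct(successes, failures)


# Matching the 97% example from the comment above:
print(reliability_pct(successes=970, failures=30))  # 97.0
print(error_rate_pct(successes=970, failures=30))   # 3.0
```

In a real setup, the monitoring tool evaluates these same ratios continuously over your chosen window; the sketch just shows the arithmetic.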

In DataDog you can build both dashboards and monitors on top of such calculations. It scales well because you can create Terraform templates for it.

For product-oriented metrics, Amplitude can be a good pick. Feature adoption can be as simple as:

100 * (number of users using the feature) / (overall number of active users) over a time period. Then you can observe on a dashboard whether the percentage increases over time. Of course, this can also be done in DataDog, but Amplitude is stronger at segmentation, funnels, longer data retention, etc., if you need that.
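The adoption formula can be sketched the same way. This assumes you can pull two sets of user IDs from your analytics tool for the chosen period (users who triggered the feature, and all active users); the names and sample IDs below are made up for illustration.

```python
def feature_adoption_pct(feature_users: set, active_users: set) -> float:
    """Share of active users who used the feature in the time period."""
    if not active_users:
        return 0.0  # no active users: adoption is undefined, report 0%
    # Intersect so users outside the active set don't inflate the rate.
    return 100 * len(feature_users & active_users) / len(active_users)


active = {"u1", "u2", "u3", "u4"}
used_feature = {"u2", "u4"}
print(feature_adoption_pct(used_feature, active))  # 50.0
```

Tracking this number per week or per release on a dashboard shows whether adoption is actually trending up, which is what the outcome metric is for.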

Of course, you can also go with the ELK stack, Prometheus, or something similar for on-prem solutions. It all depends on the scale you need and your budget.


Thanks for this detailed reply and the explanation with examples. I deeply appreciate your help! I will look into DataDog and the others to learn more…
