Love the practical advice on the trunk-based development and feature flags.
There's one aspect of release speed/cadence that you didn't mention explicitly, and that I wrote about a few weeks ago: the idea that "control decays exponentially with each day that passes after a release".
I make the argument that we should have a metric for software deliveries: days since last release, where each day that passes adds MORE risk of not being able to release at all (a rough sketch of that metric is below).
Here's that post: https://open.substack.com/pub/vascoduarte/p/how-to-control-a-software-project?r=3a1rs5&utm_campaign=post&utm_medium=web&showWelcomeOnShare=true
I'd love to read your thoughts on that short article :)
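To make that decay idea concrete, here is a minimal sketch of the metric in Python; the exponential model and the 14-day half-life are purely illustrative assumptions on my part, not something from the post:

```python
from datetime import date

# Illustrative assumption: "control" halves every 14 days without a release.
HALF_LIFE_DAYS = 14.0

def days_since_last_release(last_release: date, today: date) -> int:
    """The raw delivery metric: days elapsed since the last release."""
    return (today - last_release).days

def remaining_control(last_release: date, today: date) -> float:
    """Control left after decay: 1.0 right after a release,
    0.5 one half-life later, and so on toward zero."""
    days = days_since_last_release(last_release, today)
    return 0.5 ** (days / HALF_LIFE_DAYS)

# Example: 30 days without a release leaves roughly 23% of our control.
print(remaining_control(date(2024, 1, 1), date(2024, 1, 31)))  # ~0.23
```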
I enjoyed the article; it was very thought-provoking. Thank you for sharing, Vasco!
With your definition, I'd say we can also call control a "measure of confidence," which indeed drops almost immediately after the running-tested-released cycle.
One countermeasure is well-thought-out observability built into the software/feature as part of the definition of done. It does not prevent confidence from decreasing, but it keeps it from disappearing entirely after a few months.
I'm interested in what you mean by that observability strategy. How would it help before a release?
It wouldn't necessarily help before a release, but after it, as a counterbalance to the decay of confidence.
Let's say that a feature has monitoring and alerting in place for its reliability, performance, sudden drops in usage, and maybe even funnel analysis.
Such observability gives us confidence that what we have now works, and for future releases it gives us confidence that if something goes wrong, we will catch it immediately.
Of course, observability will not help if we are not able to release at all because of a loss of control in the development (process) itself.
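As a rough sketch of what the "sudden drops in usage" check could look like, assuming a hypothetical metrics source and alerting hook (none of these names are a real API):

```python
# Hypothetical check for a sudden drop in feature usage: compare the
# last hour's event count against the same hour averaged over the
# previous week, and alert if it fell below a threshold.

DROP_THRESHOLD = 0.5  # alert if usage is below 50% of the baseline

def usage_dropped(current_count: int, baseline_count: float) -> bool:
    """Return True if usage dropped suspiciously versus the baseline."""
    if baseline_count <= 0:
        return False  # no baseline yet; nothing to compare against
    return current_count < DROP_THRESHOLD * baseline_count

# Example wiring (placeholder names, not a real monitoring library):
# current = metrics.count("checkout_completed", last="1h")
# baseline = metrics.avg_count("checkout_completed", same_hour_over="7d")
# if usage_dropped(current, baseline):
#     alerting.page("checkout usage dropped below 50% of weekly baseline")
```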
Ok. Now I understand. Yes, that makes sense.
Big fan of trunk-based development.
One technique I would suggest putting into practice to achieve this is TDD.
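As a tiny, purely illustrative sketch (all names made up) of how TDD pairs with trunk-based development: the test is written first and fails, the minimal flagged implementation makes it pass, and the unfinished feature can still merge to trunk:

```python
# In TDD order, the tests below were written first and failed (red);
# this minimal implementation then made them pass (green). The flag
# lets the code merge to trunk before the feature is rolled out.
FEATURE_FLAGS = {"new_discount": True}  # placeholder in-memory flag store

def price_with_discount(price: float) -> float:
    """Apply the new 10% discount only while its flag is on."""
    if FEATURE_FLAGS.get("new_discount"):
        return round(price * 0.9, 2)
    return price

def test_discount_applies_ten_percent():
    # Red first: this assertion existed before price_with_discount did.
    assert price_with_discount(100.0) == 90.0

def test_flag_off_leaves_price_unchanged():
    FEATURE_FLAGS["new_discount"] = False
    try:
        assert price_with_discount(100.0) == 100.0
    finally:
        FEATURE_FLAGS["new_discount"] = True
```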
"Shipping software should be like breathing — constant, natural, and low-drama." — Given that I recently spent quite some time focusing on my breath, that's 100% accurate. :)
Great article, Sam, and thanks for the mention!