Accelerate DevOps with GitHub: Enhance software delivery performance with GitHub Issues, Projects, Actions, and Advanced Security

Author: Michael Kaufmann
Published: September 2022 by Packt
ISBN-13: 9781801813358
Length: 540 pages (Paperback, 1st Edition)
Table of Contents

Preface
Part 1: Lean Management and Collaboration
  Chapter 1: Metrics That Matter
  Chapter 2: Plan, Track, and Visualize Your Work
  Chapter 3: Teamwork and Collaborative Development
  Chapter 4: Asynchronous Work: Collaborate from Anywhere
  Chapter 5: The Influence of Open and Inner Source on Software Delivery Performance
Part 2: Engineering DevOps Practices
  Chapter 6: Automation with GitHub Actions
  Chapter 7: Running Your Workflows
  Chapter 8: Managing Dependencies Using GitHub Packages
  Chapter 9: Deploying to Any Platform
  Chapter 10: Feature Flags and the Feature Lifecycle
  Chapter 11: Trunk-Based Development
Part 3: Release with Confidence
  Chapter 12: Shift Left Testing for Increased Quality
  Chapter 13: Shift-Left Security and DevSecOps
  Chapter 14: Securing Your Code
  Chapter 15: Securing Your Deployments
Part 4: Software Architecture
  Chapter 16: Loosely Coupled Architecture and Microservices
  Chapter 17: Empower Your Teams
Part 5: Lean Product Management
  Chapter 18: Lean Product Development and Lean Startup
  Chapter 19: Experimentation and A|B Testing
Part 6: GitHub for your Enterprise
  Chapter 20: GitHub – The Home for All Developers
  Chapter 21: Migrating to GitHub
  Chapter 22: Organizing Your Teams
  Chapter 23: Transform Your Enterprise
Other Books You May Enjoy

Engineering velocity

How does your company measure developer velocity? The most common approach is effort. Some companies used to rely on metrics such as lines of code or test coverage, but these are obviously poor choices, and I'm not aware of any company today that still uses them. If you can solve a problem in 1 line of code or in 100 lines, the single line is obviously preferable, since every line comes with a maintenance cost. The same goes for test coverage: the coverage figure itself says nothing about the quality of the tests, and bad tests add maintenance costs of their own.

Note

I try to keep the wording agnostic to the development method. I've seen teams adopt DevOps practices using Agile, Scrum, the Scaled Agile Framework (SAFe), Kanban, and even Waterfall. Every methodology has its own terminology, so I try to keep mine as neutral as possible: I talk about requirements rather than user stories or product backlog items, for example, although most of the examples I use are based on Scrum.

The most common approach to measuring developer velocity is estimating requirements. You break your requirements down into small items, such as user stories, and the product owner assigns a business value to each. The development team then estimates the story and assigns a value for its effort. It doesn't matter whether you use story points, hours, days, or any other number; it is essentially a representation of the effort required to deliver the requirement.

Measuring velocity with effort

Measuring velocity with estimated effort and business value can have side effects if you report the numbers to management. There is a kind of observer effect: people try to improve the numbers. In the case of effort and business value, that's easy: you simply assign bigger numbers to the stories. And this is what normally happens, especially if the numbers are compared across teams: developers assign bigger effort numbers to the stories, and product owners assign bigger business values.

While this is not optimal for measuring developer velocity, it does little harm as long as the estimation happens in the normal conversation between the team and the product owner. But if the estimation is done outside your normal development process, estimates can become toxic and have very negative side effects.

Toxic estimates

The search for an answer to the question How much will it cost? for a bigger feature or initiative normally leads to an estimation that happens outside the normal development process and before the decision to implement it. But how do we estimate a complex feature or initiative?

Everything we do in software development is new. If you had already done it, you could use the existing software instead of writing it again; even a complete rewrite of an existing module is new because it uses a new architecture or new frameworks. Something that has never been done before can only be estimated with limited certainty. It is guessing, and the greater the complexity, the bigger the cone of uncertainty (see Figure 1.2).

Figure 1.2 – The cone of uncertainty

The cone of uncertainty is used in project management. Its premise is that, at the beginning of a project, cost estimates carry a high degree of uncertainty, which is then reduced through rolling planning until it reaches zero at the end of the project. The x axis is normally time, but it can also be read as complexity and abstraction: the more abstract and complex a requirement is, the bigger the uncertainty in the estimate.

To estimate complex features or initiatives better, they are broken down into smaller parts that can be estimated more reliably. You also need to come up with a solution architecture as part of the work breakdown. Since this happens outside the normal development process, upfront, and out of context, it has some unwanted side effects, as outlined here:

  • Normally, the entire team is not present. This leads to less diversity, less communication, and therefore less creativity in problem-solving.
  • The focus is on finding problems. The more problems you detect beforehand, the more accurate your estimates are likely to be. In particular, if estimates are later used to measure performance, people quickly learn that finding more problems buys more time, so they attach higher estimates to the requirements.
  • If in doubt, the engineers assigned the task of estimation pick the more complex solution. If, for example, they are not sure whether they can solve a problem with an existing framework, they might plan to write their own solution to be on the safe side.

If these numbers were only used by management to decide whether to implement a feature, this would not do much harm. But normally, the requirements—including the estimates and the solution architecture—are not thrown away; they are used later when implementing the features. The team then has a ready-made, less creative solution in front of it, one optimized for problems rather than solutions. This inevitably leads to less creativity and less outside-the-box thinking during implementation.

#NoEstimates

Estimates are not bad per se. They can be valuable if they take place at the right time. If the development team and the product owner discuss the next stories, estimates can help drive the conversation. If the team plays planning poker, for example, to estimate user stories and the estimates differ, this indicates that people have different ideas about how to implement them. That can lead to valuable discussion and can make the session more productive, because stories the team already has a common understanding of can simply be skipped. The same is true for the business value: if the team does not understand why the product owner assigns a very high or very low number, this can also trigger important discussions. Maybe the team already knows a way to achieve a successful outcome, or there are discrepancies in the perception of different personas.

But many teams feel more comfortable not estimating the requirements at all. This is often referred to under the hashtag #noestimates. Especially in highly experimental environments, estimation is often considered a waste of time. Remote and distributed teams also often prefer not to estimate. They tend to move discussions from in-person meetings into issues and pull requests (PRs). This documents the discussions and helps teams work more asynchronously, which can help bridge different time zones.

With developer velocity off the table, teams should be allowed to decide for themselves whether they want to estimate. This may also change over time: some teams gain value from estimation, while others do not. Let teams decide what works for them and what doesn't.

The correct way to estimate high-level initiatives

So, what is the best way to estimate more complex features or initiatives so that the product owner can decide whether they are worth implementing? Get the entire team together and ask the following question: Can this be delivered in days, weeks, or months? Another option is to use analogy estimation and compare the initiative to something that has already been delivered. The question then becomes: Is this initiative smaller than, equal to, or more complex than the one we delivered previously?

The most important thing is not to break the requirements down or to lay out a solution architecture upfront; what matters is the gut feeling of all engineers. Then, have everyone assign a minimum and a maximum number for the unit. For analogy estimation, use percentages relative to the original initiative and calculate the results using historical data, as sketched below.
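As a minimal sketch of the analogy approach, the following Python snippet scales each engineer's relative estimate by the actual duration of a previously delivered initiative. The names, percentages, and reference duration are hypothetical, not taken from the book:

# Hypothetical example: analogy estimation relative to a delivered initiative.
# reference_months is how long the previous, comparable initiative actually took.
reference_months = 5.0

# Each engineer states how big the new initiative feels relative to the reference,
# as a (minimum, maximum) percentage. 100 means "about the same size".
relative_estimates = {
    "engineer_a": (60, 120),
    "engineer_b": (80, 150),
    "engineer_c": (50, 100),
}

# Scale the percentages by the historical duration to get months.
scaled = {
    name: (reference_months * lo / 100, reference_months * hi / 100)
    for name, (lo, hi) in relative_estimates.items()
}

for name, (lo, hi) in scaled.items():
    print(f"{name}: {lo:.1f} to {hi:.1f} months")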

The easiest way to report this would look like this:

Given the current team,
if we prioritize the initiative <initiative name>,
the team is confident to deliver the feature in between <smallest minimum> and <highest maximum>

Taking the smallest minimum and the highest maximum value is the safest way, but it can also lead to distorted numbers if the pessimistic and optimistic estimates are far apart. In this case, the average might be the better number to take, as illustrated here:

Given the current team,
if we prioritize the initiative <initiative name>,
the team is confident to deliver the feature in between <average minimum> and <average maximum>
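To make the two reporting options concrete, here is a small Python sketch with made-up per-engineer numbers (the names and values are hypothetical). It computes both the smallest-minimum/highest-maximum range and the averaged range:

# Hypothetical per-engineer estimates in months: (minimum, maximum).
estimates = {
    "engineer_a": (2, 5),
    "engineer_b": (3, 8),
    "engineer_c": (3, 6),
}

minimums = [lo for lo, _ in estimates.values()]
maximums = [hi for _, hi in estimates.values()]

# Option 1: the safest range - smallest minimum to highest maximum.
print(f"Safest range:   {min(minimums)} to {max(maximums)} months")

# Option 2: the averaged range - arithmetic mean of the minimums and maximums.
avg_min = sum(minimums) / len(minimums)
avg_max = sum(maximums) / len(maximums)
print(f"Averaged range: {avg_min:.1f} to {avg_max:.1f} months")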

But taking the average (the arithmetic mean; =AVERAGE() in Excel) means a higher or lower deviation, depending on how the individual estimates are distributed. The higher the deviation, the less confident you can really be that you can deliver the feature in that period. To get an idea of how your estimates are distributed, you can calculate the standard deviation (=STDEV.P() in Excel). You can look at the deviation for the minimum and the maximum, but also at each member's estimates. The smaller the deviation, the closer the values are to the average.

Since standard deviations are absolute values, they cannot be compared across estimations. To get a relative number, you can use the coefficient of variation (CV): the standard deviation divided by the average, typically expressed as a percentage (=STDEV.P() / AVERAGE() in Excel). The higher the value, the more the values are spread out around the average; the lower the value, the more confident each team member is in their estimates, or the entire team is with regard to the minimum and maximum. See the example in the following table:

Table 1.1 – Example for calculating estimations
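The following sketch shows how the standard deviation and CV described above can be computed in Python using the statistics module (statistics.pstdev is the population standard deviation, matching Excel's STDEV.P). The numbers are illustrative, not the actual data from Table 1.1:

import statistics

# Illustrative minimum and maximum estimates (in months) from five team members.
minimums = [2, 3, 2, 4, 3]
maximums = [5, 8, 5, 7, 6]

def describe(label, values):
    mean = statistics.mean(values)      # =AVERAGE()
    stdev = statistics.pstdev(values)   # =STDEV.P()
    cv = stdev / mean                   # coefficient of variation
    print(f"{label}: mean={mean:.1f}, stdev={stdev:.2f}, CV={cv:.0%}")

describe("Minimum", minimums)
describe("Maximum", maximums)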

To express the uncertainty reflected in the deviation of the values, you can add a confidence level to the estimation. This can be text (such as low, medium, or high) or a percentage, as illustrated here:

Given the current team,
if we prioritize the initiative <initiative name>,
the team is <confidence level> confident to deliver the feature in <arithmetic mean>

I don't use a fixed formula here because that would require knowing the team. If you look at the data in the example (Table 1.1), you can see that the average of the minimums (2.7) and the average of the maximums (6.3) are not that far apart; their midpoint is 4.5 months. If you look at the individual team members, you can see that some are more pessimistic and some more optimistic. If past estimations confirm this, it gives you very high confidence that the average is realistic, even if the minimum and maximum values have a fairly high CV. Your estimate could look like this:

Given the current team,
if we prioritize the initiative fancy-new-thing,
the team is 85% confident to deliver the feature in 4.5 months

This kind of estimation is not rocket science. It has nothing to do with complex estimation and forecasting techniques such as three-point estimation (https://en.wikipedia.org/wiki/Three-point_estimation), the PERT distribution (https://en.wikipedia.org/wiki/PERT_distribution), or Monte Carlo simulation (https://en.wikipedia.org/wiki/Monte_Carlo_method), all of which depend on a detailed breakdown of the requirements and an estimation at the task (work) level. The idea is to avoid upfront planning and breaking down the requirements, and to rely more on the gut feeling of your engineering team. The technique here just gives you some insight into the data points you collect across your team. It's still just guessing.

From developer to engineering velocity

Effort is not a good metric for measuring developer velocity, especially if it is based on estimates, and in cross-functional teams, velocity does not depend on developers alone. So, how do you shift from developer velocity to engineering velocity?
