Hands-On Software Engineering with Golang

You're reading from Hands-On Software Engineering with Golang: Move beyond basic programming to design and build reliable software with clean code.

Product type: Paperback
Published in: Jan 2020
Publisher: Packt
ISBN-13: 9781838554491
Length: 640 pages
Edition: 1st Edition
Author: Achilleas Anagnostopoulos
Table of Contents (21 chapters)

Preface
1. Section 1: Software Engineering and the Software Development Life Cycle
2. A Bird's-Eye View of Software Engineering
3. Section 2: Best Practices for Maintainable and Testable Go Code
4. Best Practices for Writing Clean and Maintainable Go Code
5. Dependency Management
6. The Art of Testing
7. Section 3: Designing and Building a Multi-Tier System from Scratch
8. The Links 'R' Us Project
9. Building a Persistence Layer
10. Data-Processing Pipelines
11. Graph-Based Data Processing
12. Communicating with the Outside World
13. Building, Packaging, and Deploying Software
14. Section 4: Scaling Out to Handle a Growing Number of Users
15. Splitting Monoliths into Microservices
16. Building Distributed Graph-Processing Systems
17. Metrics Collection and Visualization
18. Epilogue
19. Assessments
20. Other Books You May Enjoy

A list of software development models that all engineers should know

The software engineering definition from the previous section alludes to the fact that software engineering is a complicated, multi-stage process. In an attempt to provide a formal description of these stages, academia has put forward the concept of the SDLC.

The SDLC is a systematic process for building high-quality software that matches the expectations of the end user or customer while ensuring that the project's cost stays within a reasonable bound.

Over the years, there has been an abundance of alternative model proposals for facilitating software development. The following diagram is a timeline illustrating the years when some of the most popular SDLC models were introduced:

Figure 1: A timeline for the software development models that will be presented in this chapter

In the upcoming sections, we will explore each of the preceding models in more detail.

Waterfall

The waterfall model is probably the most widely known model out there for implementing the SDLC. It was introduced by Winston Royce in 1970 [11] and defines a series of steps that must be completed in a strict sequential order. Each stage produces a certain output, for example, a document or some other artifact, that is, in turn, consumed by the step that follows.

The following diagram outlines the basic steps that were introduced by the waterfall model:

  • Requirement collection: During this stage, the customer's requirements are captured and analyzed and a requirements document is produced.
  • Design: Based on the requirements document's contents, analysts will plan the system's architecture. This step is usually split into two sub-steps: the logical system design, which models the system as a set of high-level components, and the physical system design, where the appropriate technologies and hardware components are selected.
  • Implementation: The implementation stage is where the design documents from the previous step get transformed by software engineers into actual code.
  • Verification: The verification stage follows the implementation stage and ensures that the piece of software that got implemented actually satisfies the set of customer requirements that were collected during the requirements gathering step.
  • Maintenance: The final stage in the waterfall model is when the developed software is deployed and operated by the customer:
Figure 2: The steps defined by the waterfall model

One thing to keep in mind is that the waterfall model operates under the assumption that all customer requirements can be collected early on, especially before the project implementation stage begins. Having the full set of requirements available as a set of use cases makes it easier to get a more accurate estimate of the amount of time that's required for delivering the project and the development costs involved. A corollary to this is that software engineers are provided with all the expected use cases and system interactions in advance, thus making testing and verifying the system much simpler.

The waterfall model comes with a set of caveats that make it less favorable to use when building software systems. One potential caveat is that the model describes each stage in an abstract, high-level way and does not provide a detailed view into the processes that comprise each step or even tackle cross-cutting processes (for example, project management or quality control) that you would normally expect to execute in parallel through the various steps of the model.

While this model does work for small- to medium-scale projects, it tends, at least in my view, not to be as efficient for projects such as the ones commissioned by large organizations and/or government bodies. To begin with, the model assumes that analysts are always able to elicit the correct set of requirements from customers. This is not always the case as, oftentimes, customers are not able to accurately describe their requirements or tend to identify additional requirements just before the project is delivered. In addition to this, the sequential nature of this model means that a significant amount of time may elapse between gathering the initial requirements and the actual implementation. During this time (what some would refer to as an eternity in software engineering terms), the customer's requirements may shift. Changes in requirements necessitate additional development effort and this directly translates into increased costs for the deliverable.

Iterative enhancement

The iterative enhancement model that's depicted in the following diagram was proposed in 1975 by Basili and Turner [2] in an attempt to improve on some of the caveats of the waterfall model. By recognizing that requirements may potentially change for long-running projects, the model advocates executing a set of evolution cycles or iterations, with each one being allocated a fixed amount of time out of the project's time budget:

Figure 3: The steps of the iterative enhancement model

Instead of starting with the full set of specifications, each cycle focuses on building some parts of the final deliverable and refining the set of requirements from the cycle that precedes it. This allows the development team to make full use of any information available at that particular point in time and ensure that any requirement changes can be detected early on and acted upon.

One important rule when applying the iterative model is that the output of each cycle must be a usable piece of software. The last iteration is the most important as its output yields the final software deliverable. As we will see in the upcoming sections, the iterative model has exerted quite a bit of influence in the evolution of most of the contemporary software development models.

Spiral

The spiral development model was introduced by Barry Boehm in 1986 [5] as an approach to minimize risk when developing large-scale projects associated with significant development costs.

In the context of software engineering, risks are defined as any kind of situation or sequence of events that can cause a project to fail to meet its goals. Examples of various degrees of failure include the following:

  • Missing the delivery deadline
  • Exceeding the project budget
  • Delivering software on time but having it depend on hardware that isn't available yet

As illustrated in the following diagram, the spiral model combines the ideas and concepts from the waterfall and iterative models with a risk assessment and analysis process. As Boehm points out, a very common mistake that people who are unfamiliar with the model tend to make when seeing this diagram for the first time is to assume that the spiral model is just a sequence of incremental waterfall steps that have to be followed in a particular order for each cycle. To dispel this misconception, Boehm provided the following definition for the spiral model:

"The spiral development model is a risk-driven process model generator that takes a cyclic approach to progressively expand the project scope while at the same time decreasing the degree of risk."

Under this definition, risk is the primary factor that helps project stakeholders answer the following questions:

  • What steps should we follow next?
  • How long should we keep following those steps before we need to reevaluate risk?
Figure 4: The original spiral model, as published by Boehm in 1986

At the beginning of each cycle, all the potential sources of risk are identified and mitigation plans are proposed to address any risk concerns. This set of risks is then ordered in terms of importance, for example, by the impact on the project and the likelihood of occurrence, and used as input by the stakeholders when planning the steps for the next spiral cycle.

Another common misconception about the spiral model is that the development direction is one-way and can only spiral outward, that is, no backtracking to a previous spiral cycle is allowed. This is generally not the case: stakeholders always try to make informed decisions based on the information that's available to them at a particular point in time. As the project's development progresses, circumstances may change: new requirements may be introduced or additional pieces of previously unknown information may become available. In light of the new information that's available to them, stakeholders may opt to reevaluate prior decisions and, in some cases, roll back development to a previous spiral iteration.

Agile

When we talk about agile development, we usually refer to a broader family of software development models that were initially proposed during the early 90s. Agile is a sort of umbrella term that encompasses not only a set of frameworks but also a fairly long list of best practices for software development. If we had to come up with a more specific definition for agile, we would probably define it as follows:

"Agile development advocates building software in an incremental fashion by iterating in multiple, albeit relatively, short cycles. Making use of self-organizing and cross-functional teams, it evolves project requirements and solutions by fostering intra-team collaboration."

The popularity of agile development and agile frameworks, in particular, skyrocketed with the publication of the Manifesto for Agile Software Development in 2001 [3]. At the time of writing this book, agile development practices have become the de facto standard for the software industry, especially in the field of start-up companies.

In the upcoming sections, we will be digging a bit deeper into some of the most popular models and frameworks in the agile family. While doing a deep dive on each model is outside the scope of this book, a set of additional resources will be provided at the end of this chapter if you are interested in learning more about the following models.

Lean

Lean software development is one of the earliest members of the agile family of software development models. It was introduced by Mary and Tom Poppendieck in 2003 [10]. Its roots go back to the lean manufacturing techniques that were introduced by Toyota's production system in the 70s. When applied to software development, the model advocates seven key principles.

Eliminate waste

This is one of the key philosophies of the lean development model. Anything that does not directly add value to the final deliverable is considered waste and must be removed.

Typical cases of things that are characterized as waste by this model are as follows:

  • Introduction of non-essential, that is, nice-to-have features when development is underway.
  • Overly complicated decision-making processes that force development teams to remain idle while waiting for a feature to be signed off (in other words: bureaucracy!)
  • Unnecessary communication between the various project stakeholders and the development teams. This disrupts the focus of the development team and hinders their development velocity.

Create knowledge

The development team should never assume that the customers' requirements are static. Instead, the assumption should always be that they are dynamic and can change over time. Therefore, it is imperative for the development team to come up with appropriate strategies to ensure that their view of the world is always aligned with the customer's.

One way to achieve this is by borrowing and implementing some facets of other models, such as the iterative model we discussed in the previous section, or by tweaking their workflows accordingly so that deliverables are always built in an incremental fashion and always with an up-to-date version of the customer's requirements.

Externally acquired knowledge is, of course, only half of the equation; the development teams themselves are also another source of knowledge. As teams collaborate to deliver a piece of software, they discover that certain approaches and practices work better together than others. In particular, some approaches accelerate the team's development velocity, while others hinder it. Due to this, it is important for teams to capture this bit of tacit knowledge, internalize it, and make it available to other teams in the future. One way to achieve this is by arranging regular sessions for the teams to sync up, reflect on their workflows, and discuss any potential issues.

Defer commitment

As with all the models in the agile family, the lean model is devoid of any attempt to force project stakeholders into making all the required decisions at the beginning of the project. The reasoning behind this is quite simple: people are more likely to be convinced that change is needed when they have not already committed to a particular set of actions.

The lean model actively encourages stakeholders to defer all the important and potentially irreversible decisions until a later stage in the project's development.

Build in quality

One of the primary reasons for project delays is undoubtedly the accumulation of defects. Defects have a definite impact on the development team's velocity as members often need to pause their current work to chase down and fix potentially field-critical bugs that were introduced by a previous development iteration.

The lean model prompts engineering teams to aggressively focus on following agile practices such as test- or behavior-driven development (TDD/BDD) in an attempt to produce lean, well-tested code with fewer defects. The benefits of this recommendation have also been corroborated by research that's been performed by Turhan and others [13].

Deliver fast

Every engineering team out there would probably agree that they would like nothing more than delivering the piece of software they are currently working on as fast as possible to the hands of the customer or the end user. The most common factors that prevent teams from delivering software fast are as follows:

  • Over-analyzing the business requirements
  • Over-engineering the solution to fit those requirements
  • Overloading the development team

Congruent to the philosophy of lean development, teams must iterate quickly, that is, they must build a solution as simple as possible, present it to the target customer as early as possible, and collect useful feedback that's used to incrementally improve the solution in subsequent iterations.

Respect and empower people

Lean development endeavors to improve the development teams' working environment by filtering out unneeded sources of distraction that increase the cognitive load on engineers and can eventually lead to burnout.

What's more, by discouraging micro-management and encouraging teams to self-organize, team members can feel more motivated and empowered. The Poppendiecks believe that engaged and empowered people can be more productive; ergo, they can bring more value to the team and, by extension, to the company that they are a part of.

See and optimize the whole

In Lean Software Development: An Agile Toolkit, Mary and Tom Poppendieck use a stream-based analogy to describe the software development process. By this definition, each stage of the development process can be treated as a potential generator of value (a value stream) for the business. The Poppendiecks claim that in order to maximize the value that flows through the various stages of development, organizations must treat the development process as a sequence of inter-linked activities and optimize them as a whole.

This is one of the most common pitfalls that organizations fall into when attempting to apply lean thinking concepts. You have probably heard of the old adage about missing the forest for the trees. Many organizations, under the influence of other lean model principles such as quick delivery, focus all their efforts on optimizing a particular aspect of their development process. To the casual external observer, this approach seems to pay off in the short term. In the long term, however, the team is vulnerable to the negative side effects of sub-optimization.

To understand how sub-optimization can affect the team's performance in the long run, let's examine a hypothetical scenario: in an attempt to iterate faster, the development team takes a few shortcuts, that is, they push out less than stellar code or code that is not thoroughly tested. While the code does work, and the customer's requirements are being met, it also increases the complexity of the code base with the unavoidable side effect that more defects start creeping into the code that is delivered to the customer. Now, the development team is under even more pressure to fix the bugs that got introduced while maintaining their previous development velocity at the same time. As you can probably deduce, by this point, the development team is stuck in a vicious circle, and certainly one that is not easy to escape from.

On the other side of the spectrum, a popular and successful example of applying the concepts of whole system optimization in the way that's intended by the lean development model is Spotify's squad-based framework. Spotify squads are lean, cross-functional, multi-disciplined, and self-organizing teams that bring together all the people who are needed to take a feature through all the stages of development, from its inception to final product delivery.

Scrum

Scrum is hands-down the most widely known framework of the agile family and the go-to solution for many companies, especially the ones working on new products or the ones that actively seek to optimize their software development process. In fact, Scrum has become so popular that, nowadays, several organizations are offering Scrum certification courses. It was co-created by Ken Schwaber and Jeff Sutherland and initially presented at ACM's object-oriented programming, systems, languages, and applications (OOPSLA) conference in 1995.

As a process framework, Scrum is meant to be applied by cross-functional teams working on large projects that can be split into smaller chunks of work, where each chunk normally takes between two and four weeks (also known as a sprint in Scrum terminology) to complete.

Contrary to the other software development models we've discussed so far, Scrum does not explicitly advocate a particular design process or methodology. Instead, it promotes an empirical, feedback loop type of approach: initially, the team comes up with an idea on how to proceed based on the information available at the time. The proposed idea is then put to the test for the next sprint cycle and feedback is collected. The team then reflects on that feedback, refines the approach further, and applies it to the following sprint.

As more and more sprint cycles go by, the team learns to self-organize and becomes more efficient at tackling the task at hand. By improving the quality of communication between team members while at the same time reducing distractions, teams often observe a boost in their output, also known as team velocity in agile terminology.

One important thing to keep in mind is that while this chapter examines Scrum from the perspective of a software engineer, the Scrum process and principles can also be applied when working on other types of projects that do not involve software development. For instance, Scrum can also be used to run marketing campaigns, hire personnel, or even tackle construction projects.

Scrum roles

When applying the Scrum framework to a software development team, each member can be mapped to one of the following three roles:

  • The Product Owner (PO)
  • The Development Team Member
  • The Scrum Master (SM)

The official Scrum guide [12], which is freely available to download online in over 30 languages, defines the PO as the key stakeholder in a project, that is, the person who maximizes the product's value resulting from the work of the development team.

The primary responsibility of the PO is to manage the project backlog. The backlog is just a formal way of referring to the list of tasks that need to be completed for a particular project and includes new features, enhancements, or bug fixes for upcoming development cycles.

The PO must always make sure that all the backlog entries are described in a clear, consistent, and unambiguous way. Furthermore, the backlog's contents should never be assumed to be static: new tasks may be introduced, while existing tasks may be removed to accommodate changes to the project requirements while development is underway. This adds an extra responsibility to the role of the PO: they need to be able to respond to such changes and reprioritize the backlog accordingly.
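A backlog that supports this kind of reprioritization can be sketched in a few lines of Go. The item titles and the integer priority scheme below are purely illustrative; real tools track far more state per item:

```go
package main

import "fmt"

// Item is a hypothetical backlog entry; Priority is assigned by the PO
// (lower value = more important).
type Item struct {
	Title    string
	Priority int
}

// Backlog keeps its items ordered by ascending priority.
type Backlog struct{ items []Item }

// Add inserts an item while keeping the slice sorted by priority.
func (b *Backlog) Add(it Item) {
	i := 0
	for i < len(b.items) && b.items[i].Priority <= it.Priority {
		i++
	}
	b.items = append(b.items, Item{})
	copy(b.items[i+1:], b.items[i:])
	b.items[i] = it
}

// Reprioritize lets the PO respond to changing requirements by moving
// an existing item to a new position in the ordering.
func (b *Backlog) Reprioritize(title string, newPriority int) {
	for i, it := range b.items {
		if it.Title == title {
			b.items = append(b.items[:i], b.items[i+1:]...)
			it.Priority = newPriority
			b.Add(it)
			return
		}
	}
}

func main() {
	var b Backlog
	b.Add(Item{"fix login bug", 2})
	b.Add(Item{"new reporting feature", 3})
	b.Add(Item{"data-loss defect", 1})
	b.Reprioritize("new reporting feature", 0) // the customer escalated it
	for _, it := range b.items {
		fmt.Println(it.Priority, it.Title)
	}
}
```

The point of the sketch is the invariant, not the data structure: whatever the team pulls from the top of the backlog is, by construction, the PO's current highest-value work.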

The development team comprises a set of individuals who implement the tasks that have been selected from the backlog. According to the basic tenets of Scrum, the team should:

  • Be cross-functional, bringing together people from different disciplines and varying skill sets
  • Pay no attention to the job titles of its members, focusing instead on the work that's performed
  • Be aligned toward a single goal: completing the set of tasks that the team committed to at the beginning of each sprint

The last but equally important Scrum role is that of the SM. The SM supports both the PO and the development team members by ensuring that everyone has a clear understanding of not only the team goals but also the various Scrum processes. The SM is also responsible for organizing and running the appropriate Scrum events (ceremonies) as and when required.

Essential Scrum events

Scrum prescribes a sequence of events that are specially designed to aid teams in becoming more agile and boosting their performance. Let's take a brief look at the list of essential Scrum events for the purpose of software development.

The first Scrum event that we will be examining is the planning session. During planning, the team examines the items from the backlog and commits to a set of tasks that the team will be working on during the next sprint.

As you probably expect, the team needs to periodically sync up so that all the team members are on the same page with respect to the tasks that other team members are currently working on. This is facilitated by the daily stand-up, a time-boxed session that usually takes no longer than 30 minutes. Each team member speaks in turn and briefly answers the following questions:

  • What was I working on yesterday?
  • What will I be working on today?
  • Are there any blockers for completing a particular task?

Blockers, if left unresolved, could jeopardize the team's goal for the sprint. Therefore, it is of paramount importance to detect blockers as early as possible and engage the team members to figure out ways to work around or address them.

At the end of a sprint, the team usually holds a retrospective session where team members openly discuss the things that went right, as well as the things that went wrong, during the sprint. For each problem that's encountered, the team attempts to identify its root cause and propose a series of actions to remedy it. The selected actions are applied during the next sprint cycle and their effect is re-evaluated in the next retrospective.

Kanban

Kanban, whose name loosely translates from Japanese as a visual signal or a billboard, is yet another very popular type of agile framework that has reportedly been in use at Microsoft since 2004. One of the iconic features of the Kanban model is, of course, the Kanban board, a concept outlined by David Anderson's 2010 book [1] that introduces the idea behind this particular model.

The Kanban board allows team members to visualize the set of items that the team is working on, along with their current state. The board is composed of a series of vertically oriented work lanes, or columns. Each lane has its own label and a list of items or tasks attached to it. As items or tasks are worked on, they transition between the various columns of the board until they eventually arrive at a column that signals their completion. Completed items are then typically removed from the board and archived for future reference.

The standard lane configuration for software development consists of at least the following set of lanes:

  • Backlog: A set of tasks to be worked on by the team in the near future
  • Doing: The tasks in progress
  • In review: Work that has been put up for review by other team members
  • Done: Items that have been completed

It is only logical that each team will customize the lane configuration to fit their particular development workflow. For example, some teams may include an in test column to keep track of items undergoing QA checks by another team, a deployed column to track items that have been deployed to production, and even a blocked column to specify tasks that cannot proceed without the team taking some type of action.
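To make the board mechanics concrete, here is a minimal in-memory sketch in Go (the lane names and task titles are illustrative); a wall of sticky notes or a SaaS tool implements the same state transitions:

```go
package main

import "fmt"

// Board is a minimal in-memory Kanban board: an ordered set of lanes,
// each holding the titles of the tasks currently in that state.
type Board struct {
	laneOrder []string
	lanes     map[string][]string
}

// NewBoard builds a board with a team-specific lane configuration;
// extra lanes such as "in test" can be added without changing the logic.
func NewBoard(laneNames ...string) *Board {
	b := &Board{laneOrder: laneNames, lanes: make(map[string][]string)}
	for _, n := range laneNames {
		b.lanes[n] = nil
	}
	return b
}

// Add places a new task in the leftmost lane (the backlog).
func (b *Board) Add(task string) {
	first := b.laneOrder[0]
	b.lanes[first] = append(b.lanes[first], task)
}

// Move transitions a task between lanes, mirroring a sticky note being
// moved across the physical board.
func (b *Board) Move(task, from, to string) {
	src := b.lanes[from]
	for i, t := range src {
		if t == task {
			b.lanes[from] = append(src[:i], src[i+1:]...)
			b.lanes[to] = append(b.lanes[to], task)
			return
		}
	}
}

func main() {
	b := NewBoard("backlog", "doing", "in review", "done")
	b.Add("implement crawler")
	b.Add("write persistence layer")
	b.Move("implement crawler", "backlog", "doing")
	for _, lane := range b.laneOrder {
		fmt.Printf("%-10s %v\n", lane, b.lanes[lane])
	}
}
```

Printing the lanes in `laneOrder` reproduces the left-to-right reading of the board: work enters on the left and flows toward "done" on the right.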

I am sure that most of you will probably already be familiar with the physical implementation of a Kanban board: a dedicated spot on the office wall filled with colorful post-it notes. While local teams tend to enjoy having the board on a wall as it makes it quite easy to see what everyone is working on or to identify blockers just by walking by the board, this approach obviously cannot support partially or fully remote teams. For those use cases, several companies are offering the online, digital equivalent of a Kanban board that can be used instead.

DevOps

DevOps is the last software development model that we will be examining in this chapter. Nowadays, more and more organizations endeavor to scale out their systems by transitioning from monolithic to service-oriented architectures (SOA). The basic premise behind the DevOps model is that each engineering team owns the services they build. This is achieved by fusing development with operations, that is, the aspects involved in deploying, scaling, and monitoring services once they get deployed to production.

The DevOps model evolved in parallel with the other agile models and was heavily influenced by the principles put forward by the lean development model. While there is no recommended approach to implementing DevOps (one of the reasons why Google came up with SRE in the first place), DevOps advocates tend to gravitate toward two different models:

  • Culture, Automation, Measurement, and Sharing (CAMS)
  • The three ways model

The CAMS model

The CAMS acronym was originally coined by Damon Edwards and John Willis. Let's explore each one of these terms in a bit more detail.

As with other agile models, corporate culture is an integral part of DevOps methodology. To this end, Edwards and Willis recommend that engineering teams extend the use of practices such as Scrum and Kanban to manage both development and operations. Culture-wise, an extremely important piece of advice that Edwards and Willis offered is that each company must internally evolve its own culture and set of values that suit its unique set of needs instead of simply copying them over from other organizations because they just seem to be working in a particular context. The latter could lead to what is known as the Cargo Cult effect, which eventually creates a toxic work environment that can cause issues with employee retention.

The second tenet of the CAMS model is automation. As we discussed in a previous section, automation is all about eliminating potential human sources of errors when executing tedious, repetitive tasks. In the context of DevOps, this is usually accomplished by doing the following:

  • Deploying a CI/CD system to ensure that all the changes are thoroughly tested before they get pushed to production
  • Treating infrastructure as code and managing it as such, that is, storing it in a version control system (VCS), having engineers review and audit infrastructure changes, and finally deploying them via tools such as Chef (https://www.chef.io/), Puppet (https://puppet.com/), Ansible (https://www.ansible.com/), and Terraform (https://www.terraform.io/)

The letter M in CAMS stands for measurement. Being able to not only capture service operation metrics but also act on them offers two significant advantages to engineering teams. To begin with, the team can always be apprised of the health of the services they manage. When a service misbehaves, the metrics monitoring system will fire an alert and some of the team members will typically get paged. When this happens, having access to a rich set of metrics allows teams to quickly assess the situation and attempt to remedy any issue.

Of course, monitoring is not the only use case for measuring: services that are managed by DevOps teams are, in most cases, long-lived and therefore bound to evolve or expand over time; it stands to reason that teams will be expected to improve on and optimize the services they manage. High-level performance metrics help identify services with a high load that need to be scaled, while low-level performance metrics will indicate slow code paths that need to be optimized. In both cases, measuring can be used as a feedback loop to the development process to aid teams in deciding what to work on next.
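As a small sketch of the measurement idea, the following Go program uses the standard library's expvar package to maintain counters and derive an error rate. The counter names and the simulated traffic are illustrative; in a real service, importing expvar also exposes the counters as JSON on /debug/vars for a monitoring system to scrape:

```go
package main

import (
	"expvar"
	"fmt"
)

// Service-level counters published via the standard library's expvar
// package.
var (
	requestsTotal = expvar.NewInt("requests_total")
	errorsTotal   = expvar.NewInt("errors_total")
)

// errorRate is the kind of derived, high-level metric that an alerting
// rule or a capacity-planning dashboard would act on.
func errorRate() float64 {
	total := requestsTotal.Value()
	if total == 0 {
		return 0
	}
	return float64(errorsTotal.Value()) / float64(total)
}

func main() {
	// Simulate a burst of traffic with an occasional failure.
	for i := 0; i < 100; i++ {
		requestsTotal.Add(1)
		if i%25 == 0 {
			errorsTotal.Add(1)
		}
	}
	fmt.Printf("requests=%d errors=%d error_rate=%.2f\n",
		requestsTotal.Value(), errorsTotal.Value(), errorRate())
}
```

The raw counters answer "is the service healthy right now?", while derived values such as the error rate feed the longer-term feedback loop that tells the team what to optimize next.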

The last letter in the CAMS model stands for sharing. The key ideas here are as follows:

  • To promote visibility throughout the organization
  • To encourage and facilitate knowledge sharing across teams

Visibility is quite important for all stakeholders. First of all, it allows all the members of the organization to be constantly aware of what other teams are currently working on. Secondly, it offers engineers a clear perspective of how each team's progress is contributing to the long-term strategic goals of the organization. One way to achieve this is by making the team's Kanban board accessible to other teams in the organization.

The model inventors encourage teams to be transparent about their internal processes. By allowing information to flow freely across teams, information silos can be prevented. For instance, senior teams will eventually evolve their own streamlined deployment process. By making this knowledge available to other, less senior, teams, they can directly exploit the learnings of more seasoned teams without having to reinvent the wheel. Apart from this, teams will typically use a set of internal dashboards to monitor the services they manage. There is a definite benefit in making these public to other teams, especially ones that serve as upstream consumers for those services.

At this point, it is important to note that, in many cases, transparency extends beyond the bounds of the company. Lots of companies are making a subset of their ops metrics available to their customers by setting up status pages, while others go even further and publish detailed postmortems on outages.

The three ways model

The three ways model is based on the ideas of Gene Kim, Kevin Behr, and George Spafford [8], and other lean thinkers such as Michael Orzen [9]. The model distills the concept of DevOps into three primary principles, or ways:

  • Systems thinking and workflow optimization
  • Amplifying feedback loops
  • Culture of continuous experimentation and learning

Systems thinking implies that the development team takes a holistic approach to software: in addition to tackling software development, teams are also responsible for operating/managing the systems that the software gets deployed to and establishing baselines for not only the target system's behavior but also for the expected behavior of other systems that depend on it:

Figure 5: Thinking of development as an end-to-end system where work flows from the business to the customer/end user

The preceding diagram represents this approach as a unidirectional sequence of steps that the engineering team executes to deliver a working feature to the end user or customer in a way that does not cause any disruption to existing services. At this stage, the team's main focus is to optimize the end-to-end delivery process by identifying and removing any bottlenecks that hinder the flow of work between the various steps.

Under the first principle, teams attempt to reduce the number of defects that flow downstream. Nevertheless, defects do occasionally slip through. This is where the second principle comes into play. It introduces feedback loops that enable information to flow backward, as shown in the following diagram, that is, from right to left. By themselves, however, feedback loops are not enough; they must also serve as amplification points to ensure that the team members are forced to act on incoming information in a timely fashion. For example, an incoming alert (feedback loop) will trigger a person from the team who is on call to get paged (amplification) so as to resolve an issue that affects production:

Figure 6: Utilizing feedback loops to allow information to flow backward
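The feedback-plus-amplification pattern can be sketched in Go as an alert rule that pages whoever is on call once a measurement crosses a threshold. The rule name, threshold, and console-based pager are all hypothetical stand-ins for a real alerting stack:

```go
package main

import "fmt"

// AlertRule fires when an observed value crosses its threshold.
type AlertRule struct {
	Name      string
	Threshold float64
}

// Pager abstracts the amplification mechanism, for example a paging
// service that interrupts the on-call engineer.
type Pager interface{ Page(msg string) }

type consolePager struct{}

func (consolePager) Page(msg string) { fmt.Println("PAGING ON-CALL:", msg) }

// Evaluate turns a raw measurement (the feedback) into an action (the
// amplification) and reports whether the rule fired.
func (r AlertRule) Evaluate(value float64, p Pager) bool {
	if value > r.Threshold {
		p.Page(fmt.Sprintf("%s: %.2f exceeds threshold %.2f",
			r.Name, value, r.Threshold))
		return true
	}
	return false
}

func main() {
	rule := AlertRule{Name: "error_rate", Threshold: 0.05}
	pager := consolePager{}
	// Only the last observation crosses the threshold and triggers a page.
	for _, observed := range []float64{0.01, 0.03, 0.09} {
		rule.Evaluate(observed, pager)
	}
}
```

The interface boundary is the interesting design choice: the same rule can amplify through a console during development and through a real paging service in production, without the feedback logic changing.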

The final principle, and the one that most agile models are imbued with, has to do with fostering a company culture that allows people to pursue experiments and improvement ideas that may or may not pan out in the end as long as they share what they've learned with their colleagues. The same mindset also applies when dealing with incidents that have adverse effects on production systems. For instance, by holding blameless postmortems, the team members can outline the root causes of an outage in a way that doesn't put pressure on the peers whose actions caused the outage and, at the same time, disseminate the set of steps and knowledge that were acquired by resolving the issue.
