Embracing Microservices Design

Chapter 1: Setting Up Your Mindset for a Microservices Endeavor

Microservices is an architectural style that structures an application into multiple services that encapsulate business capabilities. These microservices are usually much smaller, both in terms of scope and functionality, compared to traditional services. They have logical and physical separation, and they communicate with each other via messages over a network to form an application.

Adopting the microservice architecture is a journey that requires a mindset that can embrace changes in culture, processes, and practices. In essence, organizations need to go through several changes to increase agility and adaptability to achieve their goals and deliver business value.

In this chapter, we will help you understand the philosophy of microservices and the various aspects that help organizations with their digital transformation. We will also learn about microservice design principles and microservice architecture components, along with their benefits and challenges. Finally, we will discuss the role of leadership and how to get started by adopting widely accepted practices in the industry.

The following topics will be covered in this chapter:

  • Philosophy of microservices
  • Microservice design principles
  • Building teams to deliver business value faster
  • Benefits of microservices
  • Challenges of microservices
  • Microservice architecture components
  • Reviewing leadership responsibilities
  • Defining core priorities for a business
  • Using the twelve-factor app methodology
  • Additional factors for modern cloud-native apps

This chapter aims to help you understand the key concepts of microservices and their potential for bringing change that can fuel business growth for organizations. In addition, we will discuss the role of leadership in shifting the mindset by building effective teams that embrace change to deliver business value faster. Finally, we will dive into the "Twelve-Factor App" methodology, which has helped many organizations in their successful adoption of microservices.

Philosophy of microservices

The digital revolution is disrupting every industry as organizations race to meet the unmet demands of their users and embrace digital transformation. Digital transformation allows businesses to adapt quickly to changing business conditions and create value for their end users. It is about empowering your employees, engaging your customers, transforming products, and optimizing operations. The essence of digital transformation lies in a growth mindset, where organizations invest in improving their internal capabilities, processes, and systems to bring change that drives business and societal outcomes. Many of these systems are due for a change that would enable organizations to innovate at a rapid pace and reimagine the future to deliver exciting experiences.

In the last few decades, most of these systems were built using a monolithic architecture, which is relatively difficult to change and maintain as it grows and becomes complex. Though service-oriented architecture (SOA) was a major improvement over the monolithic architecture, it has its own challenges: as the application grows, it becomes a distributed monolith, which is again difficult to maintain or extend. The focus of SOA is more on reusability and integration, where services constantly exchange messages in return for tasks to be executed. The messaging platform is the backbone of SOA and is responsible for service discovery, orchestration, routing, message transformation, message enrichment, security, and transaction management. The major downside of SOA is that information and knowledge of the business domain are scattered across the Enterprise Service Bus (ESB), which makes services difficult to change.

The microservice architecture is an evolution that addresses the pain points of other architectural styles to enable rapid change and scale. It also enables the continuous delivery of business capabilities. Concentrating on business capabilities and segregating them into individual services enables organizations to build systems that are modular, isolated, and loosely coupled. These characteristics play a crucial role in helping organizations build dedicated teams that are focused on engineering velocity. Teams can develop and deploy microservices independently, without any major collaboration required. However, if there are dependencies between services and the services haven't been modeled properly, teams end up collaborating extensively, which undermines isolation as a benefit. The microservice architecture also enables organizations to build services with autonomy, embracing change and lowering the risk of failure through service independence.

Microservices have changed the application development paradigm, and new fundamentals have emerged for building and operationalizing them successfully. Organizations need to start with the right mindset and build a culture of ownership across small teams to continuously deliver value to their end users.

Now that you have a basic understanding of the philosophy behind microservices, next, we will go through microservice design principles.

Microservice design principles

In the quest to build microservices, you will have to make several choices that give a different flavor to your microservice architecture and its evolution over time. Microservice design principles provide the guidelines for evaluating key decisions that affect the design of your microservice-based architecture. In principle, microservices support loose coupling and the modularity of services. Other principles can also govern the design of a microservice architecture, though their importance can vary. We will cover those principles in the following sections.

Single responsibility principle and domain-driven design

A microservice should be responsible for offering a single feature or a group of related features to deliver a business capability. The only reason a microservice interface should change is a change in the business capabilities offered by the microservice. This helps systems be designed around real-world domains and lets us visualize systems and architectures as a translation of real-world problems. Domain-driven design is an approach that helps with domain modeling and defining microservice boundaries, which helps us achieve modularity and reduce coupling.

Encapsulation and interface segregation

Each microservice owns its data, and the only way a service can communicate with other services is through well-defined interfaces. These interfaces are carefully designed with their clients in mind. Rather than exposing many microservice endpoints directly to client applications, a popular alternative is to introduce an API gateway, which will be explained later in the chapter. This technique is useful in delegating the communication responsibility to the API gateway and keeping microservices focused on delivering business capabilities.

Culture of autonomy, ownership, and shared governance

The microservice architecture allows business capabilities to be delivered by the different teams that own them. These teams can work independently without requiring much cross-team collaboration. A team doesn't need to be assigned a single business capability; instead, it may own a related set of capabilities that belong to a single domain. The microservice architecture flourishes when you give teams autonomy, since they can choose what they think is right to deliver the desired business capability. This doesn't mean that teams can do anything; rather, it gives them the freedom to make decisions under an umbrella of formally agreed principles. These principles are called shared governance, which provides consensus across the teams on how they want to address different cross-cutting concerns, or how far they want to go in trying different technologies. The team that builds a microservice owns it and is responsible for operationalizing it. Such cross-cutting concerns will be covered in detail in Chapter 7, Cross-Cutting Concerns.

Independently deployable

A single microservice should be independently deployable, allowing teams to roll out changes without affecting other microservices. As part of shared governance, teams should continue to look for technologies and practices that help them achieve better independence. This characteristic is extremely useful for operationalizing microservices at a large scale.

Culture of automation

Automation is an important concept that promotes the idea of finding opportunities for replacing manual steps with scripted programs to achieve consistency and reduce overhead. Continuous integration, continuous deployment, automated testing, and infrastructure automation are all different forms of automation that can help you reduce the overall overhead of managing microservices.

Designing for failures

Microservices are designed to be highly available and scalable. These services do fail, but they should be designed to recover fast. The idea is that the failure of one microservice should not affect other microservices, which helps avoid cascading failures and allows the service to recover as fast as possible to restore the required functionality. Designing services for failure requires a good understanding of user behavior and expectations. The goal is to keep the system responsive in the case of service unavailability, albeit with reduced functionality. Service failures are inevitable, so a pragmatic approach is needed: controlled experiments that introduce chaos to determine how the system behaves under different circumstances. This approach is classified as chaos engineering.
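
As a simple illustration of designing for failure, the following Python sketch wraps a call to a downstream service with a short timeout and a fallback response; the service URL, endpoint, and payload are hypothetical and only meant to show the pattern:

```python
import requests  # third-party HTTP client, assumed to be available

INVENTORY_URL = "http://inventory-service/api/stock"  # hypothetical downstream microservice


def get_stock(product_id: str) -> dict:
    """Call the inventory microservice, degrading gracefully if it is unavailable."""
    try:
        # A short timeout stops a slow dependency from tying up this service's resources.
        response = requests.get(f"{INVENTORY_URL}/{product_id}", timeout=2)
        response.raise_for_status()
        return response.json()
    except requests.RequestException:
        # The fallback keeps the caller responsive with reduced functionality
        # instead of letting the failure cascade further up the call chain.
        return {"product_id": product_id, "stock": None, "degraded": True}
```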

With chaos engineering, you experiment with specific areas of the system to identify their weaknesses. Chaos engineering is a practice where you intentionally introduce failures to find undiscovered system issues. This way, you can fix them before they occur unexpectedly and affect your business and users. This exercise also helps in understanding the risks and impact of turbulent conditions, incident response, gaps in observability, and the team's ability to respond to incidents. Many tools can be used to perform chaos engineering, such as Litmus, Gremlin, and a few others.

Observability

Observability is a capability that you build as part of a microservice architecture to allow you to identify and reason about the internal state of your system. It also helps with monitoring, debugging, diagnosing, and troubleshooting microservices and their core components in a production environment. In a microservice architecture, you collect logs, metrics, and traces to help teams analyze service behavior. Another important aspect of observability is distributed tracing, which helps in understanding the flow of events across different microservices. In practice, metrics are more important than other forms of monitoring. Many organizations invest sizable resources in metrics, ensuring that they are available in real time, and then use them as the main tool for troubleshooting production issues.
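
As a minimal sketch of what an observable microservice emits, the following Python snippet writes structured log events with a correlation (trace) ID and a rudimentary request counter; the field names, service name, and counter are illustrative assumptions rather than a prescribed format:

```python
import json
import logging
import sys
import time
import uuid

# Logs go to stdout as a stream of structured events, one JSON object per line.
logging.basicConfig(stream=sys.stdout, level=logging.INFO, format="%(message)s")
logger = logging.getLogger("order-service")  # hypothetical service name

requests_total = 0  # naive in-process counter; a real service would use a metrics library


def handle_request(order_id: str, trace_id: str = "") -> None:
    global requests_total
    trace_id = trace_id or str(uuid.uuid4())  # propagate the caller's trace ID or start a new one
    started = time.perf_counter()
    # ... business logic would run here ...
    requests_total += 1
    logger.info(json.dumps({
        "event": "order_processed",
        "order_id": order_id,
        "trace_id": trace_id,  # lets a tracing backend stitch calls across services together
        "duration_ms": round((time.perf_counter() - started) * 1000, 2),
        "requests_total": requests_total,
    }))
```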

In the next section, we will uncover some core fundamentals of microservices to build teams that are independent and autonomous for bringing agility.

Building teams to deliver business value faster

The microservice architecture encourages the idea of building small, focused, and cross-functional teams to deliver business capabilities. This requires a shift in mindset, led by team restructuring. In 1967, Melvin E. Conway introduced the idea:

"Any organization that designs a system will inevitably produce a design whose structure is a copy of the organization's communication structure."

- Conway's Law

Therefore, if an organization needs to build software systems that are loosely coupled, then they need to make sure that they build teams that have a minimum dependency to allow them to function with autonomy.

A team is defined as a cohesive and long-lived group of individuals working together on the same set of business problems, owning a problem space/business area, and having sufficient capacity to build and operate it. In essence, the team needs to fully understand the piece of software they are building and operating. This helps them build confidence in the software, optimize lead time, and deploy more frequently to bring agility. In practice, the team's size varies depending on the size and complexity of the service. Adding a new member to a team adds new connections, resulting in communication overhead, which affects productivity. Organizing teams based on features and components is considered a standard approach to facilitate collaboration and bring cross-functional skills together. Recently, Matthew Skelton and Manuel Pais introduced a structure that lowers the cognitive load of agile teams by presenting fundamental team topologies in their book Team Topologies. There are four fundamental team topologies, as discussed here:

  • Stream-aligned team: The stream-aligned team is a cross-functional team that's aligned to the value stream of the business and is responsible for delivering customer value with minimal dependencies. They know their customers and apply design thinking to understand business requirements to address customer needs. They support their service in production and are focused on continuous improvement and quality of service. The stream-aligned team is long-lived and has complete ownership of the service. They are responsible for building, operating, diagnosing, and supporting these services during their life cycles.
  • Enabling team: The role of the enabling team is to provide support and guidance to stream-aligned teams in adopting tools, technologies, and practices that can help them perform better. Enabling teams are formed to identify areas of improvement and promote the learning mindset by keeping the wider organization aware of new technologies and best practices. Depending on their purpose, enabling teams can be short-lived or long-lived in an organization. However, their engagement with stream-aligned teams is only for a short period. Once the required capability is achieved, the enabling teams move to support other teams.
  • Complicated subsystem team: Complicated subsystem teams are formed to address the requirements of complex subsystems, which could otherwise add significant cognitive load to stream-aligned teams and affect their ability to stay focused on the business domain. They are responsible for developing and operationalizing these subsystems and ensuring their availability to the value streams that consume them.
  • Platform team: The platform team is responsible for providing infrastructure support to enable stream-aligned teams to deliver a continuous stream of value. They ensure that the capabilities offered by the platform are compelling and provide a good developer and operator experience for consuming and diagnosing these platform services. These services should be accessible via APIs, thus enabling self-service and automation. The platform team is aligned to the product life cycle and guarantees that the stream-aligned teams are getting the right support on time to perform their duties. These services are usually bigger and supported by larger teams.

When you adopt microservices with DevOps, individuals are responsible for handling both the development and operations of a microservice. These microservices are built with the guidance provided by product managers and product owners. In the real world, teams start small; as they grow and add complexity, they are subdivided into multiple teams to ensure autonomy and independence. For example, an application may need multiple channels to interact with its customers, requiring teams with iOS and Android capabilities, or it may need notification functionality that warrants a separate team altogether. In these teams, individuals have a shared responsibility, where design decisions are usually led by senior engineers and discussed with the team before implementation.

Now, let's learn about the advantages that microservices bring.

Benefits of microservices

One of the key benefits of microservices is the "shift of complexity." We shift the complexity of monolithic applications into multiple microservices and further reduce the individual complexity of each microservice with bounded contexts, but we also add complexity in operationalizing them. This shift in complexity is not bad, as it allows us to evolve and standardize automation practices to manage our distributed systems well. The other benefits of microservices are discussed next.

Agility

The microservice architecture promotes experimentation, which allows teams to deliver value faster and helps organizations create a competitive edge. Teams iterate over a small piece of functionality to see how it affects the business outcome. They can then decide whether to continue with the changes or discard the idea and try new ones. Teams can develop, deploy, maintain, and retire microservices independently, which helps them become more productive and agile. Microservices give teams the autonomy to try out new things in isolation.

Maintainability

The microservice architecture promotes building independent, fine-grained, and self-contained services. This helps developers build simpler services that are easy to understand and maintain throughout their life cycle. Every microservice has its own data and code base, which minimizes dependencies and increases maintainability.

Scalability

The microservice architecture allows teams to independently scale microservices based on demand and forecast, without affecting performance or adding significant cost compared to monolithic applications. The microservice architecture also offers a greater degree of parallelism, which helps maintain consistent throughput under increasing load.

Time to market

The microservice architecture is pluggable, so microservices and their components can be replaced. This helps teams focus on building new microservices to add or replace business capabilities. Teams no longer wait for changes from different teams to be incorporated before releasing new business capabilities. Most of the cross-cutting concerns are handled separately, which helps teams achieve a faster time to market. These cross-cutting concerns will be covered in detail in Chapter 7, Cross-Cutting Concerns.

Technology diversity

The microservice architecture allows teams to select the right tools and technologies to build microservices rather than locking themselves in with decisions they've made in the past. Technology diversity also helps teams with experimentation and innovation.

Increased reliability

The microservice architecture is distributed in nature, where individual microservices can be deployed multiple times across the infrastructure to build redundancy. There are two important characteristics of the microservice architecture that contribute to the increased reliability of the overall system:

  • Isolating failures by running each microservice in its own boundaries
  • Tolerating failures by designing microservices to gracefully address the failures of other microservices

So far, we've learned about the benefits of microservices. However, implementing microservices does bring a few challenges that need to be considered. We will look at these in the next section.

Challenges of microservices

In this section, we will discuss the different challenges you may face as you embark on your microservices journey.

Organizational culture

One of the major hurdles in moving to microservices is the organizational culture, where teams were initially built around technical capabilities rather than the delivery of business capabilities. Overcoming it requires evolving the organization, restructuring teams, and changing legacy practices.

Adoption of DevOps practices

DevOps provides a set of practices that combines development and operations teams to deliver value. The basic theme is to shorten the development life cycle and provide continuous delivery with high software quality. Adopting DevOps practices is important for any organization to bring changes to market faster, increase deployment frequency, lower the change failure rate, and shorten both the mean time to recover and the lead time for changes that deliver value to end users. Various products support implementing DevOps practices; Azure DevOps is one of the most popular tools and provides an end-to-end DevOps toolchain. Also, ensure that the necessary changes are made to change management processes, including change control and approval processes, so that they align with DevOps practices.

Architectural and operational complexity

The microservice architecture is distributed in nature, which presents several challenges compared to a monolithic architecture. There are more moving parts, and more expertise is required for teams to manage them. A few of those challenges are as follows:

  • Unreliable communication across service boundaries: Microservices are heavily dependent on the underlying network. Therefore, the network infrastructure has to be designed properly and governed to address the needs of communication, as well as to protect the infrastructure from unwelcome events.
  • Network congestion and latency: Network congestion is a temporary state in which a network doesn't have enough bandwidth to handle the traffic flowing through it. Due to congestion, different workloads may experience delayed responses or partial failures, resulting in high latency.
  • Data integrity: In a microservice architecture, a single business transaction may span multiple microservices. If any service fails due to a transient or network failure, part of the transaction may fail with it, creating data inconsistencies across different microservices and resulting in data integrity issues.

Service orchestration and choreography

There is no single way of specifying how different services communicate with each other and how the overall system works. Orchestration introduces a single point of failure by controlling the interaction of different services, while choreography promotes the idea of smart endpoints and dumb pipes, with the potential downside of introducing cyclic dependencies. In choreography, microservices publish and consume messages from the message broker, which helps the overall architecture be more scalable and fault-tolerant.

Observability

Managing, monitoring, and controlling microservices at scale is a difficult problem. You need to have good observability in place to understand the interaction and behavior of different parts of the system. You need to collect metrics, logs, and call stacks, raise alerts, and implement distributed tracing to help you reason about the system.

End-to-end testing

End-to-end testing of microservices is more challenging, as it requires reliable and effective communication to bring all the teams on board. End-to-end testing can also hamper your release frequency. Setting up a test environment is difficult and requires coordination across teams.

Double mortgage period

If you are migrating from a monolithic application to microservices, then you need to live in a hybrid world for a while to support both the legacy and the new application. This is not easy and requires careful planning when it comes to fixing bugs and adding new features, thus affecting agility and productivity.

Platform investment

Organizations need to invest in platform teams that are responsible for building platform services and addressing cross-cutting concerns. These cross-cutting concerns will be covered in detail in Chapter 7, Cross-Cutting Concerns. These teams also build tools and frameworks to help other teams get the job done.

The previous sections have given us a good overview of microservices, along with their design principles, advantages, and challenges. We'll now have a look at the various architecture components of microservices and their interaction when it comes to building scalable and robust applications.

Microservice architecture components

Microservices are a loosely coupled set of services that cooperate to achieve a common goal. Besides microservices, there are other components that play a vital role in a microservice architecture. The set of components that help establish the foundation of microservices are shown in the following diagram:

Figure 1.1 – A microservice architecture with its components

The preceding diagram demonstrates the interaction of the different microservice architecture components. Microservices are hosted on an orchestration platform, which is responsible for self-healing and the high availability of microservices. These microservices then communicate with each other using orchestration or choreography patterns: in orchestration, a microservice is responsible for invoking other microservices' interfaces, while in choreography, messages are exchanged using an event bus. A client can consume microservices via an API gateway, where messages are relayed to a specific microservice for actual processing. Each team has full autonomy in choosing the right storage mechanism for their microservices, as depicted in the preceding diagram, where different microservices use a SQL database, an in-memory database, and a NoSQL database for their data storage needs.

We will discuss each component in the following sections.

Messages

Messages contain the information that microservices need to communicate with each other, and they help keep the microservice architecture loosely coupled. Messages are further classified as commands or events. Commands usually carry more information for the recipient, and the sender expects to be notified of the outcome. Events are lightweight and are mainly used as a notification mechanism, without any expectations of the consumer. These messages can be exchanged synchronously or asynchronously.
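
The distinction can be made explicit in code. The following Python sketch models a command and an event as simple data classes; the message names and fields are illustrative assumptions, not a required schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class ReserveStockCommand:
    """A command: addressed to a specific recipient, which is expected to act and report back."""
    order_id: str
    product_id: str
    quantity: int
    reply_to: str  # where the recipient should send the outcome


@dataclass
class StockReservedEvent:
    """An event: a lightweight notification that something has already happened."""
    order_id: str
    product_id: str
    occurred_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )
```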

Persistence and state management

Data handling is an important aspect of microservices. Most of the microservices need to persist data/state. In a microservice architecture, data is decentralized, where every microservice has the responsibility and autonomy of managing its own data. Teams have full autonomy in selecting the right database technology for their microservices to achieve the desired business outcome.

Orchestration

An orchestrator is responsible for placing microservice instances on compute infrastructure. The orchestrator is also responsible for identifying failures and scaling microservices to maintain the high availability and resiliency of the overall architecture.

Service discovery

Service discovery is an integral part of the microservice architecture and helps in locating other microservices to enable collaboration. At any given time, a microservice may have multiple instances running in a production environment. These instances are dynamically provisioned to handle updates, failures, and scaling demands. The service discovery component plays an important role in keeping the microservice architecture highly discoverable by allowing new instances to be registered and become available for service.
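
Conceptually, service discovery comes down to registering instances as they become healthy and resolving one of them at call time. The following Python sketch uses an in-memory registry and naive random selection purely for illustration; real deployments would rely on the orchestrator's registry or a dedicated discovery service, and the service names and addresses here are hypothetical:

```python
import random

# In-memory stand-in for a service registry.
registry: dict[str, list[str]] = {}


def register(service_name: str, address: str) -> None:
    """Called by a new instance (or its orchestrator) once it becomes healthy."""
    registry.setdefault(service_name, []).append(address)


def resolve(service_name: str) -> str:
    """Pick one of the currently registered instances (naive client-side load balancing)."""
    instances = registry.get(service_name)
    if not instances:
        raise LookupError(f"no healthy instances of {service_name}")
    return random.choice(instances)


register("payment-service", "10.0.0.5:8080")
register("payment-service", "10.0.0.6:8080")
print(resolve("payment-service"))  # one of the two registered addresses
```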

API gateway

The API gateway acts as a reverse proxy, responsible for routing requests to the appropriate backend services to expose them in a controlled manner to the outside world for consumption. It also provides robust management and security features that are useful in protecting backend services from malicious actors. The API gateway will be covered in detail in Chapter 7, Cross-Cutting Concerns.

Now, let's explore the role of leadership while initiating a microservices endeavor.

Reviewing leadership responsibilities

In this section, we will understand the responsibilities associated with different roles in an organization looking to adopt the culture of continuous delivery of value.

What business/technology leaders must know

Any organization's business leaders must define and communicate the vision of the organization and focus on the necessary grassroots transformation. They must identify growth opportunities, market trends, and competition, and address potential risks to create an environment that fuels innovation by bringing new business models to life. Innovation that's driven by a feedback loop creates a culture where new ideas are heard, designed, iterated, and refined. Leaders should focus on value creation through active learning, gaining insights, and experimentation. As business values change over time, business priorities change with them; these priorities should be clearly defined and communicated across the organization so they can be used as the guiding principle for future decisions.

Technology leaders are responsible for articulating the technology strategy and driving technology initiatives across the organization. They are also responsible for building capabilities by investing in people, processes, products, services, and platforms, and by acquiring the business acumen to deliver a competitive edge in a cost-effective manner for customer delight. Technology leaders play an important role in leading the evolution of business systems, understanding the challenges, and building capabilities to continuously innovate. They should also look outside their organization and find opportunities to collaborate with other teams to ideate and innovate. Furthermore, they ensure service quality by implementing process improvement models with the help of widely adopted frameworks, helping deliver high-quality services to the vast majority of customers. CMMI, GEIT, and ITIL are a few of the frameworks that can help in adopting practices around service delivery, development, governance, and operation. The adoption of these frameworks and their practices varies widely in the industry. Smaller organizations can start small and focus on a few of the relevant practices, while enterprises that want to achieve a certain level of maturity can explore the entire model.

What architects must know

The role of an architect is to understand changing business drivers and technologies and their impact on software architecture. Architects work closely with business stakeholders to identify new requirements and formulate plans to accommodate those changes to drive business value. As change is the only constant, an architect should embrace architectural modularity so that software systems can evolve in isolation. They need to align architecture decisions (trade-offs) with the business priorities and understand their impact on business systems. The architect should demonstrate leadership, mentorship, and coaching across teams to facilitate change. They have the responsibility of enabling software teams by providing learning opportunities to acquire the skills needed to deliver effectively. These learning opportunities are not limited to technology training and certifications; special emphasis should also be placed on understanding business systems, business domains, and change management. In agile teams, the role of an architect is usually played by senior engineers or development managers.

In practice, it's essential to understand both the business domain and data. These architects should collaborate with business users to understand the business outcomes and emphasize domain-driven design. They should also analyze how different parts of the application are consuming different datasets, to remodel them as microservices. Understanding the usage patterns of the data allows us to either build or choose the right services that support business needs.

The role of the product manager, product owner, and scrum master

Product managers are responsible for crafting the vision of a product by understanding customer needs, business objectives, and the market. They sit at the intersection of business, technology, and user experience, while product owners work closely with the stakeholders to curate detailed business requirements for the product or feature. Product owners are responsible for translating the product manager's vision into actionable items to help cross-functional teams build the right product. The scrum master is a coach who acts as a mediator between the development team and the product owner to ensure that the development team is working in alignment with the product backlog. They foster an environment of high performance and continuous improvement across the team.

So far, we have learned about the role of leadership in shaping the culture to adopt change. Next, we will discuss the importance of setting priorities before starting the microservices journey. Leadership should keep revisiting these priorities to ensure their alignment with business objectives.

Defining core priorities for a business

Since the microservice architecture is gaining momentum, more and more organizations have started thinking about adopting a new way of building applications. These organizations are setting themselves up for a journey. Startups are challenged with limited resources and deciding what to build in-house and what to outsource. Large organizations are looking to adopt new ways to deliver value faster to their end customers. They are building new applications and also looking to transform their legacy applications into microservices to gain the benefits of being more nimble and agile. As they progress through their journey, they will need to make decisions and deal with lots of trade-offs.

To start, it's important to outline the charter based on your business priorities. If the focus is on building new features and increasing team velocity, then innovation should be prioritized over reliability and efficiency. The following diagram depicts different dimensions that need to be prioritized based on business priorities:

Figure 1.2 – Different dimensions of business priorities

If providing a reliable service to your customer is your primary goal, then reliability should be prioritized over innovation and efficiency. If efficiency is more important, then you must focus on making the right changes to become an efficient organization. By doing so, innovation and reliability may need to be addressed later in the value chain. For example, being more efficient enables experimentation at a low cost, which allows teams to try out new things more often.

Technology leaders need to make sure that their teams are ready: aware of the cultural change, the new ways of communicating, and the new tools and technologies to learn; given autonomy in making decisions about their service domains; and open to collaboration to ensure governance. Technology leaders should bring clarity by defining and communicating the right architecture to the teams.

As an architect, you need to make sure that you are capturing all the decisions that are mutually agreed upon, their benefits, and their trade-offs. You are also responsible for documenting and communicating the implications of these decisions. Documenting these decisions will help you look back and find reasons for making those decisions. This will allow you to make more informed decisions in the future. Identifying good candidates for microservices and modeling them around business capabilities is crucial. You should start with services that are not critical to the business and evolve your architecture incrementally.

Building a new application or transforming a legacy application into microservices is a crucial decision for an organization. It's unlikely that you can lay out a detailed plan for such initiatives upfront, but the new services should be designed to cater to both current and emerging business requirements. However, understanding the current system can help in identifying areas that can deliver the most impact. Another important aspect is observing the usage patterns of different parts of the application to understand how clients and different parts of the application interact with each other. For many organizations, modernizing legacy applications and adding new features to address customer needs are competing priorities. There are mainly three approaches that teams adopt as they start converting those applications into microservices:

  • Replacing existing functionality with new microservices that are being built using new tools and frameworks to avoid any technical debt. This requires a huge investment upfront to fully understand the legacy system. It's usually recommended for small legacy systems or a module in a large legacy application that offers limited functionality.
  • Extracting an existing functionality that exists in different modules as microservices. This approach is recommended for legacy applications that are well maintained and structured throughout their life cycle.
  • As an application continues to grow, it's unlikely that the initial design principles have been followed throughout its lifespan. In such cases, a balanced approach should be adopted that allows teams to refactor the old application first. This helps them clearly define module boundaries, which can later serve as seams for extracting code into microservices.

The microservice architecture is an implementation of distributed computing, where different microservices make use of parallel compute units to achieve resiliency and scale. Over the last decade, cloud providers have been enabling organizations to reap the benefits of distributed computing by eliminating the need for heavy investment upfront. High-performing organizations are focusing on embracing cloud-native development to accelerate their microservices journey.

Scalability and availability are no longer an afterthought for organizations building enterprise-grade applications. Cloud-native technologies are playing an instrumental role in enabling organizations to incorporate these characteristics while spanning different cloud environments (public, private, or hybrid). Some of the technologies contributing to this development are DevOps, microservices, service meshes, and Infrastructure as Code (IaC).

Cloud-native applications are built to make use of cloud services to provide a consistent experience for developing and operationalizing different environments. This approach has allowed organizations to consume Platform-as-a-Service (PaaS) compute infrastructure as a utility and outsource the infrastructure management to cloud vendors. These services are responsible for provisioning the underlying infrastructure, its availability, scalability, and other value-added services. One huge drawback of directly using cloud services without any intermediate abstraction (in the form of a library or framework) is that it makes it harder for organizations to move to different solutions or different cloud providers. Dapr, which stands for Distributed Application Runtime, uses a sidecar pattern to provide such abstraction between microservices and cloud services. Dapr will be covered in detail in Chapter 3, Microservices Architecture Pitfalls.

The cloud-native approach focuses on adopting four major practices:

  • DevOps is the union of people, processes, and products to enable continuous delivery of value to our end users (as defined by Donovan Brown).
  • Continuous delivery is a software development discipline where you build software in such a way that the software can be released to production at any time (as defined by Martin Fowler).
  • Microservices is an architecture style that allows applications to be built as a set of loosely coupled, fine-grained services. These services have logical and physical separation, and they communicate with each other over a network to deliver a business outcome.
  • Containers or containerization is a new way of packaging and deploying applications. The container technology enables packaging applications and their dependencies together to minimize discrepancies across environments.

Software methodology plays an important role in building applications that follow best practices for delivering software. The twelve-factor app is a widely adopted methodology for building microservices. Let's look closely at the various factors that are essential to understand before starting this journey.

Using the twelve-factor app methodology

The twelve-factor app methodology provides guidelines for building scalable, maintainable, and portable applications by adopting key characteristics such as immutability, ephemerality, declarative configuration, and automation. Incorporating these characteristics and avoiding common anti-patterns will help us build loosely coupled and self-contained microservices. Implementing these guidelines will help us build cloud-native applications that are independently deployable and scalable. In most cases, failed attempts at creating microservices are not due to complex design or code flaws; rather, the fundamentals were set wrong from the start by ignoring widely accepted methodologies. The rest of this section will focus on the 12 factors in light of microservices to help you learn and adopt the principles so that you can implement them successfully.

Code base

The twelve-factor app methodology emphasizes that every application has a single code base that is tracked in version control. The application code base may have multiple branches, but you should avoid forking these branches into different repositories. In the context of microservices, each microservice represents an application. These microservices may have different versions deployed across different environments (development, staging, or production) from the same code base. A violation of this principle is keeping multiple applications in a single repository, either to facilitate code sharing or to allow commits that span multiple applications; this reduces our ability to decouple them in the long run. A better approach is to refactor and isolate the shared piece of code as a separate library or a microservice. The following diagram shows the collaboration of different developers on a single code base for delivering microservices:

Figure 1.3 – Different developers working together on a single code base

In the previous diagram, a microservice team is working on a single code base to make changes. Once these changes are merged, they are built and released for deployment to different environments (staging and production). In practice, staging may be running a different version than production, although both releases can be tracked from the same version control.

Dependencies

In a twelve-factor app, dependencies should be isolated and explicitly declared via a dependency declaration manifest. The application should not depend on the host to provide any of its dependencies. The application and all its dependencies are carried together with every deployment. For example, a common way of declaring dependencies is package.json for Node.js applications, pom.xml for Java applications, and .csproj for .NET Core. For containerized environments, you can bundle the application with its dependencies using Docker for deployment across environments. The following diagram depicts how application dependencies are managed outside the application's repository and included later as part of container packaging:

Figure 1.4 – Application dependency isolation and its inclusion via a Docker file

Containers help in deploying applications to different environments without the need to worry about installing application dependencies. npm is a package manager that's responsible for managing packages that are local dependencies for a project. These packages are then included as part of container packaging to build container images.

Config

A single code base allows you to deploy your application consistently across different environments. Any application configuration that varies between environments should be managed with environment variables rather than saved as constants in the code base. This approach provides the flexibility to scale an application with different configurations as it grows. Connection strings, API keys, tokens, external service URLs, hostnames, IP addresses, and ports are good candidates for config. Defining config files or managing configs by grouping them per environment (development, staging, or production) is error-prone and should be avoided, as is hard-coding configuration values as constants in your code base. For example, if you are deploying your application to Azure App Service, you can specify configuration values in the Azure App Service instance rather than inside the application's configuration file. The following diagram demonstrates deploying a container image to different environments with different configs:

Figure 1.5 – Application deployed to different environments with different configs

A container image represents the template of a microservice and is composed of application code, binaries, and dependencies. The container image is then deployed to the production and staging environments, along with their configurations, to make sure that the same application is running across all environments.
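
A minimal Python sketch of reading such configuration from environment variables instead of hard-coded constants; the variable names are examples rather than a required convention:

```python
import os

# These values differ per environment (development, staging, production) and are injected
# by the platform, for example as App Service settings or container environment variables.
DATABASE_URL = os.environ["DATABASE_URL"]            # required: fail fast if it is missing
SERVICE_BUS_CONNECTION = os.environ["SERVICE_BUS_CONNECTION"]
LOG_LEVEL = os.getenv("LOG_LEVEL", "INFO")           # optional, with a sensible default
```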

Backing service

A backing service is a service dependency that an application consumes over a network for its normal operation. These services should be treated as attached resources, and local services and remote third-party services should be treated the same. Examples include datastores (such as Azure SQL or Cosmos DB), messaging systems (Azure Service Bus or Kafka), and caching systems (Azure Redis Cache). These services can be swapped by changing the URL or locator/credentials in the config, without the need to make any changes to the application's code base. The following diagram illustrates how different backing services can be associated with a microservice:

Figure 1.6 – How an application can access and replace backing services by changing access URLs

This diagram depicts a running instance of a microservice hosted inside a container. The microservice consumes different backing services whose locations are externalized as parameters, which makes it easy to replace a backing service without changing the code.
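
Because a backing service is located purely through configuration, swapping one for another is a config change rather than a code change. A hedged Python sketch, assuming a Redis-compatible cache whose URL is supplied by the environment:

```python
import os

import redis  # third-party client, assumed to be installed


def get_cache() -> redis.Redis:
    # Pointing CACHE_URL at a local container, a staging instance, or a managed
    # cloud cache attaches a different backing service without touching the code.
    return redis.Redis.from_url(os.getenv("CACHE_URL", "redis://localhost:6379/0"))
```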

Build, release, and run

The deployment process for each application should be executed in three discrete stages (build, release, and run), as follows:

  • The build stage compiles a particular version of the code base and its assets, and then fetches vendor dependencies to produce build artifacts.
  • The release stage combines the build artifacts with the config to produce a release for an environment.
  • The run stage runs the release in an executing environment by provisioning processes, containers, or services.

As a general practice, it's recommended to build once and deploy the same build artifact to multiple environments. The following diagram demonstrates the process of building and releasing microservices that are ready to be run in different environments:

Figure 1.7 – Creating a release for deployment on an environment

Processes

A process is the unit of execution that hosts the application. These processes are stateless and share nothing with other processes or hosts. A twelve-factor app uses stateful backing services, such as datastores, to persist data so that it remains available across processes and in the case of process failure. Sticky sessions, or storing data in memory or on disk for reuse by subsequent requests, are an anti-pattern. Session state should be externalized to a database, cache, or another backing service. Keeping state on a particular host ties processes to individual hosts and violates the scalability guidelines of the twelve-factor app methodology.

An example of a stateless process and its stateful backing services, such as Azure Cosmos DB and Azure Redis Cache, is shown in Figure 1.6.
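
The following Python sketch contrasts the anti-pattern with the twelve-factor approach by keeping session state in an external store instead of process memory; the store URL and expiry are illustrative assumptions:

```python
import json
import os

import redis  # third-party client, assumed to be installed

# Anti-pattern: state held in the process is lost when the process is replaced
# and is invisible to other instances of the same microservice.
# sessions: dict[str, dict] = {}

# Twelve-factor style: session state lives in a stateful backing service.
store = redis.Redis.from_url(os.getenv("SESSION_STORE_URL", "redis://localhost:6379/1"))


def save_session(session_id: str, data: dict) -> None:
    store.set(session_id, json.dumps(data), ex=1800)  # 30-minute expiry


def load_session(session_id: str) -> dict | None:
    raw = store.get(session_id)
    return json.loads(raw) if raw else None
```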

Port binding

Traditionally, web applications are deployed inside web servers such as IIS, Tomcat, and httpd to expose functionality to end users. A twelve-factor web app is a self-contained application that doesn't rely on running instances of a web server. The app should include web server libraries as dependencies that are packaged as build artifacts for deployment. The twelve-factor web app exposes HTTP as a service binding on a port for consumption. The port should be provided by the environment to the application. This is true for any server that hosts applications and any protocol that supports port binding. Port binding enables different apps to collaborate by referring to each other with URLs.
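
A minimal, self-contained Python example of port binding: the app brings its own HTTP server and reads the port from the environment; the PORT variable and the response body are placeholders:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer


class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello from a self-contained service\n")


if __name__ == "__main__":
    # The execution environment supplies the port; the app brings its own HTTP server
    # instead of relying on an externally managed web server.
    port = int(os.getenv("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()
```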

Concurrency

The loosely coupled, self-contained, and share-nothing characteristics of the twelve-factor app methodology allow it to scale horizontally. Horizontal scaling means adding more servers and spawning more processes, achieving scalability with little effort. The microservice architecture promotes building small, single-purpose, and diverse services, which provides the flexibility required to scale these services independently and achieve better density. Various managed services on Azure allow you to configure auto-scaling based on a variety of metrics. The following diagram demonstrates the scalability patterns of different microservices:

Figure 1.8 – Scalability of different microservices

At any given time, a different number of instances of each microservice is running to meet user demand. The orchestration platform is responsible for scaling these microservices to support different consumption patterns, which helps with running microservices in an optimized environment.

Disposability

A twelve-factor app should start up quickly, shut down gracefully, and be resilient to failure. A quick startup time helps with spawning new processes quickly to address spikes in demand. It also helps with moving processes across hosts and replacing failed processes with new ones. A graceful shutdown should be implemented by handling SIGTERM, allowing a process to stop listening for new requests, finish in-flight work, and release any resources before exiting. In the case of long-running processes, it's important to make sure that operations are idempotent and can be placed back on the queue for processing. Failures are bound to happen; what matters is how you react to them. Both graceful shutdowns and quick startups are important aspects of achieving resilience.
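
A small Python sketch of a disposable worker process that handles SIGTERM; the polling loop and the queue it would drain are hypothetical placeholders:

```python
import signal
import sys
import time

shutting_down = False


def handle_sigterm(signum, frame):
    """Ask the worker to stop taking new work; in-flight work is finished first."""
    global shutting_down
    shutting_down = True


signal.signal(signal.SIGTERM, handle_sigterm)

while not shutting_down:
    # ... pull a message from the queue and process it idempotently ...
    time.sleep(1)

# Release connections, flush buffers, and requeue or acknowledge unfinished messages here.
sys.exit(0)
```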

Dev/prod parity

"But it works on my machine."

– Every developer

The twelve-factor app methodology advocates having parity between development, production, and any environment in between. Dev/prod parity aligns with the shift-left strategy, where issues are identified at an earlier stage to allow more time to resolve them with the least impact. It also helps with debugging and reproducing issues in the development and staging environments. A common practice is to use different database versions in the development and production environments; this is a violation of parity. In a containerized environment, make sure that you are using the same container image across environments to maintain parity. It's important to be able to recreate the production environment with the least amount of effort to achieve parity, reproducibility, and disposability. DevOps practices are crucial for achieving dev/prod parity, where the dev and ops teams work together to release features with continuous deployment. They use the same set of tools in all environments to observe behavior and find issues.

Logs

The twelve-factor app methodology treats logs as event streams. The app never concerns itself with how these events are routed, processed, or stored. In a traditional monolithic application, logging libraries are used to produce log files. Cloud-native development differs from traditional development in that the application is built as a set of microservices, and a single request may be processed by multiple microservices before it produces a meaningful result. Monitoring and tracking this collaboration is important for providing observability of the overall system. The distributed nature of the system makes logging a daunting task that should be handled in a unified manner to support consistency and scalability. A twelve-factor app (microservice) doesn't manage log routing itself; instead, it writes its event stream to standard output and relies on a logging agent running as a separate process to stream the logs onward. For example, these events can be sent to Azure Event Hubs for long-term archival and automation, or to Application Insights for monitoring and raising alerts.

Admin processes

Design admin/management processes with the same rigor that you use to develop and run the rest of the application. They should run in the same environment, alongside the other processes, while following the same practices discussed earlier as part of the twelve-factor app methodology. Admin processes should be maintained inside the code base and released to environments with their config using the build, release, and run practice. Database migration, running ad hoc scripts, and purging data are all examples of admin processes.

The twelve-factor app is one of the most widely adopted methodologies for building cloud-native applications, and it aims to help architects and developers follow the procedures that are aligned with best practices and standards, especially while building microservices. In the next section, we will explore a few additional factors that are essential for building cloud-native applications.

Additional factors for modern cloud-native apps

In Beyond the Twelve-Factor App, Kevin Hoffman discussed three additional factors that are important for building cloud-native applications in the modern era. Let's take a look.

API first

In the API first approach, applications are designed with the intent for their APIs to be consistent and reusable. An API description language helps provide a contract that dictates the behavior of an API. For example, Swagger provides tooling for developing APIs with the OpenAPI Specification.
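
As a hedged illustration of the API first mindset in Python, the sketch below uses FastAPI, which publishes an OpenAPI description of the declared contract; the service name, route, and model are hypothetical:

```python
from fastapi import FastAPI  # third-party framework, assumed to be installed
from pydantic import BaseModel

app = FastAPI(title="Order Service", version="1.0.0")  # hypothetical service


class Order(BaseModel):
    order_id: str
    total: float


@app.get("/orders/{order_id}", response_model=Order)
def get_order(order_id: str) -> Order:
    # The framework serves the OpenAPI document for this contract at /openapi.json,
    # so consumers can generate clients before the implementation is finished.
    return Order(order_id=order_id, total=0.0)
```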

Telemetry

In a cloud-native environment, teams have less control over the execution environment. They are dependent on the capabilities provided by the cloud vendor. Installing debuggers and inspectors is not an option, which makes real-time application monitoring and telemetry a challenge. To address telemetry, you need to make sure that you are collecting system/health data, application performance data, and domain-specific data so that you have complete visibility of the system in near real time.
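
As a minimal sketch (the event fields and emission target are illustrative), application performance data can be captured around each operation and emitted as a structured event for the monitoring platform to collect:

```python
import json
import sys
import time
from contextlib import contextmanager

# A minimal sketch: time each operation and emit a structured telemetry event.
# In practice these events would be collected by a telemetry agent or SDK
# (for example, Application Insights) rather than printed to stdout.
@contextmanager
def track_operation(name: str, **properties):
    start = time.perf_counter()
    succeeded = True
    try:
        yield
    except Exception:
        succeeded = False
        raise
    finally:
        event = {
            "type": "operation",
            "name": name,
            "duration_ms": round((time.perf_counter() - start) * 1000, 2),
            "succeeded": succeeded,
            **properties,                  # domain-specific data, e.g. order size
        }
        print(json.dumps(event), file=sys.stdout)

with track_operation("place_order", order_items=3):
    time.sleep(0.05)  # stand-in for real work
```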

Security

Security is an important pillar of a well-architected cloud-native application. Cloud-native applications are built to run in multiple data centers and serve customers across the globe, so they need to ensure that the clients accessing them are authenticated and authorized. These applications should identify every requestor and grant access based on their roles. Requests should be encrypted in transit using Transport Layer Security (TLS) to ensure data privacy. Teams also need to think about securing their data at rest with the capabilities available in the different storage platforms; for example, Azure Cosmos DB and Azure SQL Database allow us to encrypt data at rest.
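
As a minimal sketch (the claim and role names are illustrative, and token validation itself is assumed to be handled by the platform or an identity library), each request's identity can be checked against the roles required for an operation before access is granted:

```python
from dataclasses import dataclass, field

# A minimal sketch of role-based authorization. It assumes the incoming token
# has already been validated (signature, expiry, issuer) elsewhere; only the
# authorization decision is shown here.
@dataclass
class Principal:
    subject: str
    roles: set[str] = field(default_factory=set)

class ForbiddenError(Exception):
    pass

def authorize(principal: Principal, required_roles: set[str]) -> None:
    """Raise ForbiddenError unless the caller holds at least one required role."""
    if principal.roles.isdisjoint(required_roles):
        raise ForbiddenError(
            f"{principal.subject} lacks any of the roles {sorted(required_roles)}"
        )

caller = Principal(subject="user-123", roles={"orders.read"})
authorize(caller, required_roles={"orders.read", "orders.admin"})  # allowed
```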

The preceding 12 factors and the three additional factors we've looked at here have given us a thorough understanding of building cloud-native microservices with an API first mindset. In the cloud-native world, security and telemetry are no longer an afterthought. Approaching security with a zero-trust mindset is essential, while continuously collecting data to monitor system health gives you the ability to make the necessary adjustments to achieve higher availability for the systems.

Summary

In this chapter, we learned about various aspects of setting up the mindset for starting your microservices journey. We dissected the microservice architecture into its components and understood the design principles for building microservices. Adopting microservices has several benefits, which essentially help organizations deliver business value to their end users faster. We also discussed the challenges of adopting microservices and how they can be addressed.

Later, we discussed the role of leadership, where technology leaders have the responsibility of bringing clarity to drive the overall initiative. Technology leaders bring the right level of focus, empower teams by enabling autonomy, and invest in their growth, while architects work with stakeholders to understand business requirements and shape the architecture. Finally, we discussed how defining the core priorities of a business can affect different architecture decisions as you move through your microservices journey, as well as the importance of going cloud-native and adopting the twelve-factor app methodology to build microservices. With this knowledge, you can start planning for the adoption of microservices by investing in the right areas to build organizational capabilities.

In the next chapter, we will discuss the role of domain-driven design, the anti-patterns associated with it, and how to address those anti-patterns. We will also learn about common pitfalls and some best practices that are essential when building a microservice architecture.

Questions

  1. What are the design principles of microservices?
  2. What are the architecture components of microservices?
  3. How can you build highly scalable, maintainable, and portable microservices so that they're cloud native?

Key benefits

  • Implement the right microservices adoption strategy to transition from monoliths to microservices
  • Explore real-world use cases that explain anti-patterns and alternative practices in microservices development
  • Discover proven recommendations for avoiding architectural mistakes when designing microservices

Description

Microservices have been widely adopted for designing distributed enterprise apps that are flexible, robust, and composed of fine-grained services that are independent of each other. There has been a paradigm shift where organizations are now either building new apps on microservices or transforming existing monolithic apps into a microservices-based architecture. This book explores common microservices anti-patterns and the alternative practices and patterns that address their flaws. You'll identify mistakes caused by a lack of understanding when implementing microservices and cover topics such as organizational readiness to adopt microservices, domain-driven design, and the resiliency and scalability of microservices. The book further demonstrates the anti-patterns involved in re-platforming brownfield apps and designing distributed data architecture. You'll also focus on how to avoid communication and deployment pitfalls and understand cross-cutting concerns such as logging, monitoring, and security. Finally, you'll explore testing pitfalls and establish a framework to address isolation, autonomy, and standardization. By the end of this book, you'll have understood the critical mistakes to avoid while building microservices and the right practices to adopt early in the product life cycle to ensure the success of a microservices initiative.

Who is this book for?

This practical microservices book is for software architects, solution architects, and developers involved in designing microservices architecture and its development, who want to gain insights into avoiding pitfalls and drawbacks in distributed applications, and save time and money that might otherwise get wasted if microservices designs fail. Working knowledge of microservices is assumed to get the most out of this book.

What you will learn

  • Discover the responsibilities of different individuals involved in a microservices initiative
  • Avoid the common mistakes in architecting microservices for scalability and resiliency
  • Understand the importance of domain-driven design when developing microservices
  • Identify the common pitfalls involved in migrating monolithic applications to microservices
  • Explore communication strategies, along with their potential drawbacks and alternatives
  • Discover the importance of adopting governance, security, and monitoring
  • Understand the role of CI/CD and testing

Product Details

Publication date: Oct 29, 2021
Length: 306 pages
Edition: 1st
Language: English
ISBN-13: 9781801813495
Vendor: Microsoft

Table of Contents

Section 1: Overview of Microservices, Design, and Architecture Pitfalls
Chapter 1: Setting Up Your Mindset for a Microservices Endeavor
Chapter 2: Failing to Understand the Role of DDD
Chapter 3: Microservices Architecture Pitfalls
Chapter 4: Keeping the Replatforming Brownfield Applications Trivial
Section 2: Overview of Data Design Pitfalls, Communication, and Cross-Cutting Concerns
Chapter 5: Data Design Pitfalls
Chapter 6: Communication Pitfalls and Prevention
Chapter 7: Cross-Cutting Concerns
Section 3: Testing Pitfalls and Evaluating Microservices Architecture
Chapter 8: Deployment Pitfalls
Chapter 9: Skipping Testing
Chapter 10: Evaluating Microservices Architecture
Assessments
Other Books You May Enjoy

Customer reviews

Rating distribution: 4.6 out of 5 (14 ratings)
5 star: 64.3%
4 star: 35.7%
3 star: 0%
2 star: 0%
1 star: 0%

Valkyrea, Sep 05, 2023 (5 stars, Amazon verified review):
An incredible resource if you're looking to get into the world of microservices from the experts themselves!!

Calvin, Feb 03, 2022 (4 stars, Amazon verified review):
This book is for architects and developers who are involved in designing microservices and related architectures and who are implementing microservices design. This book is well-written, easy to read, and covers a wide range of up-to-date topics: architecture, microservices, event messaging, domain-driven design, etc. The philosophy of microservices is discussed, along with the various mindsets that are needed while adopting microservices and understanding the role of domain-driven design. An overview of data design is given, covering the pitfalls of wrong data design, using a single database versus multiple, and using sharding to scale the data problem. This book is a great guidebook for how to deploy microservices in an organization that plans to move from a monolithic implementation to a distributed method of deployment. The users are architects and developers from the microservices space. This is an excellent book for anyone who is interested in learning about microservices design patterns, who is planning to work on microservices in a new project, or who is looking to move their existing monolithic app to a microservices architecture design. I highly recommend reading this book before you think about the technical design and implementation of microservices.

Math_Ninja, Feb 02, 2022 (5 stars, Amazon verified review):
It is a good book for the cloud architect. The authors do a good job of describing the limitations and strengths of various architectures. In particular, the book is not tied to any particular technology or vendor. This book can be used as a tutorial or as a reference book for designing microservice architectures. I liked the discussion about performance regarding various choices.

M., Jan 20, 2022 (5 stars, Amazon verified review):

  1. Heavy focus on avoiding pitfalls so you don't have to learn microservices the hard way.
  2. Lays out the key principles and reasoning behind DDD.
  3. Walks you through real-world scenarios and how to execute a successful, secure deployment.
  4. Relevant for any software engineer or app developer working in or around a DevOps environment.

Shopaholic, Jan 11, 2022 (5 stars, Amazon verified review):
This is a well-organized and well-written book on microservices. Not your usual cup of tea, as it points to the pitfalls and reasons why some organizations fail to implement the microservices architecture correctly in the first place. A great deal for anybody who is diving into the world of microservices. Very well organized, and while there are not many illustrations, they are great when needed; the book brings out the most important concepts on the topic and how to avoid problems when implementing them. A highly recommended read from these authors!