Microservices

While the theoretical concepts discussed previously have been with us for decades, a few things have recently been changing very rapidly. There is an ever-increasing amount of complexity in software products. For example, in object-oriented programming, we might start off with a clean interface between two classes, but during a sprint, under extra time pressure, a developer might cut corners and introduce a coupling between the classes. Such technical debt is rarely paid back on its own; it accumulates until the initial design objective is no longer perceivable at all!

Another thing that's changing is that products are rarely built in isolation now; they make heavy use of services provided by external entities. A vivid example of this is the managed services offered by cloud environments such as Amazon Web Services (AWS). In AWS, there is a service for almost everything, from databases to building chatbots.

It has become imperative that we enforce separation of concerns. Interactions and contracts between components are becoming increasingly Application Programming Interface (API)-driven. Components don't share memory; hence, they can only communicate via network calls. Such components are called services. A service takes requests from clients and fulfills them. Clients don't care about the internals of the service, and a service can itself be a client of another service.
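As a minimal sketch of this idea in Go (the /quote endpoint, port, and payload are illustrative assumptions, not taken from any real system), a service exposes an HTTP API and fulfills requests over the network, while clients only ever see that API:

package main

import (
    "encoding/json"
    "log"
    "net/http"
)

// Quote is the contract (API payload) shared between the service and its clients.
type Quote struct {
    Symbol string  `json:"symbol"`
    Price  float64 `json:"price"`
}

// quoteHandler fulfills a client request; callers know nothing about its internals.
func quoteHandler(w http.ResponseWriter, r *http.Request) {
    q := Quote{Symbol: r.URL.Query().Get("symbol"), Price: 42.0}
    w.Header().Set("Content-Type", "application/json")
    json.NewEncoder(w).Encode(q)
}

func main() {
    http.HandleFunc("/quote", quoteHandler)
    log.Fatal(http.ListenAndServe(":8080", nil))
}

A client (which may itself be another service) would simply call http.Get("http://localhost:8080/quote?symbol=GOOG") and decode the JSON; no memory is shared between the two processes.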

A typical initial architecture of a system is shown here:

The system can be broken into three distinct layers:

  • Frontend (a mobile application or a web page): This is what users interact with; it makes network calls to the backend to fetch data and enable behavior.
  • Backend: This layer holds the business logic for handling specific requests. This code is generally supposed to be ignorant of frontend specifics (such as whether the call comes from an application or a web page).
  • A data store: This is the repository for persistent data.

In the early stages, when the team (or company) is young, people are developing in a greenfield environment, and the number of developers is small, things work wonderfully: there is good development velocity and quality. Developers pitch in to help other developers whenever there are issues, since everyone knows the system components at some level, even if they're not responsible for a given component. However, as the company grows, the product features start to multiply, and as the team gets bigger, four significant things happen:

  • The code complexity increases exponentially and the quality starts to drop. Dependencies spring up between the current code and the new features being developed, while bug fixes are made to the existing code. New developers lack the tribal knowledge of the team, and the cohesive structure of the code base starts to break down.
  • Operational work (running and maintaining the application) starts taking a significant amount of the team's time. This usually leads to the hiring of operations engineers (DevOps engineers) who can independently take over the operational work and be on call for any issues. However, developers then lose touch with production, and we often see classic issues such as it works on my setup but fails in production.
  • The product starts hitting scalability limits. For example, the database may not meet latency requirements under increased traffic, or an algorithm chosen for a key business rule may start exhibiting unacceptable latency. Things that worked well earlier suddenly start to fail, simply because of the increased volume of data and requests.
  • Developers start writing huge numbers of tests to serve as quality gates. However, these regression tests become very brittle as more and more code is added, and developer productivity falls off a cliff.

Applications in this state are called monoliths. Sometimes, being a monolith is not bad (for example, when there are stringent performance or latency requirements), but generally, the costs of staying in this state impact the product very negatively. One key idea that has become prevalent for enabling software to scale is microservices; the paradigm is more generally known as service-oriented architecture (SOA).

The basic concept of a microservice is simple: it's a standalone application that does one thing only, and does that one thing well. The objective is to retain the simplicity, isolation, and productivity of the early app. A microservice cannot live alone; no microservice is an island. It is part of a larger system, running and working alongside other microservices to accomplish what would normally be handled by one large standalone application.

Each microservice is autonomous, independent, self-contained, and individually deployable and scalable. The goal of microservice architecture is to build a system composed of such microservices.

The core difference between a monolithic application and microservices is this: a monolithic application contains all features and functions within one application (code base) deployed at the same time, with each server hosting a complete copy of the entire application, whereas a microservice contains only one function or feature and lives in a microservice ecosystem alongside other microservices:

Monolithic architecture

Here, there is one deployable artifact, built from one application code base that contains all of the features. Every machine runs a copy of the same code base. The database is shared, which usually leads to non-explicit dependencies (Feature A requires Feature B to maintain Table X using a specific schema, but nobody told the Feature B team!).

Contrast this with a microservices application:

Microservices-based architecture

Here, in its canonical form, every feature is itself packaged as a service (a microservice, to be specific). Each microservice is individually deployable and scalable, and has its own separate database.

To summarize, microservices bring a lot to the table:

  • They allow us to use the componentization strategy (that is, divide and rule) more effectively, with clear boundaries between components.
  • They give us the ability to use the right tool for each job in each microservice.
  • They make testing easier.
  • They improve developer productivity and feature velocity.

The challenges for microservices – efficiency

A non-trivial product built with microservices will have tens (if not hundreds) of microservices, all of which need to cooperate to provide higher levels of value. A key challenge for this architecture is deployment: how many machines do we need?

Moore's law refers to an observation made by Intel co-founder Gordon Moore in 1965. He famously noticed that the number of transistors per square inch on integrated circuits had doubled every year since their invention, and predicted that it would continue to do so.

This law has more or less held true for more than 40 years, which means that high-performance hardware has become a commodity. For many companies, throwing hardware at a problem has been an efficient solution. With cloud environments such as AWS, this is even more the case; one can literally get more horsepower just by pressing a button.

However, with the microservices paradigm, it is no longer possible to remain ignorant of efficiency or cost. Microservices number in the tens or hundreds, and each service has multiple instances.

Besides deployment, another efficiency challenge is the developer setup: a developer needs to be able to run multiple services on their laptop in order to work on a feature. While they may be making changes in only one service, they still need to run mocks or sprint-branch versions of the others so that they can exercise the code.
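In Go, one common way to keep the local setup light is to stand in for a dependent microservice with an in-process fake built on the standard net/http/httptest package; the inventory endpoint and payload below are invented purely for illustration:

package main

import (
    "fmt"
    "io"
    "net/http"
    "net/http/httptest"
)

func main() {
    // A fake "inventory" microservice: it runs in-process, so there is
    // nothing extra to deploy on the developer's laptop.
    inventory := httptest.NewServer(http.HandlerFunc(
        func(w http.ResponseWriter, r *http.Request) {
            fmt.Fprint(w, `{"sku":"ABC-1","in_stock":true}`)
        }))
    defer inventory.Close()

    // The service being developed talks to the fake exactly as it would
    // talk to the real dependency, via an injected base URL.
    resp, err := http.Get(inventory.URL + "/stock/ABC-1")
    if err != nil {
        panic(err)
    }
    defer resp.Body.Close()
    body, _ := io.ReadAll(resp.Body)
    fmt.Println(string(body))
}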

A solution that immediately comes to mind is: can we co-host microservices on the same machine? To answer this, one of the first things to consider is the language runtime. In Java, for example, each microservice needs a separate JVM process in order to keep the code segregated. However, the JVM tends to be pretty heavy in terms of resource requirements, and worse, those requirements can spike, letting one JVM process cause others to fail by hogging resources.

Another thing to consider about the language is its concurrency primitives. Microservices are often I/O-bound and spend a lot of time communicating with each other, and these interactions are often parallel. If we were to use Java, then almost everything parallel needs a thread (albeit from a thread pool). Threads in Java are not lean; each typically reserves about 1 MB of memory for its stack and housekeeping data. Hence, efficient thread usage becomes an additional constraint when writing parallel code in Java. Other things to worry about include sizing the thread pools, which in many situations degenerates into a trial-and-error exercise.

Thus, although microservices are language-agnostic, some languages are better suited to them, or have better support for them, than others. One language that stands out in terms of friendliness to microservices is Golang. It's extremely frugal with resources, lightweight, very fast, and has fantastic support for concurrency, which is a powerful capability when running across several cores. Go also has a very powerful standard library for writing web services (as we shall see for ourselves slightly further down the line).
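As a small illustration of that concurrency support (the health-check URLs below are placeholders for other microservices), fanning out parallel I/O-bound calls needs only goroutines and a WaitGroup, with each goroutine starting at a few kilobytes of stack rather than a megabyte-sized thread:

package main

import (
    "fmt"
    "net/http"
    "sync"
    "time"
)

// fetch calls one downstream service and reports its status.
func fetch(url string, wg *sync.WaitGroup) {
    defer wg.Done()
    client := &http.Client{Timeout: 2 * time.Second}
    resp, err := client.Get(url)
    if err != nil {
        fmt.Println(url, "error:", err)
        return
    }
    defer resp.Body.Close()
    fmt.Println(url, "->", resp.Status)
}

func main() {
    // Placeholder addresses; in a real system these would be the other
    // microservices this one depends on.
    urls := []string{
        "http://localhost:8081/health",
        "http://localhost:8082/health",
        "http://localhost:8083/health",
    }
    var wg sync.WaitGroup
    for _, u := range urls {
        wg.Add(1)
        go fetch(u, &wg) // one lightweight goroutine per parallel call
    }
    wg.Wait()
}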

The challenges for microservices – programming complexity

When working in a large code base, local reasoning is extremely important. This refers to the ability of a developer to understand the behavior of a routine by examining the routine itself, rather than examining the entire system. It is an extension of what we saw previously: compartmentalization is key to managing complexity.

In a single-threaded system, when you're looking at a function that manipulates some state, you only need to read the code and understand the initial state. In a multi-threaded system, by contrast, any arbitrary thread can potentially interfere with the execution of the function (including deep inside a library you don't even know you're using!). Isolated threads are of little use; threads need to talk to each other, and that is where very risky things can happen. Hence, understanding a function means not just understanding the code in the function, but also exhaustively reasoning about all the possible interactions that can mutate the function's state.
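A contrived Go sketch (not an example from the book) makes the point concrete: reading deposit on its own tells you nothing about what the other goroutines may be doing to the shared balance at the same moment:

package main

import (
    "fmt"
    "sync"
)

var balance int // shared state, reachable from any goroutine

// deposit looks harmless in isolation, but its correctness depends on
// every other goroutine that can touch balance concurrently.
func deposit(amount int, wg *sync.WaitGroup) {
    defer wg.Done()
    balance += amount // read-modify-write: not atomic, races with other deposits
}

func main() {
    var wg sync.WaitGroup
    for i := 0; i < 1000; i++ {
        wg.Add(1)
        go deposit(1, &wg)
    }
    wg.Wait()
    // Frequently prints less than 1000; running with `go run -race` flags the race.
    fmt.Println("balance =", balance)
}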

It's a well-known observation that human beings can juggle only about seven things at a time. In a big system, where there might be millions of functions and billions of possible interactions, the lack of local reasoning can be disastrous.

Synchronization primitives, such as mutexes and semaphores, do help, but they come with their own baggage, including the following issues:

  • Deadlocks: Two threads requesting resources in slightly different orders can cause both to block (a minimal Go sketch of this appears after this list)
  • Priority inversion: A high-priority process waits on a low-priority, slow process
  • Starvation: A process holds a resource for much more time than other, equally important processes
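
Here is the minimal Go sketch of the deadlock case promised above (deliberately contrived): two goroutines acquire the same two mutexes in opposite orders and block each other forever:

package main

import (
    "fmt"
    "sync"
    "time"
)

func main() {
    var a, b sync.Mutex

    go func() {
        a.Lock() // goroutine 1: takes a, then wants b
        time.Sleep(10 * time.Millisecond)
        b.Lock()
        fmt.Println("goroutine 1 done") // never reached
        b.Unlock()
        a.Unlock()
    }()

    go func() {
        b.Lock() // goroutine 2: takes b, then wants a -- the opposite order
        time.Sleep(10 * time.Millisecond)
        a.Lock()
        fmt.Println("goroutine 2 done") // never reached
        a.Unlock()
        b.Unlock()
    }()

    time.Sleep(time.Second)
    fmt.Println("main: both goroutines are still blocked on each other")
}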

In the next section, we will see how Golang helps us to overcome these challenges and adopt microservices in the true spirit of the idea, without worrying about efficiency constraints or increased code complexity.
