Practical Cloud-Native Java Development with MicroProfile
Develop and deploy scalable, resilient, and reactive cloud-native applications using MicroProfile 4.1

Product type: Paperback
Published in: Sep 2021
Publisher: Packt
ISBN-13: 9781801078801
Length: 404 pages
Edition: 1st Edition

Authors (5): Alasdair Nottingham, John Alcorn, David Chan, Emily Jiang, Andrew McCright

Table of Contents (18)

Preface
Section 1: Cloud-Native Applications
Chapter 1: Cloud-Native Applications
Chapter 2: How Does MicroProfile Fit into Cloud-Native Application Development?
Chapter 3: Introducing the IBM Stock Trader Cloud-Native Application
Section 2: MicroProfile 4.1 Deep Dive
Chapter 4: Developing Cloud-Native Applications
Chapter 5: Enhancing Cloud-Native Applications
Chapter 6: Observing and Monitoring Cloud-Native Applications
Chapter 7: MicroProfile Ecosystem with Open Liberty, Docker, and Kubernetes
Section 3: End-to-End Project Using MicroProfile
Chapter 8: Building and Testing Your Cloud-Native Application
Chapter 9: Deployment and Day 2 Operations
Section 4: MicroProfile Standalone Specifications and the Future
Chapter 10: Reactive Cloud-Native Applications
Chapter 11: MicroProfile GraphQL
Chapter 12: MicroProfile LRA and the Future of MicroProfile
Other Books You May Enjoy

Exploring cloud-native application architectures

Since 2019, there has been increasing discussion in the industry about the pros and cons of microservices as a cloud-native application architecture. This discussion has been driven by many microservice-related failures, and as a result, people are now asking whether some applications would be better off using different architectures. There has even been the start of a renaissance around the idea of building monoliths, after several years in which those kinds of applications were seen as an anti-pattern.

While it is attractive to think of cloud-native as just being a technology choice, it is important to understand how the development processes, organization structure, and culture affect the evolution of cloud-native applications, the system architecture, and any ultimate success. Conway's Law states the following:

Any organization that designs a system will produce a design whose structure is a copy of the organization's communication structure.

A simple way of thinking about this is that if your development organization is successful at building monoliths, it is unlikely to be successful at building microservices without some kind of reorganization. That doesn't mean every team wanting to do cloud-native should go out and reorganize; it means that you should understand your strengths and weaknesses when deciding which architecture to adopt, and you should be open to reorganizing if necessary.

This section discusses a number of the more popular cloud-native application architectures out there and the pros and cons of using them. Let's start with microservices.

Microservices

Although Netflix didn't invent the idea of microservices, their use of the architecture did popularize it. A single microservice is designed to do one thing. It doesn't, despite the name, mean that the service is small or lightweight – a single microservice could be millions of lines of code, but the code in the microservice has a high level of cohesion. A microservice would never handle ATM withdrawals and also sell movie tickets. Identifying the best way to decompose a cloud-native application into a series of well-designed microservices is not a simple task; different people might take different views on whether a deposit into and a withdrawal from a bank account warrant a single microservice or two.

Microservices usually integrate with each other via REST interfaces or messaging systems, although gRPC and GraphQL are growing in popularity. A web-facing microservice is likely to use a REST or GraphQL interface, but an internal one is more likely to use a messaging system such as Apache Kafka. Messaging systems are generally very resilient to network issues, since once the messaging system has accepted the message, it will store the message until it can be successfully processed.
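
As a rough sketch of what that looks like with MicroProfile (the resource, client, and path names below are made up for illustration, not taken from this book's sample application, and in practice each type would live in its own source file), a web-facing microservice typically exposes its API with JAX-RS, while a type-safe MicroProfile Rest Client interface can be used to call another service's REST API:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import org.eclipse.microprofile.rest.client.inject.RegisterRestClient;

    // A web-facing microservice exposing its single responsibility over REST.
    @Path("/accounts")
    public class AccountResource {

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        public Account getAccount(@PathParam("id") String id) {
            // Persistence is omitted; a real service would load the account from its own data store.
            return new Account(id, 0.0);
        }
    }

    // A type-safe client for calling another microservice's REST API; the base URL
    // is supplied through MicroProfile Config using the "payment-service" key.
    @RegisterRestClient(configKey = "payment-service")
    @Path("/payments")
    interface PaymentClient {

        @GET
        @Path("/{accountId}")
        @Produces(MediaType.APPLICATION_JSON)
        String latestPayment(@PathParam("accountId") String accountId);
    }

    // A minimal payload class; JSON-B binds it to and from JSON automatically.
    class Account {
        public String id;
        public double balance;

        public Account() { }

        public Account(String id, double balance) {
            this.id = id;
            this.balance = balance;
        }
    }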

The key promise of the microservice-based architecture is that each microservice can be independently deployed, updated, and scaled, allowing teams that own disparate microservices to work in parallel and make updates without the need to coordinate. Delivering on that promise is perhaps the biggest challenge with microservice architectures. It is relatively common for well-meaning developers who set out to build microservices to end up building a distributed monolith instead. This often occurs because of poorly defined and poorly documented APIs between services and insufficient acceptance testing, resulting in a lack of confidence that a single microservice can be updated without impacting the others. This is called a distributed monolith because you end up with the disadvantages of both monoliths and microservices while missing out on the benefits of each.
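
One MicroProfile feature that helps guard against this is MicroProfile OpenAPI, which generates a published API document (served by the runtime at /openapi) from annotations on the JAX-RS resources, giving consuming teams a contract to code and test against. The endpoint below is a hypothetical illustration, not an API from the book:

    import javax.ws.rs.GET;
    import javax.ws.rs.Path;
    import javax.ws.rs.PathParam;
    import javax.ws.rs.Produces;
    import javax.ws.rs.core.MediaType;
    import org.eclipse.microprofile.openapi.annotations.Operation;
    import org.eclipse.microprofile.openapi.annotations.responses.APIResponse;

    @Path("/orders")
    public class OrderResource {

        @GET
        @Path("/{id}")
        @Produces(MediaType.APPLICATION_JSON)
        @Operation(summary = "Look up an order",
                   description = "Returns the order with the given identifier.")
        @APIResponse(responseCode = "200", description = "The order was found")
        @APIResponse(responseCode = "404", description = "No order exists with that identifier")
        public String getOrder(@PathParam("id") String id) {
            // Retrieval logic omitted; the annotations feed the generated OpenAPI document.
            return "{\"id\":\"" + id + "\"}";
        }
    }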

In an ideal world, a development organization building microservices will align the microservices with an individual development team. This may be difficult if there are more microservices than development teams. As the number of microservices a team manages increases, more time will be spent managing the services rather than evolving them.

Monoliths

Monoliths are strongly associated with pre-cloud application architectures and are considered an anti-pattern for cloud-native applications. For that reason, it might seem strange that they appear in a discussion of cloud-native architecture. However, there are some reasons for including them.

The first is really just the reality that monoliths are the simplest kind of application to build. While the individual services cannot be independently scaled, as long as the monolith has been designed to scale, this may not be an issue.

The second is that there are a lot of monoliths out there and many enterprises are moving them to the cloud. MicroProfile provides additional APIs to retrofit many cloud-native behaviors into an existing app.

The trick with a monolith is ensuring that, despite the colocation of services in a single deployment artifact, the monolith can start quickly enough to enable dynamic scaling and to restart promptly if there is an application failure.
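
As an example of retrofitting cloud-native behavior, MicroProfile Health lets even a monolith report liveness and readiness checks that a platform such as Kubernetes can use to restart the process after a failure and to hold traffic back until startup has completed. This is a minimal sketch; the check names and the startup flag are made up for illustration:

    import javax.enterprise.context.ApplicationScoped;
    import org.eclipse.microprofile.health.HealthCheck;
    import org.eclipse.microprofile.health.HealthCheckResponse;
    import org.eclipse.microprofile.health.Liveness;
    import org.eclipse.microprofile.health.Readiness;

    // Liveness: if this ever reports DOWN, the platform restarts the process.
    @Liveness
    @ApplicationScoped
    public class AliveCheck implements HealthCheck {

        @Override
        public HealthCheckResponse call() {
            return HealthCheckResponse.named("alive").up().build();
        }
    }

    // Readiness: traffic is only routed once startup work (caches, connections) has finished.
    // In a real application this would live in its own source file.
    @Readiness
    @ApplicationScoped
    class ReadyCheck implements HealthCheck {

        private volatile boolean startupComplete = true; // set by the application's startup logic

        @Override
        public HealthCheckResponse call() {
            return HealthCheckResponse.named("ready").status(startupComplete).build();
        }
    }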

Typically, a small development organization will benefit from monoliths since there is only a single application to build, deploy, and manage.

Macroservices

Macroservices sit somewhere between a monolith and a microservice architecture and are also referred to as modular monoliths. With macroservices, the services are combined into a small number of monoliths that interoperate in the same way that a series of microservices would.

This provides many of the benefits of microservices but significantly simplifies the operations environment since there are fewer things to manage. If a macroservice has been written well, then individual services in that macroservice can be broken out if they would benefit from an independent life cycle. A well-known example of a macroservice is Stack Overflow (https://www.infoq.com/news/2015/06/scaling-stack-overflow/), which is famously a monolith except for the tagging capability, which is handled in a separate application due to its different performance needs. This split moves it from being a pure monolith into the realm of macroservices (although Stack Overflow uses the term monolith-plus).

This architecture can work especially well when a development organization is organized into a smaller number of teams than the number of services.

Function as a Service

Function as a Service (FaaS), often referred to as serverless, is an architecture where a service is created as a function that runs when an event occurs. The function is intended to be fast starting and fast executing and can be triggered by things such as HTTP requests or messages being received. FaaS promises that you can deploy the function to a cloud and have it started and executed by the event trigger, rather than having to keep the function running just in case. Typically, public cloud providers that support FaaS only charge for the time the function is running. This is very attractive if the event is relatively uncommon, since there is no financial cost in keeping a system running to wait for an event that rarely occurs.

The challenge with this architecture is that your function needs to be able to start quickly and usually has to finish executing quickly too; as a result, it isn't suitable for long-running processes. It also doesn't remove the server; the server is still there. Instead, it just shifts the cost from the developer to the cloud provider. If the cloud provider is a public cloud, then that is their problem, since they are charging for the function runtime, but if you are deploying to a private cloud, this becomes your problem, thereby removing some of the benefits.
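
To make this concrete, here is a minimal sketch of what such a function might look like in Java; it uses AWS Lambda's RequestHandler interface, and the class name and event shape are made up for illustration (other FaaS platforms have their own, broadly similar, programming models):

    import com.amazonaws.services.lambda.runtime.Context;
    import com.amazonaws.services.lambda.runtime.RequestHandler;
    import java.util.Map;

    // The platform loads this class, invokes handleRequest once per event, and only
    // bills for the time spent inside that invocation.
    public class OrderPlacedFunction implements RequestHandler<Map<String, Object>, String> {

        @Override
        public String handleRequest(Map<String, Object> event, Context context) {
            // Keep the work short-lived: read the event, act on it, and return.
            Object orderId = event.get("orderId");
            context.getLogger().log("Processing order " + orderId);
            return "processed:" + orderId;
        }
    }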

Event sourcing

Often, we think of a service as providing a REST endpoint that other services call. In fact, factor VII of the Twelve-Factor App (discussed in the next section) explicitly states this. The problem with this approach is that a REST call is implicitly synchronous and prone to issues if the service provider is running slowly or failing.

When providing an external API to a mobile app or a web browser, a REST API is often the best option. However, for services within an enterprise, there are many benefits to using a messaging system such as Kafka and communicating via asynchronous events instead. A messaging system that can guarantee that the message will be delivered allows the client and service to be decoupled, such that an issue with the service provider doesn't prevent the request from occurring; it just means it'll be processed later. A one-to-many event system makes it easy for a single service to trigger multiple different actions with a single message send. Different actions can be taken by different services receiving a copy of the message, and if new behavior is required, an additional service can receive the same message without the sending service having to change. A simple illustration of this might be an order event that can be processed by the payment service, the dispatch service, a reorder service, and a recommendation service that provides recommendations based on past purchases.
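
MicroProfile's standalone Reactive Messaging specification (covered in Chapter 10) supports exactly this style; the channel and service names below are hypothetical. Each interested service declares its own subscription to the order event, so adding the recommendation service later requires no change to the sender:

    import javax.enterprise.context.ApplicationScoped;
    import org.eclipse.microprofile.reactive.messaging.Incoming;

    // One of several independent consumers of the same "orders" channel; the channel is
    // typically mapped to a Kafka topic in the connector configuration.
    @ApplicationScoped
    public class DispatchService {

        @Incoming("orders")
        public void onOrderPlaced(String orderEvent) {
            // Arrange shipment for the order described in the event payload.
        }
    }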

One of the trends with cloud-native applications is that data is moved from a centralized data store closer to the individual services. Each service operates on data it holds, so if something happens to slow down the data store for one service, it doesn't have a knock-on effect on others. This means that new mechanisms are required to ensure data consistency. Using events to handle data updates helps with this, since a single event can be distributed to every service that needs to process the update independently. The updates can take effect even if the service is down when the update is triggered. Another advantage of this approach is that if the data store fails, it can be reconstructed by replaying all the events.

Having chosen the architecture (or architectures) for building your cloud-native application, the next step is to start building it, and to do that, it is a good idea to understand some of the industry best practices around cloud-native application development.
