Reviewing software architecture

The word architecture is old. Its origins go back to times when people built things with rudimentary tools, often with their own hands. Yet each generation repeatedly overcame the limitations of its era and constructed magnificent buildings that stand to this day: take a look at the Florence Cathedral and its dome, conceived by Filippo Brunelleschi – an excellent example of architecture.

Architects are more than ordinary builders who put things together without much thought. Quite the opposite – they are the ones who care the most about aesthetics, underlying structures, and design principles. Sometimes they play a fundamental role by pushing the limits of what is possible with the resources at hand. The Florence Cathedral, as already mentioned, proves that point.

I won't take this analogy too far, because software is not like a physical building. Although building architects and software architects share some similarities, the latter differ considerably because of the living, evolving nature of their craft. But we can agree that both share the same ideal: to build things right.

This ideal helps us understand what software architecture is. If we aim to build not just working software, but software that is well structured and easy to maintain, then the result can even be considered, to a certain degree, a piece of art because of the care and attention to detail we put into building it. We can take that as a noble definition of software architecture.

It's also important to state that a software architect's role should not be constrained to deciding how things should be made. Just as Filippo Brunelleschi laid bricks in the Florence Cathedral himself to prove his ideas were sound, a software architect should get their hands dirty to prove their architecture is good.

Software architecture should not be the fruit of one person's mind. Although a few people may urge others toward technical excellence by providing guidance and establishing the foundations, an architecture only evolves and matures through the collaboration and experience of everyone involved in the effort to improve software quality.

What follows is a discussion around the technical and organizational challenges we may encounter in our journey to create and evolve a software architecture. This will help us tackle the threat of chaos and indomitable complexity.

The invisible things

Software development is not a trivial activity. It demands considerable effort to become competent in any programming language, and an even greater effort to use that skill to build software that generates profit. Surprisingly, making profitable software is sometimes not enough on its own.

When we talk about profitable software, we're talking about software that solves real-world problems. In the context of large enterprises, it's software that meets business needs. Everyone who has worked in a large enterprise also knows that the client generally doesn't want to know how the software is built. They are interested in what they can see: working software that meets business expectations. After all, that's what pays the bills at the end of the day.

But the things that clients cannot see also matter. These are known as non-functional requirements – concerns related to security, maintainability, operability, and other capabilities. If adequate care is not taken, those things, unseen from the client's perspective, can compromise the software's purpose. That compromise can occur subtly and gradually, giving rise to several problems, including technical debt.

I've mentioned previously that software architecture is about doing things right, which means its concerns include both the seen and the unseen. For the things the client can see, it's essential to deeply understand the problem domain. That's where techniques such as Domain-Driven Design (DDD) can help us approach the problem, allowing us to structure the software in a form that makes sense not just to programmers but to everyone involved in the problem domain. DDD also plays a key role in shaping the unseen part by cohesively defining the underlying structures that allow us to solve client needs in a well-structured and maintainable manner.
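
To make this more concrete, here is a minimal, hypothetical Java sketch of what DDD-style structuring can look like: the business rule lives inside a domain entity expressed in the language of the problem domain, with no references to frameworks or databases. The Order, OrderId, and OrderItem names are illustrative only and are not taken from this book's sample project.

import java.util.ArrayList;
import java.util.List;
import java.util.Objects;
import java.util.UUID;

// Hypothetical value objects expressed in the domain's ubiquitous language.
record OrderId(UUID value) {}
record OrderItem(String sku, int quantity) {}

// Domain entity: the business rule is enforced here, not in a controller or DAO.
public class Order {
    private final OrderId id;
    private final List<OrderItem> items = new ArrayList<>();

    public Order(OrderId id) {
        this.id = Objects.requireNonNull(id);
    }

    public OrderId id() {
        return id;
    }

    public void addItem(OrderItem item) {
        if (item.quantity() <= 0) {
            throw new IllegalArgumentException("Quantity must be positive");
        }
        items.add(item);
    }

    public List<OrderItem> items() {
        return List.copyOf(items);
    }
}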

Technical debt

Coined by Ward Cunningham, technical debt is a term used to describe how much unnecessary complexity exists in software code. Such unnecessary complexity may also be referred to as cruft – that's the difference between the current code and how it would ideally be. We'll learn how technical debt can appear in a software project shortly.

Developing software that just works is one thing. You assemble code in a way you think is adequate to meet business needs, then package it and throw it into production. In production, the software meets the client's expectations, so everything is fine and life goes on. Some time later, another developer comes in to add new features to the software you started. Like you, this developer assembles code in a way they think is adequate to meet business needs, but there are things in your code they don't clearly understand. Hence, they add elements to the software in a slightly different manner than you would. The software makes its way into production, the customer is satisfied, and the cycle repeats.

Working software is what we can see in the previous scenario. What we cannot see so clearly is that the lack of common ground about how features should be added to or modified in the software leaves a gap that every developer fills in their own way whenever they don't know how to handle such changes. This gap leaves space for things such as technical debt to grow.

Reality very often pushes us into situations where we just cannot avoid technical debt. Tight schedules, poor planning, unskilled people, and, of course, the lack of a software architecture are some of the factors that contribute to the creation of technical debt. Needless to say, we should not believe that enforcing a software architecture will magically solve all our technical debt problems. Far from it – here, we're only tackling one facet of the problem. All the other factors will remain and can undermine our efforts to establish a sound software architecture.

Vicious cycle

Financial debt tends to keep growing if you don't pay it, and the bank and authorities can come after you and your assets if you don't pay in time. Contrary to its financial counterpart, technical debt doesn't necessarily grow if you don't pay it. What determines its growth is the rate and nature of software changes. Based on that, we can assume that frequent and complex changes have a higher potential to increase technical debt.

You always have the prerogative not to pay technical debt – sometimes, that's the best choice, depending on the circumstances – but you diminish your capacity to change the software as you do so. With higher technical debt, the code becomes more and more unmanageable, causing developers either to avoid touching the code at all or to find awkward workarounds to solve their issues.

I believe most of us have had, at least once, the unpleasant experience of maintaining a brittle, insanely complex system. In such scenarios, instead of spending time on things that add value to the software, we spend more time fighting technical debt to open space for new features. If we don't keep technical debt under control, one day it will no longer be worth adding new features to the debt-ridden system. That's when people decide to abandon the application, start a new one, and repeat the cycle. The effort to tackle technical debt should be motivated by breaking that cycle.

It's not for everyone

This zest for quality and correctness that emerges from any serious architectural undertaking is not always present. As pointed out in the Big Ball of Mud paper, there are scenarios where the most profitable software in a company is an absolute big ball of mud – software that has grown without any sense of order and is complicated to understand and maintain. Developers who dare to tackle the complexity posed by this kind of system are like warriors fighting a hydra. The refactoring effort required to impose any order on such complexity is sometimes not worth it.

The big ball of mud is not the only problem. There are also cultural and organizational factors that can undermine any software architecture effort. Very often, I've stumbled upon teammates who simply didn't care about architectural principles. In their minds, the least-effort path to deliver code to production is the norm to be followed. It's not hard to find this kind of person in projects with a high turnover of developers. Because there is no sense of ownership, there is no incentive to produce high-quality code.

Pushing the discipline of following a software architecture is hard. Both the technical team and management should be aligned on the advantages and implications of following such a discipline. It's important to understand that spending more time upfront on technical aspects that don't add much immediate value in terms of customer features may play a crucial role in the long term. All the effort is paid back with more maintainable software, relieving developers who no longer need to fight hydras, and giving managers a better position from which to meet business deadlines.

Before trying to promote, let alone enforce, any software architecture principle, it is advisable to assess the current environment to make sure there are no cultural or organizational factors working against the few who are trying to raise the bar toward better-developed systems.

Monolithic or distributed

There is a recurring discussion in the software community about how to organize a system's components and responsibilities. In the past, when expensive computing resources and network bandwidth were the constraints that shaped software architecture, developers tended to group plenty of responsibilities into a single software unit to optimize resource usage and avoid the network overhead of a distributed environment. But there is a tenuous line separating a maintainable, cohesive monolithic system from an entangled, hard-to-maintain one.

Crossing that line is a red flag: it shows the system has accumulated so many responsibilities and become so complex to maintain that any change poses a severe risk of breaking the entire software. I'm not saying that every monolith that grows becomes a mess. I'm trying to convey that the accumulation of responsibilities can cause serious problems for a monolithic system when that aggregation is not done with care. Apart from this responsibility issue, it's equally important to make sure the software is easy to develop, test, and deploy. If the software is too large, developers may have difficulty running and testing it locally. It can also have a serious impact on continuous integration pipelines, slowing down their compile, test, and deployment stages and ultimately compromising the feedback loop that is so crucial in a DevOps context.

On the other hand, once we recognize that a system has accumulated enough responsibilities, we can rethink the overall software architecture and break down the large monolith into smaller, more manageable – sometimes autonomous – software components that are often isolated in their own runtime environments. This approach saw strong adoption with Service-Oriented Architecture (SOA) and then with what can be called its evolution: the microservice architecture.

Both SOA and microservices can be considered different flavors of distributed systems. Microservice architecture, in particular, became viable mainly because computing and network resources are no longer as expensive as they used to be, bringing many benefits related to strong decoupling and faster software delivery. However, this does not come without cost: where we previously had to deal with complexity in just one place, the challenge is now to deal with complexity scattered across the network.

The hexagonal architecture proposed in this book can be applied to both monolithic and distributed systems. In a monolith, the application may be consumed by a frontend and, at the same time, consume data from a database or other data sources. The hexagonal approach can help us develop a more change-tolerant monolithic system that can even be tested without the frontend and the database. The following diagram illustrates a common monolithic system:

Figure 1.1 – The hexagonal architecture with a monolithic system
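
The testability claim can be illustrated with a minimal sketch, again using hypothetical names. The application core depends only on an output port (a plain Java interface); the real database adapter and an in-memory stub both implement that port, so tests can exercise the core without the database. The sketch reuses the Order and OrderId types from the earlier example.

import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

// Output port: declared by the application side, implemented by driven adapters.
public interface OrderRepository {
    void save(Order order);
    Optional<Order> findById(OrderId id);
}

// Driven (output) adapter for tests or local runs: no database required.
// A production adapter backed by JDBC or Hibernate would implement the same port.
class InMemoryOrderRepository implements OrderRepository {
    private final Map<OrderId, Order> store = new HashMap<>();

    @Override
    public void save(Order order) {
        store.put(order.id(), order);
    }

    @Override
    public Optional<Order> findById(OrderId id) {
        return Optional.ofNullable(store.get(id));
    }
}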

For distributed systems, we may be dealing with lots of different technologies. The hexagonal architecture shines in these scenarios because the nature of its ports and adapters allows the software to deal with constant technology changes. The following diagram shows a typical microservice architecture where we could apply hexagonal principles:

Figure 1.2 – The hexagonal architecture with a microservices system

One of the great advantages of microservice architecture is that you can use different technologies and programming languages to compose the system. We can develop a frontend application using JavaScript, some APIs with Java, and a data processing application with Python. The hexagonal architecture can help us in this kind of heterogeneous technological scenario.
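
The same reasoning applies on the driving side. In the hypothetical sketch below, a use case is exposed through an input port, and any delivery technology – a REST endpoint, a message consumer, or a simple CLI – can act as an input adapter calling that port without touching the core. In a heterogeneous microservice landscape, each Java service can follow this pattern regardless of which technologies its neighbors use.

import java.util.UUID;

// Input port: the use case the application exposes to the outside world.
public interface CreateOrderUseCase {
    OrderId create(String sku, int quantity);
}

// Application-side implementation, wired to the output port from the previous sketch.
class CreateOrderService implements CreateOrderUseCase {
    private final OrderRepository repository;

    CreateOrderService(OrderRepository repository) {
        this.repository = repository;
    }

    @Override
    public OrderId create(String sku, int quantity) {
        Order order = new Order(new OrderId(UUID.randomUUID()));
        order.addItem(new OrderItem(sku, quantity));
        repository.save(order);
        return order.id();
    }
}

// Driving (input) adapter: a trivial command-line entry point. A REST controller
// or a message consumer could call the same use case port in exactly the same way.
class CliOrderAdapter {
    private final CreateOrderUseCase createOrder;

    CliOrderAdapter(CreateOrderUseCase createOrder) {
        this.createOrder = createOrder;
    }

    void handle(String sku, int quantity) {
        OrderId id = createOrder.create(sku, quantity);
        System.out.println("Created order " + id.value());
    }
}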

Making decisions

All this discussion of software architecture concerns is relevant because, if we ignore them, we may undermine our capability to maintain and evolve the software in the long run. Of course, there are situations where we're not so ambitious about how sophisticated, maintainable, and feature-rich our software will be.

In such situations, it may not be worth all the time and effort to build things the right way, because what's needed is working software delivered as fast as possible. In the end, it's a matter of priorities. But we should be cautious not to fall into the trap of believing we can fix things later. Sometimes we have the money to do so, but sometimes we don't. Wrong decisions at the beginning of a project can cost us a high price in the long term.

The decisions we make regarding code structure and software architecture determine what is called internal quality. The degree to which software code is well organized and maintainable corresponds to its internal quality. On the other hand, how valuable and good a piece of software is from a user's perspective corresponds to its external quality. Internal and external quality are not directly connected; it's not difficult to find useful software with a messy code base.

The effort spent on internal quality should be seen as an investment whose return is not immediate or visible to the user. The return comes as the software evolves: value is perceived by constantly adding changes to the software without increasing the time and money required to make them, as the following pseudo-graph shows:

Figure 1.3 – Pseudo-graph showing the impact of changes

But how can we make the right decisions? That's a tricky question, because we often don't have enough information to support the decision-making that will lead us to a software architecture that best meets business needs. Most of the time, even the client doesn't know their own needs; that information generally emerges as the project evolves. Instead of making decisions upfront, a more sensible approach is to wait until we have enough information to decide with confidence. This approach naturally leads us to a software architecture that reflects these concerns: the lack of information and the need to accommodate changes as they occur.

That necessity, and also the capacity, to change systems is a crucial point in software design. If we spend too much effort on upfront design, we may end up with overengineered and possibly overpriced solutions. The opposite is also dangerous, because we risk increasing the cost of change by being careless about design. As pointed out in Extreme Programming Explained: Embrace Change, the resources spent on design should match a system's need to absorb changes at an acceptable pace without increasing the cost of processing such changes.

This book is concerned with a software architecture that allows us to postpone decisions by building change-tolerant applications that can cope with changes when decisions are finally made. But reality can be harsh sometimes, forcing us to make hurried decisions with scarce information. These hasty decisions can result in unpleasant consequences such as technical debt.

Now that we're aware of some of the problems related to software architecture, we're in a better position to explore possible solutions to mitigate those issues. To help us in that effort, let's start by looking into the fundamentals of hexagonal architecture.
